AIPGF Practitioner: Values, Behaviours, and Culture

Try 10 focused AIPGF Practitioner questions on Values, Behaviours, and Culture, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: AIPGF Practitioner
  • Topic area: Values, Behaviours, and Culture
  • Blueprint weight: 12%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Values, Behaviours, and Culture for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 12% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Values, Behaviours, and Culture

A retail bank’s customer-complaints team wants to use a third-party GenAI tool to “see what themes emerge” from 18 months of call transcripts. The product manager frames it as curiosity-driven exploration, but the transcripts include names, account references, and sensitive vulnerability notes. The vendor tool is not yet onboarded to the bank’s approved tool list, and no audit trail for prompts/outputs has been agreed. The team says they need insights in 2 weeks.

Which governance action best resolves the value conflict responsibly?

  • A. Proceed with the vendor tool using least-privilege access and remind staff not to paste secrets
  • B. Require an AI Assistance Plan and run exploration only on de-identified/synthetic data in an approved environment before any vendor use
  • C. Allow the upload of full transcripts but add a disclaimer that outputs are advisory and must be reviewed
  • D. Delay all analysis until after a full pilot delivers measurable benefits, then seek approvals

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: This is a curiosity-versus-risk conflict where sensitive data and weak auditability make “quick exploration” unsafe. The responsible resolution is to keep the learning objective but change the method: document intended AI use, assign accountability, and use de-identified/synthetic data in an approved setting before considering any third-party tool handling real transcripts. This aligns with Human-Centricity and Transparency while enabling progress.

Value conflicts often arise when teams want rapid insight (curiosity, speed, innovation) but the context raises risk (sensitive data, trust, auditability). Here, the decisive factor is that exploratory use would expose highly sensitive customer information to a non-onboarded vendor tool without agreed logging, which undermines accountability and transparency.

A proportionate AIPGF response is to preserve the goal (finding themes) while reducing risk and increasing governability:

  • Capture purpose, scope, data types, roles, and controls in an AI Assistance Plan
  • Use de-identified or synthetic transcripts (or an approved internal environment) for exploration
  • Only consider vendor processing after governance approval and audit/logging requirements are met

The key takeaway is to redirect experimentation into a controlled, auditable pathway rather than “learning first, govern later.”

It preserves learning value while reducing harm by preventing sensitive data exposure and establishing accountable, auditable conditions for AI use.


Question 2

Topic: Values, Behaviours, and Culture

A claims operations team is piloting a GenAI assistant that drafts claim summaries inside their case management workflow. Adoption is low and users report they do not trust the outputs.

Exhibit: Activation gate checklist (excerpt)

Activation Gate — GenAI Claims Assistant (pilot)
- Data sources approved & documented? NO (ad hoc CSV exports)
- Data quality checks in place? NO (3% record mismatch in pilot)
- Integration security review passed? NO (unapproved browser plug-in)
- User feedback: "We retype values to be safe"; adoption 18%
Decision: Gate owner requests action before scale-up

Based on the exhibit, what is the best next governance action?

  • A. Stop scale-up, assign accountable owners to fix data quality and approved integrations, then re-run the gate with evidence
  • B. Proceed to scale-up and focus on change management training to overcome resistance
  • C. Approve the browser plug-in as a temporary exception to reduce manual retyping and improve adoption
  • D. Remove human review steps to increase throughput and revisit data issues after users adopt the tool

Best answer: A

What this tests: Values, Behaviours, and Culture

Explanation: The exhibit shows core blockers to responsible AI use: undocumented data sourcing, known data mismatches, and an unapproved integration that has not passed security review. In AIPGF terms, the Activation gate should not be passed until these issues are resolved with clear accountabilities and objective evidence. Addressing these root causes is also the most direct way to reduce resistance driven by low trust and rework.

This is an Activation-stage governance decision: the gate checklist is evidence that foundational controls for responsible use are missing. When data provenance/quality and tool integration assurance are not in place, users compensate (retyping “to be safe”), which drives resistance and low adoption.

The appropriate action is to hold scale-up and remediate the blockers by:

  • assigning accountable owners for data approval/lineage and data quality checks
  • routing the integration through the approved security/architecture path (or choosing an approved integration)
  • updating the AI Assistance Plan/decision log with the new controls and re-running the gate with evidence
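
The hold-and-remediate logic above can be sketched as a minimal gate check. The criteria names and the function are illustrative, not official AIPGF artifacts; the point is that the gate passes only when every criterion has evidence behind it.

```python
# Minimal sketch of an activation-gate decision: scale-up proceeds only when
# every criterion is evidenced. Criteria names are illustrative.

def gate_decision(criteria: dict) -> str:
    """Return the gate outcome: pass only when every criterion is evidenced."""
    failed = [name for name, evidenced in criteria.items() if not evidenced]
    if failed:
        return "HOLD: remediate " + ", ".join(failed)
    return "PASS: proceed to scale-up"

# State from the exhibit: three criteria still fail, so the gate holds.
pilot_gate = {
    "data_sources_approved": False,
    "data_quality_checks": False,
    "integration_security_review": False,
}
print(gate_decision(pilot_gate))
```

Re-running the gate means re-evaluating the same criteria with updated evidence, not relaxing them.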

Training and incentives can help adoption, but they cannot substitute for fixing the underlying data and integration risks that undermine trust.

The gate explicitly fails on data provenance/quality and unapproved integration, so responsible use requires remediation and re-approval before scaling.


Question 3

Topic: Values, Behaviours, and Culture

A product team is rolling out a GenAI assistant to help customer-service agents draft replies. You are asked to review four draft “team behaviours” statements intended to reinforce the AIPGF Core Values (Accountability, Sensibility, Collaboration, Curiosity, Continuous Improvement).

Which statement is INCORRECT and should be removed because it undermines the purpose of the AIPGF Core Values?

  • A. Invite frontline feedback on AI responses and prompt improvements.
  • B. Assign decision owner for AI outputs and keep decision log.
  • C. Prioritize speed; avoid challenging outputs or documenting rationale.
  • D. Hold cross-functional reviews to balance benefit, risk, and usability.

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: The AIPGF Core Values exist to shape day-to-day behaviours that make AI delivery trustworthy, auditable, and continuously learnable. A statement that discourages challenging AI outputs and avoids documenting rationale directly weakens accountability, sensible judgment, and learning. This is an AI governance anti-pattern because it increases the chance of unchecked errors and reduces transparency of decisions.

The AIPGF Core Values are meant to create a culture where AI-assisted work is delivered responsibly and can be trusted and improved over time. In this scenario, the behaviours should reinforce: clear ownership and decision rights (Accountability), context-appropriate judgment and proportional controls (Sensibility), shared responsibility across disciplines (Collaboration), permission to question outputs and learn (Curiosity), and ongoing refinement based on evidence (Continuous Improvement).

A “don’t challenge outputs” and “don’t document rationale” stance blocks inquiry, removes decision traceability, and makes it harder to detect, explain, and correct AI-related issues. The key takeaway is that speed goals must be balanced with responsible, explainable, and improvable ways of working.

It suppresses questioning and traceability, conflicting with Curiosity, Accountability, and sensible governance for trustworthy outcomes.


Question 4

Topic: Values, Behaviours, and Culture

A service team is in the Activation stage building a GenAI-assisted workflow to draft responses to customer complaints. To “move fast,” the project lead decides not to set any team norms for responsible AI use (no confidentiality rules, no prompt hygiene guidance, and no peer review of AI-assisted text), and staff paste full complaint emails (names, addresses, account details) into a public GenAI tool.

What is the most likely near-term impact of this governance omission?

  • A. Faster value realization because outputs need less human review
  • B. Higher confidentiality exposure and an immediate auditability gap
  • C. Loss of customer trust that becomes visible next year
  • D. Lower model risk as the tool learns from real complaints

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: Skipping team norms for confidentiality, prompt hygiene, and review creates immediate operational risk: sensitive customer data may be disclosed to an uncontrolled external system. It also reduces transparency and auditability because the team cannot demonstrate what was shared, how outputs were checked, or who approved AI-assisted deliverables.

Establishing team norms for responsible AI use is a practical control that operationalizes Human-Centricity and Transparency during delivery. In this scenario, the omission directly affects two near-term outcomes: (1) confidentiality risk, because staff are sharing identifiable customer information with a public tool, and (2) auditability, because there is no consistent practice to review, record, and evidence AI-assisted work.

Effective norms typically include:

  • Confidentiality rules (what data can/cannot enter prompts)
  • Prompt hygiene (redaction, minimization, approved templates)
  • HITL review and sign-off for AI-assisted text
  • Basic logging (what was used, by whom, for what purpose)
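
Prompt hygiene can be made concrete with a small redaction step before any text enters a prompt. The sketch below is illustrative only: real controls would use approved templates and data-classification rules, and the two regex patterns here are deliberately simplistic.

```python
import re

# Hedged sketch of a prompt-hygiene helper: strip obvious identifiers from
# text before it is sent to a GenAI tool. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),  # naive account-number guess
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Complaint from jane.doe@example.com about account 12345678."
print(redact(sample))
```

A norm like this is cheap to adopt and gives reviewers something auditable: the redacted prompt, not the raw customer record.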

The key takeaway is that missing norms usually shows up quickly as uncontrolled data handling and weak evidence for assurance, not as delayed strategic effects.

Without confidentiality and prompt hygiene norms, sensitive data can be exposed immediately and the team cannot evidence controlled, reviewed AI use.


Question 5

Topic: Values, Behaviours, and Culture

A bank is rolling out a GenAI drafting assistant for customer support agents. Adoption is slowing because agents fear being monitored and some supervisors want to “move fast” to hit a quarterly target. Internal audit requires an approval trail for changes to prompts/guardrails and evidence that Human-in-the-Loop checks remain in place for customer-facing messages.

Which change-management action SHOULD the bank AVOID?

  • A. Hold agent co-design sessions to refine workflows and guardrails
  • B. Use a fast-track change route with documented decision rights
  • C. Provide role-based training plus clear do/don’t usage guidance
  • D. Permit temporary use of unapproved GenAI tools to maintain momentum

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Maintaining momentum in AI-enabled change means reducing friction while keeping governance controls intact. Allowing unapproved tools creates unmanaged, unauditable AI use and undermines the required approval trail and Human-in-the-Loop safeguards. The better approach is to accelerate through controlled pathways and focus on adoption enablers.

The core change-management goal in AIPGF is to sustain adoption while preserving governance controls (decision rights, audit trail, and HITL where required). In this scenario, resistance is real, but the bank’s constraints are explicit: prompt/guardrail changes must be approved and traceable, and customer-facing outputs must retain HITL. Encouraging or tolerating “temporary” unapproved GenAI use breaks Transparency (no reliable decision logging/evidence), weakens Human-Centricity (users are pushed into unmanaged risk), and erodes control effectiveness.

Practical momentum-preserving actions that still respect controls include:

  • Co-design and targeted communications to address fear and clarify intent
  • Training and clear usage boundaries (what is allowed, when to escalate)
  • Fast-track change control that still records approvals and rationale

Key takeaway: speed should be achieved by streamlining controlled governance, not bypassing it.

It creates shadow AI use that bypasses required controls, auditability, and HITL assurances.


Question 6

Topic: Values, Behaviours, and Culture

Your service desk project is in Activation. So far, staff have used a GenAI assistant only to draft customer emails, with humans sending all responses.

The vendor releases an upgrade: an “auto-resolve” agent that can execute actions in the ticketing system (reset passwords and issue refunds up to $50) based on chat context. The product owner wants to enable it this sprint to reduce backlog.

Which governance action is the best AIPGF-aligned adaptation to this tool change?

  • A. Publish an internal transparency note explaining the upgrade and remind staff to watch for hallucinations
  • B. Rely on the vendor’s assurance pack and proceed, since the agent’s refund limit is low
  • C. Defer changes until Evaluation and focus this sprint on team culture workshops about responsible AI
  • D. Re-tailor governance for the new autonomy, update the AI Assistance Plan and RACI, and require HITL approval for agent-executed actions before go-live

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: The decisive change is the move from AI-assisted drafting to agentic execution in an operational system. AIPGF’s pragmatic sensibility and Adaptability principle require re-tailoring governance when tooling capabilities change, not just communicating or training. Updating the AI Assistance Plan and decision rights, and introducing appropriate HITL control, provides defensible go/no-go evidence for Activation.

When an AI tool shifts from supporting people to taking actions in systems, the project’s governance needs change because accountability and risk exposure change. Under AIPGF, you adapt governance pragmatically by treating this as a material change: re-check the tailored controls, clarify who is accountable for outcomes, and set boundaries for autonomy.

A proportionate adaptation in Activation typically includes:

  • Updating the AI Assistance Plan to reflect the new capability, limits, and monitoring
  • Updating RACI/decision rights for agent-triggered actions (e.g., refunds)
  • Adding HITL approval or step-up controls (e.g., approve before execution; tighter thresholds)
  • Recording the decision and re-confirming go/no-go evidence for the changed scope
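
An approve-before-execute HITL control can be sketched in a few lines. The names (`execute_refund`, `audit_log`) are illustrative, not a vendor API; the point is that every agent-triggered action requires a recorded human approval while the scope is in Activation.

```python
# Hedged sketch of a HITL guard for agent-executed actions: nothing executes
# without human approval, and every decision is logged for audit.

def execute_refund(amount: float, human_approved: bool, log: list) -> bool:
    """Execute only when a human has approved; log every decision."""
    allowed = human_approved  # Activation scope: no autonomous execution yet
    log.append({"action": "refund", "amount": amount, "approved": allowed})
    return allowed

audit_log: list = []
execute_refund(25.0, human_approved=True, log=audit_log)   # proceeds
execute_refund(25.0, human_approved=False, log=audit_log)  # blocked, still logged
```

Later stages might relax the guard to a threshold or sampling model, but that relaxation is itself a governed change with its own decision record.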

This is stronger than “communicate and train” because it changes the control environment to match the new operational behavior.

The upgrade materially changes decision execution risk, so governance must be re-tailored with explicit decision rights, HITL controls, and updated evidence before activation.


Question 7

Topic: Values, Behaviours, and Culture

In the APMG AI Project Governance Framework (AIPGF) Core Values, what best describes Curiosity?

  • A. Exploring AI opportunities through controlled experiments with clear guardrails
  • B. Avoiding AI adoption until all risks can be eliminated
  • C. Making AI use visible and explainable to stakeholders
  • D. Changing governance controls frequently to match new technologies

Best answer: A

What this tests: Values, Behaviours, and Culture

Explanation: In AIPGF, Curiosity is about learning and innovation, but done responsibly. It emphasizes controlled experimentation—such as pilots or sandboxes—within defined guardrails so teams can explore value while managing risk and accountability.

Curiosity, as an AIPGF Core Value, encourages teams to seek AI-enabled improvements and test new ideas, while keeping experimentation governed and proportionate to the risk. It is not “try anything fast”; it is deliberate learning with boundaries.

In practice this typically means:

  • defining the hypothesis and success criteria
  • using a limited-scope pilot/sandbox and appropriate controls
  • capturing outcomes and decisions (so learning is reusable and auditable)
  • scaling only when evidence supports benefit and acceptable risk

The key takeaway is that Curiosity combines innovation with disciplined, safe-to-fail experimentation rather than unmanaged exploration or avoidance.

Curiosity is responsible exploration of AI innovations via safe-to-fail, governed experimentation that generates learning without unmanaged risk.


Question 8

Topic: Values, Behaviours, and Culture

You are the project manager in the Activation stage for a GenAI-assisted customer-support knowledge base refresh. The sponsor wants a board-ready summary of “top 10 customer pain points and recommended fixes” in 48 hours, using ticket text that may include personal data and sarcasm.

You plan to use GenAI to cluster themes and draft the summary. Which approach best demonstrates sensibility behaviours while balancing speed, ethics, and required governance controls?

  • A. Publish the GenAI summary to meet the deadline
  • B. Pause delivery until a full external audit is completed
  • C. Ban GenAI and do manual thematic analysis only
  • D. Draft with GenAI, then validate sources and SME-review; log use

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: The best balance is to use GenAI for acceleration while explicitly sense-checking outputs before they shape decisions. That means HITL review by domain owners, validation against authoritative data, and documenting AI use and limitations for transparency and auditability. This applies Human-Centricity and Transparency without creating unnecessary delay.

Sensibility behaviours in AI-assisted delivery mean you treat AI output as a draft hypothesis, not an answer, and you actively reduce overconfidence risk. In this scenario, the deliverable will influence prioritisation and may be based on messy, sensitive ticket text, so governance must ensure accuracy, bias awareness, and traceability while still meeting the 48-hour constraint.

A practical, stage-appropriate approach is:

  • Use GenAI to accelerate clustering and drafting.
  • Validate themes against reliable evidence (ticket metrics, known defect tags, sampled raw tickets).
  • Apply HITL review by SMEs/product owners to challenge plausibility and spot hallucinations.
  • Record AI use (AI Assistance Plan/decision log) and communicate limitations for Transparency.

Key takeaway: optimize for trustworthy, explainable output—not just speed or maximal control.

It uses GenAI efficiently but adds HITL validation, provenance checks, and transparent documentation before decisions are influenced.


Question 9

Topic: Values, Behaviours, and Culture

A product team asks you to “sign off the AIPGF pack” for a GenAI feature, and they want to copy last quarter’s templates to save time. The request is vague: they have not clarified how outputs will be used in operations, who will rely on them, or what “good” looks like.

To embrace the spirit of AIPGF (rather than rigidly following templates), what should you ask or obtain FIRST?

  • A. Which model provider and prompt technique they plan to implement
  • B. Which AIPGF template set and version they want you to use
  • C. Intended use, decision authority, and risk tier for the feature
  • D. Whether the pack should be a full document or a slide deck

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: Embracing the spirit of AIPGF means starting with context so governance is proportionate and value-protecting, not “checkbox” compliance. Clarifying intended use, who makes/owns decisions, and the project’s risk tier determines which artifacts, HITL controls, and assurance evidence are actually needed. Only then do templates become helpful accelerators rather than a substitute for thinking.

The spirit of AIPGF is pragmatic sensibility: apply principles and core values to the real situation, then tailor governance accordingly. When a request is underspecified, the first move is to clarify how the AI will be used, what decisions it influences, and who is accountable for those decisions (and therefore what risk tier and controls are appropriate). That context drives what “good” means (acceptance criteria), what transparency is required (disclosures and logging), and what human oversight is needed (HITL), rather than blindly populating a template.

A useful first clarification set is:

  • Intended use and user groups (including foreseeable misuse)
  • Decision authority and accountability for AI-assisted outputs
  • Provisional risk tier to right-size gates and evidence
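
One way to picture a provisional tier is as a first-pass heuristic over the context facts. AIPGF does not prescribe this mapping; the function below is a hedged sketch of how intended use and decision authority might feed an initial tier before controls are tailored.

```python
# Illustrative (non-official) first-pass risk-tier heuristic based on two of
# the context facts clarified above: data sensitivity and decision execution.

def provisional_tier(handles_personal_data: bool, executes_actions: bool) -> str:
    """Return a rough starting tier; real tiering would weigh more factors."""
    if handles_personal_data and executes_actions:
        return "high"
    if handles_personal_data or executes_actions:
        return "medium"
    return "low"

print(provisional_tier(handles_personal_data=True, executes_actions=False))
```

The tier is provisional precisely so the team can right-size gates and evidence early, then revise as the intended use becomes clearer.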

Templates support consistency, but they should follow the context, not replace it.

These context facts let you tailor proportionate controls and evidence, which is the framework’s intent rather than template completion.


Question 10

Topic: Values, Behaviours, and Culture

A product team is under pressure to “use GenAI” to improve customer support response times. Three different groups are pushing different vendor tools, and adoption is stalling because stakeholders remember a recent automation initiative that delivered little value.

As the AIPGF practitioner, what is the best next governance action to avoid tool chasing and regain momentum?

  • A. Implement enhanced audit logging and decision logs for all AI outputs before any further work proceeds
  • B. Run a time-boxed vendor proof-of-concept bake-off and pick the tool with the best demo performance
  • C. Mandate organization-wide GenAI training so teams build confidence and adoption increases
  • D. Create an AI Assistance Plan that defines the objectives, in-scope tasks, success measures, and decision boundaries before selecting a tool

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: The problem is not a lack of tools; it is a lack of clear, shared objectives for AI assistance. An AI Assistance Plan anchors the effort on measurable outcomes, defines where AI is and is not used, and sets boundaries for human accountability. That clarity reduces resistance and stops wasted effort on competing tools.

When adoption stalls and teams argue over tools, the strongest control is to re-anchor the work on agreed outcomes and guardrails. In AIPGF terms, set clear objectives for AI assistance (what work it will help with, what “good” looks like, and what humans remain accountable for) before investing in vendor selection.

A practical AI Assistance Plan should capture:

  • Objectives and measurable success criteria (e.g., response-time reduction, quality targets)
  • In-scope and out-of-scope tasks and data constraints
  • Human-in-the-loop decision boundaries and approvals
  • How benefits and unintended impacts will be tracked
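
The plan contents above could be captured as a structured record. AIPGF treats the AI Assistance Plan as a document rather than a schema, so the field names and sample values below are illustrative only.

```python
from dataclasses import dataclass

# Sketch of an AI Assistance Plan as a structured record; fields mirror the
# bullet list above. Names and example values are illustrative.

@dataclass
class AIAssistancePlan:
    objectives: list          # measurable success criteria
    in_scope_tasks: list
    out_of_scope_tasks: list
    hitl_boundaries: list     # decisions that stay with humans
    benefit_tracking: list    # how benefits and unintended impacts are measured

plan = AIAssistancePlan(
    objectives=["Cut median response time 20% without quality regression"],
    in_scope_tasks=["Draft reply suggestions for agents"],
    out_of_scope_tasks=["Send replies without agent review"],
    hitl_boundaries=["Agent approves every customer-facing message"],
    benefit_tracking=["Weekly response-time and QA-score review"],
)
print(plan.objectives[0])
```

With objectives pinned down like this, a vendor bake-off becomes evidence-gathering against agreed criteria instead of a demo contest.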

Tool evaluation then becomes evidence-based against the agreed objectives, which helps rebuild trust and reduces resistance by showing purposeful, human-centric use of AI rather than experimentation for its own sake.

Clarifying outcome-based objectives and measurable success criteria first prevents tool chasing and provides a shared basis for adoption decisions.

Continue with full practice

Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026