Try 10 focused AIPGF Practitioner questions on Values, Behaviours, and Culture, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Practitioner |
| Topic area | Values, Behaviours, and Culture |
| Blueprint weight | 12% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Values, Behaviours, and Culture for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 12% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Values, Behaviours, and Culture
A retail bank’s customer-complaints team wants to use a third-party GenAI tool to “see what themes emerge” from 18 months of call transcripts. The product manager frames it as curiosity-driven exploration, but the transcripts include names, account references, and sensitive vulnerability notes. The vendor tool is not yet onboarded to the bank’s approved tool list, and no audit trail for prompts/outputs has been agreed. The team says they need insights in 2 weeks.
Which governance action best resolves the value conflict responsibly?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: This is a curiosity-versus-risk conflict where sensitive data and weak auditability make “quick exploration” unsafe. The responsible resolution is to keep the learning objective but change the method: document intended AI use, assign accountability, and use de-identified/synthetic data in an approved setting before considering any third-party tool handling real transcripts. This aligns with Human-Centricity and Transparency while enabling progress.
Value conflicts often arise when teams want rapid insight (curiosity, speed, innovation) but the context raises risk (sensitive data, trust, auditability). Here, the decisive factor is that exploratory use would expose highly sensitive customer information to a non-onboarded vendor tool without agreed logging, which undermines accountability and transparency.
A proportionate AIPGF response is to preserve the goal (finding themes) while reducing risk and increasing governability:
- Document the intended AI use and assign clear accountability for it.
- Run the theme exploration on de-identified or synthetic transcripts in an approved setting.
- Agree prompt/output logging before any vendor tool handles real transcripts.
The key takeaway is to redirect experimentation into a controlled, auditable pathway rather than “learning first, govern later.”
It preserves learning value while reducing harm by preventing sensitive data exposure and establishing accountable, auditable conditions for AI use.
Topic: Values, Behaviours, and Culture
A claims operations team is piloting a GenAI assistant that drafts claim summaries inside their case management workflow. Adoption is low and users report they do not trust the outputs.
Exhibit: Activation gate checklist (excerpt)
Activation Gate — GenAI Claims Assistant (pilot)
- Data sources approved & documented? NO (ad hoc CSV exports)
- Data quality checks in place? NO (3% record mismatch in pilot)
- Integration security review passed? NO (unapproved browser plug-in)
- User feedback: "We retype values to be safe"; adoption 18%
Decision: Gate owner requests action before scale-up
Based on the exhibit, what is the best next governance action?
Best answer: A
What this tests: Values, Behaviours, and Culture
Explanation: The exhibit shows core blockers to responsible AI use: undocumented data sourcing, known data mismatches, and an unapproved integration that has not passed security review. In AIPGF terms, the Activation gate should not be passed until these issues are resolved with clear accountabilities and objective evidence. Addressing these root causes is also the most direct way to reduce resistance driven by low trust and rework.
This is an Activation-stage governance decision: the gate checklist is evidence that foundational controls for responsible use are missing. When data provenance/quality and tool integration assurance are not in place, users compensate (retyping “to be safe”), which drives resistance and low adoption.
The appropriate action is to hold scale-up and remediate the blockers by:
- Documenting and approving the data sources, replacing the ad hoc CSV exports.
- Fixing the data quality issues behind the 3% record mismatch and putting checks in place.
- Completing a security review of the integration, or retiring the unapproved browser plug-in.
- Assigning clear owners and objective evidence requirements, then returning to the gate for re-approval.
Training and incentives can help adoption, but they cannot substitute for fixing the underlying data and integration risks that undermine trust.
The gate explicitly fails on data provenance/quality and unapproved integration, so responsible use requires remediation and re-approval before scaling.
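The gate logic in the exhibit can be sketched as a simple all-controls-must-pass check. This is an illustration only, not an official AIPGF artifact; the control names mirror the exhibit, and the function shape is this sketch's own.

```python
# Minimal sketch of the activation-gate decision in the exhibit:
# scale-up is held unless every control is met. Illustrative only,
# not an AIPGF template.

def activation_gate(controls: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, blockers): the gate passes only if all controls are met."""
    blockers = [name for name, met in controls.items() if not met]
    return (not blockers, blockers)

pilot = {
    "data_sources_approved_and_documented": False,  # ad hoc CSV exports
    "data_quality_checks_in_place": False,          # 3% record mismatch
    "integration_security_review_passed": False,    # unapproved plug-in
}

passed, blockers = activation_gate(pilot)
# passed is False; all three failed controls come back as blockers,
# giving the gate owner a concrete remediation list before re-approval.
```

The point of the sketch is that the gate is binary on evidence: training or incentives do not change the result until each named control flips to met.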
Topic: Values, Behaviours, and Culture
A product team is rolling out a GenAI assistant to help customer-service agents draft replies. You are asked to review four draft “team behaviours” statements intended to reinforce the AIPGF Core Values (Accountability, Sensibility, Collaboration, Curiosity, Continuous Improvement).
Which statement is INCORRECT and should be removed because it undermines the purpose of the AIPGF Core Values?
Best answer: C
What this tests: Values, Behaviours, and Culture
Explanation: The AIPGF Core Values exist to shape day-to-day behaviours that make AI delivery trustworthy, auditable, and continuously learnable. A statement that discourages challenging AI outputs and avoids documenting rationale directly weakens accountability, sensible judgment, and learning. This is an AI governance anti-pattern because it increases the chance of unchecked errors and reduces transparency of decisions.
The AIPGF Core Values are meant to create a culture where AI-assisted work is delivered responsibly and can be trusted and improved over time. In this scenario, the behaviours should reinforce: clear ownership and decision rights (Accountability), context-appropriate judgment and proportional controls (Sensibility), shared responsibility across disciplines (Collaboration), permission to question outputs and learn (Curiosity), and ongoing refinement based on evidence (Continuous Improvement).
A “don’t challenge outputs” and “don’t document rationale” stance blocks inquiry, removes decision traceability, and makes it harder to detect, explain, and correct AI-related issues. The key takeaway is that speed goals must be balanced with responsible, explainable, and improvable ways of working.
It suppresses questioning and traceability, conflicting with Curiosity, Accountability, and sensible governance for trustworthy outcomes.
Topic: Values, Behaviours, and Culture
A service team is in the Activation stage building a GenAI-assisted workflow to draft responses to customer complaints. To “move fast,” the project lead decides not to set any team norms for responsible AI use (no confidentiality rules, no prompt hygiene guidance, and no peer review of AI-assisted text), and staff paste full complaint emails (names, addresses, account details) into a public GenAI tool.
What is the most likely near-term impact of this governance omission?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: Skipping team norms for confidentiality, prompt hygiene, and review creates immediate operational risk: sensitive customer data may be disclosed to an uncontrolled external system. It also reduces transparency and auditability because the team cannot demonstrate what was shared, how outputs were checked, or who approved AI-assisted deliverables.
Establishing team norms for responsible AI use is a practical control that operationalizes Human-Centricity and Transparency during delivery. In this scenario, the omission directly affects two near-term outcomes: (1) confidentiality risk, because staff are sharing identifiable customer information with a public tool, and (2) auditability, because there is no consistent practice to review, record, and evidence AI-assisted work.
Effective norms typically include:
- Confidentiality rules that keep identifiable customer data out of public tools.
- Prompt hygiene guidance on what may and may not be pasted into a prompt.
- Peer review of AI-assisted text before it is sent to customers.
- A simple record of where and how AI was used, so the work can be evidenced.
The key takeaway is that missing norms usually shows up quickly as uncontrolled data handling and weak evidence for assurance, not as delayed strategic effects.
Without confidentiality and prompt hygiene norms, sensitive data can be exposed immediately and the team cannot evidence controlled, reviewed AI use.
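One prompt-hygiene norm from the list above — stripping obvious identifiers before text leaves the team's boundary — can be sketched as a redaction pass. This is a minimal illustration with made-up patterns; real redaction needs vetted tooling and human review, not ad hoc regexes.

```python
import re

# Illustrative prompt-hygiene check: redact obvious identifiers before any
# text is sent to an external GenAI tool. The patterns are examples only
# and are far from complete.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT_REF": re.compile(r"\b\d{8,16}\b"),  # crude account-number shape
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Customer jane.doe@example.com, account 12345678, reports a delay"))
# prints: Customer [EMAIL], account [ACCOUNT_REF], reports a delay
```

A norm like this also supports auditability: the redacted prompt is what gets logged, so the team can evidence that identifiable data never reached the external tool.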
Topic: Values, Behaviours, and Culture
A bank is rolling out a GenAI drafting assistant for customer support agents. Adoption is slowing because agents fear being monitored and some supervisors want to “move fast” to hit a quarterly target. Internal audit requires an approval trail for changes to prompts/guardrails and evidence that Human-in-the-Loop checks remain in place for customer-facing messages.
Which change-management action should be AVOIDED?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: Maintaining momentum in AI-enabled change means reducing friction while keeping governance controls intact. Allowing unapproved tools creates unmanaged, unauditable AI use and undermines the required approval trail and Human-in-the-Loop safeguards. The better approach is to accelerate through controlled pathways and focus on adoption enablers.
The core change-management goal in AIPGF is to sustain adoption while preserving governance controls (decision rights, audit trail, and HITL where required). In this scenario, resistance is real, but the bank’s constraints are explicit: prompt/guardrail changes must be approved and traceable, and customer-facing outputs must retain HITL. Encouraging or tolerating “temporary” unapproved GenAI use breaks Transparency (no reliable decision logging/evidence), weakens Human-Centricity (users are pushed into unmanaged risk), and erodes control effectiveness.
Practical momentum-preserving actions that still respect controls include:
- Streamlining the approval pathway for prompt and guardrail changes so it is fast but still traceable.
- Being transparent with agents about what is and is not monitored, to address the fear directly.
- Keeping Human-in-the-Loop review for customer-facing messages while removing other friction.
- Focusing effort on adoption enablers rather than bypassing controls.
Key takeaway: speed should be achieved by streamlining controlled governance, not bypassing it.
It creates shadow AI use that bypasses required controls, auditability, and HITL assurances.
Topic: Values, Behaviours, and Culture
Your service desk project is in Activation. So far, staff have used a GenAI assistant only to draft customer emails, with humans sending all responses.
The vendor releases an upgrade: an “auto-resolve” agent that can execute actions in the ticketing system (reset passwords and issue refunds up to $50) based on chat context. The product owner wants to enable it this sprint to reduce backlog.
Which governance action is the best AIPGF-aligned adaptation to this tool change?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: The decisive change is the move from AI-assisted drafting to agentic execution in an operational system. AIPGF’s pragmatic sensibility and Adaptability principle require re-tailoring governance when tooling capabilities change, not just communicating or training. Updating the AI Assistance Plan and decision rights, and introducing appropriate HITL control, provides defensible go/no-go evidence for Activation.
When an AI tool shifts from supporting people to taking actions in systems, the project’s governance needs change because accountability and risk exposure change. Under AIPGF, you adapt governance pragmatically by treating this as a material change: re-check the tailored controls, clarify who is accountable for outcomes, and set boundaries for autonomy.
A proportionate adaptation in Activation typically includes:
- Updating the AI Assistance Plan and decision rights to cover agentic execution.
- Setting explicit autonomy boundaries, such as which actions the agent may execute and the $50 refund limit.
- Adding appropriate Human-in-the-Loop controls for executed actions.
- Capturing updated go/no-go evidence before the feature is enabled.
This is stronger than “communicate and train” because it changes the control environment to match the new operational behavior.
The upgrade materially changes decision execution risk, so governance must be re-tailored with explicit decision rights, HITL controls, and updated evidence before activation.
Topic: Values, Behaviours, and Culture
In the APMG AI Project Governance Framework (AIPGF) Core Values, what best describes Curiosity?
Best answer: A
What this tests: Values, Behaviours, and Culture
Explanation: In AIPGF, Curiosity is about learning and innovation, but done responsibly. It emphasizes controlled experimentation—such as pilots or sandboxes—within defined guardrails so teams can explore value while managing risk and accountability.
Curiosity, as an AIPGF Core Value, encourages teams to seek AI-enabled improvements and test new ideas, while keeping experimentation governed and proportionate to the risk. It is not “try anything fast”; it is deliberate learning with boundaries.
In practice this typically means:
- Testing new AI ideas through pilots or sandboxes within defined guardrails.
- Treating experiments as safe-to-fail, with learning captured either way.
- Keeping the scale of experimentation proportionate to the risk.
- Maintaining accountability for what is explored and why.
The key takeaway is that Curiosity combines innovation with disciplined, safe-to-fail experimentation rather than unmanaged exploration or avoidance.
Curiosity is responsible exploration of AI innovations via safe-to-fail, governed experimentation that generates learning without unmanaged risk.
Topic: Values, Behaviours, and Culture
You are the project manager in the Activation stage for a GenAI-assisted customer-support knowledge base refresh. The sponsor wants a board-ready summary of “top 10 customer pain points and recommended fixes” in 48 hours, using ticket text that may include personal data and sarcasm.
You plan to use GenAI to cluster themes and draft the summary. Which approach best demonstrates sensibility behaviours while balancing speed, ethics, and required governance controls?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: The best balance is to use GenAI for acceleration while explicitly sense-checking outputs before they shape decisions. That means HITL review by domain owners, validation against authoritative data, and documenting AI use and limitations for transparency and auditability. This applies Human-Centricity and Transparency without creating unnecessary delay.
Sensibility behaviours in AI-assisted delivery mean you treat AI output as a draft hypothesis, not an answer, and you actively reduce overconfidence risk. In this scenario, the deliverable will influence prioritisation and may be based on messy, sensitive ticket text, so governance must ensure accuracy, bias awareness, and traceability while still meeting the 48-hour constraint.
A practical, stage-appropriate approach is:
- Use GenAI to cluster themes and draft the summary, treating the output as a hypothesis.
- Have domain owners review the findings (Human-in-the-Loop) before they shape decisions.
- Validate key claims against authoritative data rather than the model’s clustering alone.
- Document where AI was used and its limitations, and handle any personal data in the tickets appropriately.
Key takeaway: optimize for trustworthy, explainable output—not just speed or maximal control.
It uses GenAI efficiently but adds HITL validation, provenance checks, and transparent documentation before decisions are influenced.
Topic: Values, Behaviours, and Culture
A product team asks you to “sign off the AIPGF pack” for a GenAI feature, and they want to copy last quarter’s templates to save time. The request is vague: they have not clarified how outputs will be used in operations, who will rely on them, or what “good” looks like.
To embrace the spirit of AIPGF (rather than rigidly following templates), what should you ask or obtain FIRST?
Best answer: C
What this tests: Values, Behaviours, and Culture
Explanation: Embracing the spirit of AIPGF means starting with context so governance is proportionate and value-protecting, not “checkbox” compliance. Clarifying intended use, who makes/owns decisions, and the project’s risk tier determines which artifacts, HITL controls, and assurance evidence are actually needed. Only then do templates become helpful accelerators rather than a substitute for thinking.
The spirit of AIPGF is pragmatic sensibility: apply principles and core values to the real situation, then tailor governance accordingly. When a request is underspecified, the first move is to clarify how the AI will be used, what decisions it influences, and who is accountable for those decisions (and therefore what risk tier and controls are appropriate). That context drives what “good” means (acceptance criteria), what transparency is required (disclosures and logging), and what human oversight is needed (HITL), rather than blindly populating a template.
A useful first clarification set is:
- How the outputs will be used in operations, and what decisions they influence.
- Who will rely on the outputs, and who is accountable for those decisions.
- What “good” looks like, expressed as acceptance criteria.
- The project’s risk tier, which determines the controls, HITL, and evidence actually needed.
Templates support consistency, but they should follow the context, not replace it.
These context facts let you tailor proportionate controls and evidence, which is the framework’s intent rather than template completion.
Topic: Values, Behaviours, and Culture
A product team is under pressure to “use GenAI” to improve customer support response times. Three different groups are pushing different vendor tools, and adoption is stalling because stakeholders remember a recent automation initiative that delivered little value.
As the AIPGF practitioner, what is the best next governance action to avoid tool chasing and regain momentum?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: The problem is not a lack of tools; it is a lack of clear, shared objectives for AI assistance. An AI Assistance Plan anchors the effort on measurable outcomes, defines where AI is and is not used, and sets boundaries for human accountability. That clarity reduces resistance and stops wasted effort on competing tools.
When adoption stalls and teams argue over tools, the strongest control is to re-anchor the work on agreed outcomes and guardrails. In AIPGF terms, set clear objectives for AI assistance (what work it will help with, what “good” looks like, and what humans remain accountable for) before investing in vendor selection.
A practical AI Assistance Plan should capture:
- The measurable outcomes the AI use must serve.
- Where AI is used and where it is explicitly not used.
- What “good” looks like for AI-assisted work.
- What humans remain accountable for, and the guardrails that protect that accountability.
Tool evaluation then becomes evidence-based against the agreed objectives, which helps rebuild trust and reduces resistance by showing purposeful, human-centric use of AI rather than experimentation for its own sake.
Clarifying outcome-based objectives and measurable success criteria first prevents tool chasing and provides a shared basis for adoption decisions.
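The plan elements above can be sketched as a simple structure. The field names here are this sketch's own shorthand for the items listed, not an official AIPGF template, and the example values are invented.

```python
from dataclasses import dataclass

# Illustrative shape for an AI Assistance Plan, with fields drawn from the
# explanation above: measurable outcomes, where AI is and is not used, and
# what humans remain accountable for. Not an official AIPGF artifact.
@dataclass
class AIAssistancePlan:
    objective: str                      # outcome the AI use must serve
    ai_assisted_tasks: list[str]        # where AI is used
    excluded_tasks: list[str]           # where AI is explicitly not used
    accountable_owner: str              # human accountable for outcomes
    success_criteria: dict[str, float]  # measurable targets

plan = AIAssistancePlan(
    objective="Reduce customer support first-response time",
    ai_assisted_tasks=["draft reply suggestions", "summarise ticket history"],
    excluded_tasks=["sending replies without human review", "refund decisions"],
    accountable_owner="Support Operations Lead",
    success_criteria={"median_first_response_minutes": 30.0},
)
```

With objectives and exclusions written down like this, competing vendor tools can be scored against the same agreed criteria instead of driving the decision themselves.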
Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.