This free full-length AIPGF Practitioner practice exam includes 40 original PM Mastery questions across the exam domains, with answers and explanations; you can continue with further sets in PM Mastery.
The questions are original PM Mastery practice questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some exam sponsors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
For concept review before or after this set, use the AIPGF Practitioner guide on PMExams.com.
Set a 40-minute timer and answer the 40 questions as applied governance scenarios. Before opening explanations, decide which control, role, gate, evidence gap, or stakeholder action the scenario is really testing.
Suggested timing checkpoints:
| Question range | Target elapsed time |
|---|---|
| 1-10 | 10 minutes |
| 11-20 | 20 minutes |
| 21-30 | 30 minutes |
| 31-40 | 40 minutes |

| Item | Detail |
|---|---|
| Issuer | APMG International |
| Exam route | AIPGF Practitioner |
| Official exam name | APMG AI Project Governance Framework (AIPGF) Practitioner |
| Full-length set on this page | 40 questions |
| Exam time | 120 minutes |
| Topic areas represented | 8 |

| Topic | Approximate official weight | Questions used |
|---|---|---|
| Foundations of AI Project Governance (AIPGF) | 12% | 5 |
| AI in Projects and Organizations (Context) | 12% | 5 |
| Framework Structure and Controls | 13% | 5 |
| Roles, Responsibilities, and Accountabilities | 13% | 5 |
| Principles for Responsible and Trustworthy AI | 13% | 5 |
| Values, Behaviours, and Culture | 12% | 5 |
| Lifecycle Governance (Initiate to Operate) | 13% | 5 |
| Assurance, Metrics, and Continuous Improvement | 12% | 5 |
Topic: Values, Behaviours, and Culture
A project team is rolling out a GenAI assistant to help case workers draft responses to customer complaints. The pilot has strong executive sponsorship, and the approved tool is available, but governance is breaking down:

- Reviewers skip the required HITL sign-off and avoid having their name attached to AI-assisted responses.
- Decision logs and review plans are left empty, so there is little evidence of oversight.
- Some case workers have shifted to unapproved "shadow AI" tools outside the traceable workflow.
What is the most likely underlying source of resistance driving these behaviors?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: The consistent pattern is avoidance of formal review and traceability because reviewers fear being held responsible for AI-influenced decisions. In AIPGF terms, unclear decision rights and accountabilities undermine HITL, reduce transparency, and drive shadow AI usage as a self-protection behavior.
A common source of resistance to AI adoption is perceived personal risk: people are asked to “own” AI-assisted outputs without clear, supported decision rights (who approves what, what “good” looks like, and what protection/escalation exists). In the scenario, reviewers explicitly avoid having their name attached, which directly explains skipped HITL sign-off, weak evidence (empty logs/plans), and shadow AI use to bypass traceable governance. This is a culture-and-governance resistance pattern, not primarily a technology availability issue.
Effective mitigation is to clarify accountabilities and decision boundaries for HITL, define acceptance criteria and escalation paths, and reinforce Transparency (documented decisions/evidence) so staff can adopt the tool without feeling exposed.
When people feel exposed and lack clear decision rights, they avoid HITL sign-off and shift to shadow AI to reduce perceived blame.
Topic: Roles, Responsibilities, and Accountabilities
You are delivering a GenAI-assisted customer-support content project. A call-center stakeholder group has raised concerns about “AI talking to customers.” Your sponsor asks whether anything needs to change before the next release.
Exhibit: AI Assistance Plan (v0.8) — delivery excerpt
Use: GenAI drafts customer emails + knowledge-base articles
HITL: Agent reviews & approves before sending/publishing
Benefit target: -30% authoring time; consistent tone
Red flag observed (last 2 sprints): 3 factual errors reached QA;
1 near-miss email would have mis-stated refund policy
Disclosure: External comms currently do not mention AI assistance
Stakeholders: Call-center leads sensitive to "AI answering customers"
What is the best next governance action supported by the exhibit to maintain stakeholder trust during delivery?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The exhibit shows real delivery benefits alongside a near-miss and a current disclosure gap for external communications. To maintain trust, governance should proactively and clearly communicate where AI is used, what value it is providing, what went wrong or almost went wrong, and what HITL controls prevent customer impact. Capturing the agreed narrative and escalation path preserves accountability and auditability.
During delivery (Activation), trust is maintained by communicating AI usage in a balanced, stakeholder-appropriate way: benefits, limitations, and the controls that keep humans accountable for outcomes. Here, the near-miss and “no disclosure” line are red flags because stakeholders are already sensitive to AI in customer contexts.
A good governance response is to brief stakeholders on where AI is used, the value it is delivering, the near-miss and what it revealed, and the HITL controls that prevent customer impact, then capture the agreed narrative and escalation path in the project record.
This is preferable to either hiding issues (erodes trust) or flooding stakeholders with overly technical artifacts that don’t address their core concerns.
This applies Transparency and Human-Centricity by proactively communicating both value and limitations (near-miss) plus the human oversight controls that manage risk and protect trust.
Topic: Foundations of AI Project Governance (AIPGF)
A retail bank based in the EU wants to use a GenAI-enabled service to pre-screen job applicants and produce a ranked shortlist for hiring managers. The project team has identified a vendor tool and drafted a high-level workflow, and leadership is pushing to start a pilot in 6 weeks.
As the AIPGF Practitioner, what is the best next step to align governance expectations where regulation (e.g., the EU AI Act) may apply?
Best answer: D
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: Because the use case is hiring-related in the EU, regulatory expectations may be triggered and must be addressed during Foundation before Activation begins. The next step is to classify the AI system’s risk and translate that into governance requirements (documentation, decision rights, and oversight such as HITL) and explicit go/no-go gates. This prevents a pilot from proceeding without prerequisite evidence and approvals.
In AIPGF, regulations influence governance expectations most directly at the point you define the governance approach for the use case. For an EU hiring shortlisting system, the appropriate next step in Foundation is to determine the likely regulatory risk classification (e.g., whether it is treated as “high-risk”) and then tailor governance accordingly before any pilot.
This typically means determining the likely regulatory risk classification, translating it into concrete governance requirements (documentation, decision rights, and oversight such as HITL), and agreeing explicit go/no-go gates before the pilot starts.
Starting a pilot or building monitoring first is premature if regulatory-driven prerequisites (evidence and oversight) have not been agreed and approved.
Risk classification drives whether enhanced evidence, HITL oversight, and formal approvals must be built into the governance plan before activation.
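The classification-to-controls logic above can be sketched in code. This is an illustrative sketch only: the tier names, domain list, and control identifiers are assumptions for the example, not official AIPGF or EU AI Act terminology.

```python
# Illustrative sketch: triage a use case's likely risk classification and map
# it to the governance prerequisites that must be evidenced before a pilot.
# Tier names and control names are hypothetical, not official AIPGF terms.

def classify_use_case(domain: str, region: str) -> str:
    """Rough triage: EU hiring-related use cases are treated as high risk."""
    high_risk_domains = {"hiring", "credit", "healthcare"}
    if region == "EU" and domain in high_risk_domains:
        return "high"
    return "standard"

def required_controls(tier: str) -> list[str]:
    """Controls that must exist before Activation for the given tier."""
    baseline = ["ai_assistance_plan", "decision_log"]
    if tier == "high":
        return baseline + [
            "hitl_oversight",
            "regulatory_documentation",
            "go_no_go_gate_approval",
        ]
    return baseline

tier = classify_use_case(domain="hiring", region="EU")
controls = required_controls(tier)
```

The point of encoding this as a lookup rather than ad-hoc judgment is that the gate can then verify the prerequisite evidence mechanically before any pilot proceeds.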
Topic: Values, Behaviours, and Culture
A retail bank is in the Activation stage for a GenAI assistant that drafts responses to customer complaints. The project is on a tight deadline, but Internal Audit has stated it will review the solution 6 months after go-live and expects clear evidence of who approved AI-related changes and decisions. Which action should the project team AVOID to uphold Accountability?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: Accountability in AIPGF requires clear ownership, decision rights, and documented approvals for AI-assisted deliverables and changes. With an upcoming audit, the team must be able to evidence who decided what, when, and on what basis. Relying on informal chats undermines auditability and blurs responsibility.
The core concept is Accountability: AI-related decisions and approvals must have a clearly identified owner and an evidence trail that supports review, assurance, and learning. In this scenario, the audit expectation makes documentation non-negotiable for items like prompt changes, model updates, risk acceptances, and go/no-go decisions.
Good accountability practices include naming an accountable owner for each AI-related decision, recording approvals (who, when, and on what basis) in a controlled decision log, and linking each approval to the specific version or change it covers — for items such as prompt changes, model updates, risk acceptances, and go/no-go decisions.
Informal chat threads are not a controlled record and typically lack consistent approver identity, version linkage, and retention, so they weaken governance under Accountability and Transparency.
This creates no reliable audit trail or clear decision ownership for AI changes and approvals.
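A controlled decision record of the kind the explanation describes can be sketched as a minimal append-only log. The field names here are illustrative assumptions; the substance — named approver, rationale, version linkage, timestamp — follows the audit expectations stated above.

```python
# Minimal sketch of a controlled AI decision log, assuming the fields implied
# by the explanation (who decided what, when, and on what basis).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision: str          # e.g. "approve prompt change"
    approver: str          # named accountable individual, not a chat thread
    rationale: str         # basis for the decision
    artifact_version: str  # version linkage for auditability
    timestamp: str         # when the decision was made (UTC)

decision_log: list[DecisionRecord] = []

def record_decision(decision: str, approver: str,
                    rationale: str, artifact_version: str) -> DecisionRecord:
    rec = DecisionRecord(decision, approver, rationale, artifact_version,
                         datetime.now(timezone.utc).isoformat())
    decision_log.append(rec)  # append-only; informal chats leave no such trail
    return rec

record_decision("approve prompt change", "J. Smith (Product Owner)",
                "QA error rate below agreed threshold", "prompt-v1.3")
```

Unlike a chat thread, each entry carries a consistent approver identity, version linkage, and timestamp, which is exactly what the audit will ask for.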
Topic: Lifecycle Governance (Initiate to Operate)
A hospital is initiating a GenAI assistant to draft discharge instructions. In the Foundation stage, the team is populating the AI risk register.
For the risk “unsafe or hallucinated advice reaches patients,” the team assigns the Clinical Safety Lead to define the response, ensure controls are implemented (e.g., HITL clinician review before release), track completion evidence, and escalate if the control is not effective.
In AIPGF terms, which role responsibility does this practice most directly map to?
Best answer: B
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Assigning a named individual to own a specific AI risk, define the response, ensure the HITL control is in place, and escalate when it is ineffective is the essence of risk ownership. In the Foundation stage, this makes the risk register actionable by linking each key AI risk to clear accountability and evidence of control operation.
The core governance concept here is assigning clear accountability for AI-related risks so that planned responses and controls are executed, evidenced, and escalated when needed. In AIPGF-aligned project risk management, the risk register is not just a list of risks; each significant AI risk should have a named owner with decision rights and an obligation to define the risk response, ensure controls (such as HITL clinician review) are implemented, track completion evidence, and escalate when a control proves ineffective.
That set of responsibilities maps to the Risk Owner role, while other roles may support, coordinate, or approve but are not the accountable owner of the specific risk.
This is the accountability for a specific risk’s response plan and ensuring its controls are implemented and effective.
Topic: Lifecycle Governance (Initiate to Operate)
A project team plans to use a generative AI tool to draft requirements and test cases during initiation. Before any AI use begins, the PM creates an AI Assistance Plan (intended uses, data boundaries, approvals, and HITL checkpoints) and starts a clarification-questions log and an assumptions log to make the work auditable.
What AIPGF life cycle stage intent does this practice most directly support?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Creating an AI Assistance Plan and starting supporting logs are Stage 1 (Foundation) planning activities. They set decision rights, boundaries, and HITL expectations before AI-assisted outputs are produced. The logs provide traceability for later assurance and audit needs.
Stage 1 (Foundation) is where governance is set up so AI use is intentional, controlled, and auditable before execution begins. Producing an AI Assistance Plan clarifies what AI will be used for, what data can/can’t be used, required approvals, and where humans must review or decide (HITL). Supporting logs (clarification questions and assumptions) capture what is unknown, what is being presumed, and what needs stakeholder confirmation, creating traceability that later gates can rely on. Activation focuses on doing the work under those agreed controls, and Evaluation focuses on post-build/operational monitoring and learning. The key takeaway is that these artifacts are created up front to enable controlled delivery later.
These are Stage 1 deliverables used to define how AI will be used and to capture assumptions/queries for auditability before delivery starts.
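The Foundation artifacts above can be represented as plain data so that a later gate can check their completeness mechanically. The field names are illustrative assumptions, not a prescribed AIPGF schema.

```python
# Sketch of Stage 1 (Foundation) artifacts expressed as plain data, so a gate
# can verify completeness. Field names are assumptions for illustration.

ai_assistance_plan = {
    "intended_uses": ["draft requirements", "draft test cases"],
    "data_boundaries": {"allowed": ["project documents"],
                        "prohibited": ["customer PII"]},
    "approvals": ["sponsor", "data_protection"],
    "hitl_checkpoints": ["human review before any AI draft is baselined"],
}

clarification_log: list[dict] = []  # open questions needing stakeholder confirmation
assumptions_log: list[dict] = []    # presumptions made in lieu of answers

def plan_is_gate_ready(plan: dict) -> bool:
    """A later gate can check the plan defines all four Foundation elements."""
    required = {"intended_uses", "data_boundaries", "approvals", "hitl_checkpoints"}
    return required <= plan.keys() and all(plan[k] for k in required)
```

Capturing the plan as structured data (rather than prose only) makes the Activation gate check repeatable and auditable.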
Topic: Values, Behaviours, and Culture
A retail bank has rolled out a GenAI assistant to draft email responses for customer complaints. After 6 weeks, average handling time is down 25%, but quality checks show a small rise in inaccurate product statements and customer feedback notes a “cold” tone. Internal Audit requires evidence of ongoing oversight for AI-assisted communications. The product owner wants improvements without losing most of the efficiency gains.
What should the project team do next to best apply Continuous Improvement in line with AIPGF core values?
Best answer: C
What this tests: Values, Behaviours, and Culture
Explanation: Continuous Improvement in AIPGF means reviewing real usage outcomes and updating practices, not just accepting initial performance. A time-boxed review that feeds changes into the AI Assistance Plan (prompts, guidance, training, thresholds) and adds ongoing monitored HITL sampling balances human-centric quality, transparency/auditability, and delivery efficiency. This preserves most cycle-time benefits while reducing harm from inaccuracies and tone issues.
The core concept is a closed-loop governance improvement cycle: use operational evidence to refine how AI is used and controlled. Here, efficiency gains are real, but the observed harms (inaccurate statements and tone) require proportionate, documented adjustments rather than either “business as usual” or a shutdown.
A balanced Continuous Improvement action is to run a time-boxed review of real outputs, feed the findings into the AI Assistance Plan (updated prompts, guidance, training, and quality thresholds), and add ongoing monitored HITL sampling of AI-drafted responses.
The key takeaway is to adapt practices based on measured outcomes while keeping governance evidence and human accountability in place.
It uses measured outcomes to refine prompts/training and strengthens proportionate controls with auditable monitoring while retaining benefits.
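Monitored HITL sampling of the kind described can be sketched as follows. The 10% sampling rate and 2% error threshold are assumptions chosen for the example, not AIPGF-mandated values.

```python
# Sketch of ongoing monitored HITL sampling: route a share of AI drafts to
# human QA review and compare the measured inaccuracy rate to a threshold.
# SAMPLE_RATE and ERROR_THRESHOLD are illustrative assumptions.
import random

SAMPLE_RATE = 0.10       # share of AI drafts routed to human QA review
ERROR_THRESHOLD = 0.02   # escalate if sampled inaccuracy rate exceeds this

def needs_human_review(rng: random.Random) -> bool:
    """Decide whether a given AI draft is pulled into the QA sample."""
    return rng.random() < SAMPLE_RATE

def review_outcome(sampled_errors: int, sampled_total: int) -> str:
    """Feed measured outcomes back into the AI Assistance Plan."""
    if sampled_total == 0:
        return "insufficient-evidence"
    rate = sampled_errors / sampled_total
    return "escalate-and-update-plan" if rate > ERROR_THRESHOLD else "continue-monitoring"
```

The closed loop is the point: the measured rate drives documented plan updates rather than ad-hoc reactions, which is what the audit evidence requirement calls for.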
Topic: Lifecycle Governance (Initiate to Operate)
A contact-center team is in Stage 2 (Activation) piloting a GenAI assistant that drafts refund emails. Today, supervisors report several emails included customers’ full payment card numbers. You are the project manager.
Exhibit: AI incident record + plan excerpt
INC-07 (Open) Severity: High Potential: PCI exposure
Scope: 6 cases confirmed; unknown total in last 24h
Immediate control available: Toggle "AI draft" OFF (no system outage)
Decision rights: Service Owner may suspend; AI Gov Lead approves rollback
Comms: Notify Risk/Compliance within 4 hours; brief Contact Center lead
Evidence: Preserve prompts/outputs + config snapshot before changes
Based on the exhibit, what is the best next governance action?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: The exhibit indicates a High-severity incident with potential PCI exposure and an immediate containment control (toggle off) that can be executed without an outage. It also specifies evidence preservation before changes and explicit decision rights and notification timelines. The best action is to contain first, then escalate and communicate according to the plan.
In Activation, incident governance prioritizes rapid containment and controlled decision-making over optimization. Here, the incident is rated High with potential PCI exposure and unknown broader scope, and the plan explicitly provides an immediate control (toggle “AI draft” off). The same excerpt requires preserving evidence (prompts/outputs and configuration snapshot) before making corrective changes, and it defines who can suspend service versus who approves rollback plus time-bound notifications.
A defensible next step is to toggle the "AI draft" feature off to contain the exposure (no outage required), preserve the prompts/outputs and configuration snapshot before any corrective changes, escalate through the stated decision rights (Service Owner to suspend, AI Gov Lead to approve rollback), and notify Risk/Compliance within the 4-hour window.
Corrective actions (prompt/model changes) come after containment and evidence capture, and approvals follow the stated decision rights.
This follows the defined containment, evidence preservation, escalation, and time-bound communication controls for a High-severity incident.
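The ordering constraint in the exhibit — contain, preserve evidence, escalate, notify, and only then make corrective changes — can be sketched as a simple step sequencer. Step identifiers and descriptions are illustrative, not tool-specific.

```python
# Sketch of the INC-07 response sequence: corrective prompt/model changes are
# blocked until containment, evidence capture, escalation, and notification
# are complete. Step names are illustrative assumptions.

INCIDENT_STEPS = ["contain", "preserve_evidence", "escalate", "notify", "corrective_change"]

STEP_DETAIL = {
    "contain": "toggle 'AI draft' OFF (Service Owner may suspend; no outage)",
    "preserve_evidence": "snapshot prompts/outputs and configuration before changes",
    "escalate": "AI Gov Lead approves any rollback",
    "notify": "Risk/Compliance within 4 hours; brief Contact Center lead",
    "corrective_change": "prompt/model fixes, only after the steps above",
}

def next_action(done: set[str]) -> str:
    """Return the next required step in the defined order."""
    for step in INCIDENT_STEPS:
        if step not in done:
            return step
    return "close_incident"
```

Encoding the order makes the governance rule testable: a corrective change requested before containment and evidence capture is simply not the next permitted action.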
Topic: Values, Behaviours, and Culture
A team is building a GenAI-enabled internal HR policy Q&A assistant (no automated decisions; answers are reviewed by HR before publishing). It is classified as Low risk and must launch in 6 weeks to replace an outdated intranet.
Exhibit: Gate checklist excerpt (last 2 cycles)
Gate: Activation go/no-go
Required approvers: HR, InfoSec, Legal, Data Protection, Comms,
IT Ops, Enterprise AI CoE, Model Risk, Internal Audit (9 total)
Evidence submitted: AI Assistance Plan v0.6, prompt log sample,
training data summary, HITL workflow diagram
Outcome: Deferred twice for “more detail” (no new risks logged)
Cycle time per gate review: 18 business days
Team response: Started using an unapproved chatbot “temporarily”
Based on the exhibit, what is the best next governance action?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: The exhibit shows clear symptoms of overly heavy governance: nine approvers, repeated deferrals without new risks, and an 18-day review cycle that is driving shadow AI use. A pragmatic AIPGF response is to right-size governance to the low-risk classification by simplifying decision rights and accelerating reviews. This maintains transparency and Human-in-the-Loop assurance while reducing friction.
This is a “too heavy” governance signal: the approval burden and review latency are disproportionate to the stated low-risk profile, and the side effect is non-compliant workarounds. In AIPGF’s pragmatic sensibility, governance should be risk-based and enable delivery, not incentivize shadow AI.
A proportionate adjustment is to cut the approver list to the roles with genuine decision rights for a low-risk internal tool, time-box the gate review, and keep a lightweight transparency and HITL evidence pack (AI Assistance Plan, prompt log sample, HITL workflow diagram) as the requirement.
The key takeaway is to tailor controls to context so transparency and accountability are preserved while unnecessary friction is removed.
The exhibit shows governance is too heavy for a low-risk use case (long cycle time, repeated deferrals, shadow use), so right-sizing controls restores speed without removing accountability.
Topic: AI in Projects and Organizations (Context)
A project team is piloting a GenAI assistant for customer service summaries. After UAT, the team finds the assistant occasionally invents policy details, creating potential customer harm. The Product Owner wants to proceed to limited go-live and “fix it in the next sprint.” Under the AIPGF decision-rights approach, which role should be the escalation point with authority to approve a risk acceptance or delay go-live?
Best answer: C
What this tests: AI in Projects and Organizations (Context)
Explanation: Because the issue involves potential customer harm, the decision is not a routine delivery tradeoff. AIPGF expects clear decision rights: the delivery team identifies and documents the risk, then escalates it to the designated AI governance authority to decide risk acceptance, controls, or a go/no-go. This supports transparency, accountability, and auditable governance decisions.
Decision authority in AIPGF is based on impact and risk, not just day-to-day delivery ownership. When an AI-assisted product shows a credible harm mode (e.g., hallucinated policy details), moving forward requires an explicit governance decision: accept the risk with compensating controls, delay release, or change the scope.
In practice, the Project Manager and Product Owner should document the hallucination risk and its potential customer impact, then escalate it to the designated AI governance authority with clear options: accept the risk with compensating controls, delay go-live, or reduce scope.
This ensures the accountable body makes the risk-acceptance or go/no-go call with appropriate assurance input and traceability.
High-impact risk acceptance and go/no-go decisions should be escalated to the designated AI governance authority, not decided within the delivery team.
Topic: Principles for Responsible and Trustworthy AI
A retail bank wants to use a GenAI assistant during the Activation stage to help underwriters write credit-decline rationales. The assistant would see customer application details and propose a rationale paragraph.
A recent internal audit finding requires that, for every decline, the bank can reconstruct who decided what, using which inputs and model version, and show the human approver’s rationale for accepting or editing AI output.
Which AIPGF governance action best balances ethical, efficient, and effective use given this constraint?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: The dominant constraint is strict auditability: the bank must be able to reproduce the decision path and demonstrate accountable human approval. An AI Assistance Plan plus an auditable decision log (including model/version, prompts/inputs, outputs, and human edits/approval) enables transparent, defensible use while still allowing efficiency gains from drafting support.
When auditability is non-negotiable, the best tradeoff is to keep GenAI use within a controlled, reconstructable process rather than optimizing only speed or model performance. In AIPGF terms, this operationalizes Transparency and Human-Centricity by making AI assistance visible, reviewable, and attributable to an accountable human decision-maker.
A practical control set is an AI Assistance Plan that defines permitted use, plus an auditable decision log capturing the model and version, prompts and inputs, AI outputs, and the human approver's edits, approval, and rationale for each decline.
This meets the audit finding while still enabling effective drafting support; other controls (privacy or bias testing) may still be needed, but they do not satisfy the primary auditability requirement on their own.
Strict auditability is the key discriminator, so traceability evidence (provenance + human accountability) must be built into the workflow before use.
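The reconstructable record the audit finding demands can be sketched as a single structured entry per decline. Field names and the sample model identifier are hypothetical; the captured content mirrors what the finding requires (who decided what, using which inputs and model version, with the approver's rationale).

```python
# Sketch of per-decline traceability evidence. Field names and the sample
# model version string are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeclineAuditRecord:
    case_id: str
    model_version: str       # which model/version produced the draft
    prompt_inputs: str       # reference to inputs used for the draft
    ai_draft: str            # rationale paragraph the assistant proposed
    final_text: str          # what the underwriter actually approved
    approver: str            # accountable human decision-maker
    approver_rationale: str  # why the draft was accepted or edited

audit_log = [
    DeclineAuditRecord(
        case_id="C-101",
        model_version="assistant-v2.4",
        prompt_inputs="application ref A-9",
        ai_draft="Decline: income below policy threshold.",
        final_text="Decline: verified income below policy threshold X.",
        approver="A. Patel (Underwriter)",
        approver_rationale="tightened wording to match verified figures",
    )
]

def reconstruct(records: list[DeclineAuditRecord], case_id: str) -> DeclineAuditRecord:
    """Answer the audit question for one decline: who decided what, and how."""
    return next(r for r in records if r.case_id == case_id)
```

Because every record carries the model version and the human edit trail, the decision path can be reproduced on demand without slowing the drafting workflow itself.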
Topic: Framework Structure and Controls
A product team is delivering a GenAI-assisted claims correspondence solution for a regulated insurer. To move fast, the project manager tells the team: “Use GenAI wherever it helps,” but does not define what AI is allowed to do (e.g., draft vs approve), where it may be used (internal notes vs customer letters), or who is accountable for AI-assisted outputs. At the next governance checkpoint, Internal Audit asks for the agreed scope of AI assistance.
What is the most likely near-term impact of this omission?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: Not setting governance objectives and scope for AI assistance immediately weakens transparency and auditability: the project cannot show what AI was permitted to do, where it was used, or who signed off AI-assisted deliverables. In a regulated context, that typically blocks assurance at a gate even before broader performance or outcome issues emerge.
AIPGF expects an explicit definition of AI assistance objectives and scope so decision rights, controls, and evidence can be applied consistently. In this scenario, the team used GenAI broadly without stating boundaries (what tasks, which artifacts, and what level of autonomy) and without assigning accountability for AI-assisted outputs. When Internal Audit asks for the agreed scope, the project has little defensible evidence, so the near-term consequence is a governance delay or pause while the AI Assistance Plan (or equivalent) is created and approvals are obtained.
Practical scope elements include what tasks AI may perform (e.g., draft vs approve), where it may be used (internal notes vs customer letters), the permitted level of autonomy, and who is accountable for each AI-assisted output.
The key takeaway is that unclear AI scope primarily hurts near-term trust and audit readiness, not long-term model behaviour or downstream enforcement actions.
Without explicit AI assistance scope and accountabilities, the team cannot evidence controlled, approved use, so assurance sign-off is likely to stall.
Topic: Foundations of AI Project Governance (AIPGF)
A retail bank is running a 10-week project to pilot a GenAI assistant that drafts responses for call-center agents (agents must approve before sending). The output is customer-facing and the bank’s enterprise AI policy requires: (1) registering AI use cases in a central inventory, (2) completing a risk tiering assessment, and (3) obtaining enterprise AI governance approval for high-risk deployments (the enterprise board meets monthly). The project already has standard project governance (steering committee, RAID log, change control).
Which approach best balances ethical, efficient, and effective AI use with required governance controls?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The best balance is to apply AI-specific controls at the project level (so the team can build safely and quickly) while interfacing with enterprise AI governance for enterprise-wide requirements like inventory registration, risk tiering, and mandated approvals. This distinguishes project AI governance (how this project uses AI responsibly) from enterprise governance (portfolio-wide policy and decision rights) and from general project governance (time/cost/scope control).
Project-level AI governance focuses on how the team will use AI safely and effectively in this specific delivery: defining intended use, setting HITL boundaries, capturing AI-related decisions, and producing test/monitoring evidence for go/no-go gates. Enterprise AI governance sets organization-wide controls such as mandatory inventory registration, risk tiering, standard patterns, and escalation/approval for higher-risk deployments. General project governance still runs delivery cadence (steering committee, RAID, change control) but does not replace AI-specific assurance.
A balanced approach here is to register the use case in the central inventory, complete the risk tiering assessment early (so the monthly enterprise board cycle does not become a late surprise), run the project's own AI controls (AI Assistance Plan, HITL boundaries, decision log, test and monitoring evidence) through existing project governance, and seek enterprise approval only where the tiering makes it mandatory.
This avoids both under-governing (trust and harm risks) and over-centralizing (unnecessary delays).
This keeps day-to-day AI controls and evidence within the project while meeting enterprise requirements for inventory, risk tiering, and only the mandated approval gate.
Topic: Foundations of AI Project Governance (AIPGF)
Your organization has adopted high-level AI standards (e.g., “follow recognized AI regulations and be transparent and fair”), but a project team is building a GenAI assistant to draft customer-service emails.
Constraints: the project must be ready for Activation with HITL in place and must produce auditable evidence of compliance within 6 weeks.
What is the BEST next governance action to close the gap between the standards and day-to-day execution?
Best answer: A
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The gap is that standards describe what “good” looks like, but projects need explicit, testable controls, decision rights, and evidence to execute and demonstrate compliance. In Foundation, the best next action is to translate the standards into a tailored set of controls and artifacts (e.g., AI Assistance Plan, RACI, logs, gate criteria) aligned to HITL and audit needs before Activation begins.
High-level standards and regulations are typically principles-based (what outcomes are required), while day-to-day delivery needs operational mechanisms (how the team will implement, verify, and evidence those outcomes). In this scenario, the project must be ready for Activation with HITL and must produce auditable evidence within 6 weeks, so Foundation work should convert “transparent/fair/responsible” into concrete governance controls with clear accountability.
A practical way to close the gap is to define and baseline a tailored control-and-evidence set, such as an AI Assistance Plan, a RACI for AI-related decisions, decision and prompt logs, and explicit gate criteria aligned to HITL and audit needs.
Deferring this translation or relying on self-attestation leaves the team unable to execute consistently or satisfy auditability.
It operationalizes principles-based standards into assigned controls, required evidence, and stage gates that can be executed and audited.
Topic: Foundations of AI Project Governance (AIPGF)
A project team is delivering a GenAI assistant to help account managers write client update emails. After a 2-week pilot, the sponsor wants the assistant to auto-send emails with no human review to “save time.”
Exhibit: AI Assistance Plan excerpt (v0.9)
Use case (current): Draft emails; human edits before sending
Planned evolution: Auto-send after pilot if “quality is OK”
Risk tier (Foundation): Medium (approved)
HITL control: Mandatory reviewer during pilot only
Transparency: No disclosure to recipients planned
Decision note: “Controls agreed at Foundation; no re-approval expected”
Which next governance action is best supported by the exhibit, given AIPGF expectations about decisions made once vs revisited iteratively as usage evolves?
Best answer: D
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: Shifting from human-edited drafting to auto-sending is a material change in how AI outputs are used, so key governance decisions cannot be treated as “set once.” Under AIPGF, the project should revisit and re-approve the risk tier and the associated control set (including HITL and transparency) when usage expands. Updating the AI Assistance Plan and decision log creates auditability and a defensible go/no-go.
The core idea is to baseline some governance decisions at Foundation, but to revisit other decisions iteratively when the AI usage pattern, audience, or decision impact changes. In the exhibit, the planned evolution to auto-send increases autonomy and stakeholder impact, making the “no re-approval expected” note inappropriate.
A practical AIPGF approach is to baseline the risk tier and control set at Foundation, define the changes that trigger re-assessment (expanded autonomy, new audiences, higher decision impact), and require refreshed approvals before any such change goes live.
Here, the project should treat auto-send as a new governance checkpoint in Activation/Evaluation, updating the AI Assistance Plan and recording refreshed approvals before changing operational use.
Moving from assisted drafting to autonomous sending changes usage and requires iterative revalidation of risk tier, HITL, and transparency decisions before go/no-go.
Topic: Roles, Responsibilities, and Accountabilities
A national insurer is piloting a GenAI assistant to draft outbound claim decision letters for adjusters. The model is provided by a vendor, but the insurer must meet strict auditability expectations (regulators may request who approved release, on what evidence, and when). You are building a RACI for the Activation governance gate: “Go/no-go to release to production with HITL.”
Which responsibility assignment is most appropriate for this gate decision and its auditable record?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: For a strictly auditable go/no-go gate, AIPGF expects clear decision rights and an evidence-backed approval record owned by a governance authority. Delivery roles should be responsible for assembling the gate pack and updating the decision log, while independent control functions are consulted to strengthen assurance. Internal Audit is typically informed so it can later provide independent assurance without owning the release decision.
The core need is strict auditability at an Activation gate, which requires unambiguous decision rights and a traceable approval trail (who approved, based on what evidence, when). In AIPGF terms, the go/no-go should be accountable to a defined governance authority (e.g., AI Governance Board chair or delegated gate owner), while the delivery function is responsible for preparing the evidence pack (AI Assistance Plan, risk log updates, HITL procedures, testing results) and recording the decision in the decision log.
Control functions (Legal/Compliance, Data Protection, InfoSec) are best positioned as consulted to provide challenge and assurance inputs prior to approval. Internal Audit should generally be informed rather than own the gate, preserving its independence as third-line assurance.
The closest trap is making the delivery lead or vendor accountable, which weakens independence and defensibility under audit.
It places decision accountability with a defined governance authority while making delivery compile evidence, consulting control functions, and informing (not performing) audit.
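The gate RACI described above can be sketched as data with a validity check. The role labels follow the explanation; the rule that a RACI must have exactly one Accountable role is a common RACI convention applied here, not an AIPGF-specific mandate.

```python
# Sketch of the Activation go/no-go gate RACI, with a check that exactly one
# role is Accountable. Role labels follow the explanation; the single-A rule
# is a general RACI convention, assumed here for illustration.

gate_raci = {
    "AI Governance Board (gate owner)": "A",          # accountable for go/no-go
    "Delivery team (evidence pack, decision log)": "R",
    "Legal/Compliance": "C",
    "Data Protection": "C",
    "InfoSec": "C",
    "Internal Audit": "I",  # informed, preserving third-line independence
}

def raci_is_valid(raci: dict[str, str]) -> bool:
    """Exactly one Accountable role, and at least one Responsible role."""
    values = list(raci.values())
    return values.count("A") == 1 and "R" in values
```

Validating the RACI as data makes the "closest trap" detectable: an assignment with the vendor or delivery lead as a second A, or with no A at all, fails the check.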
Topic: Principles for Responsible and Trustworthy AI
A retail bank pilots a GenAI assistant to draft credit-limit increase decisions and customer emails. To hit a launch deadline, the project lead removes the required human sign-off step and tells the team to “treat the AI output as the decision unless someone objects.” No role is named as accountable for the final decision, and no decision log is maintained.
Under the Human-Centricity principle, what is the most likely near-term impact of this governance omission?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Human-Centricity requires that humans remain accountable for AI-assisted deliverables and decisions. Removing human sign-off, failing to assign an accountable decision owner, and not keeping decision records break traceability. The most immediate consequence is weakened auditability and a reduced ability to respond to challenges, complaints, or assurance requests with clear decision ownership.
Human-Centricity in AIPGF keeps decision accountability with humans even when AI supports the work. In the scenario, the project effectively transfers decision-making to the GenAI assistant by default (“AI output as the decision”) and removes evidence of human ownership by not naming an accountable role or maintaining a decision log.
Near-term, this creates an assurance gap: when an outcome is challenged (by a customer, risk, compliance, or internal audit), the team cannot demonstrate who approved the decision, what inputs were used, or what rationale was applied. That undermines trust and increases exposure because the organization cannot defend or quickly remediate specific decisions. The closest distractors describe issues that are plausible but not the most direct or immediate consequence of missing human accountability.
Without a named accountable human and decision records, the bank cannot show who made/approved outcomes, undermining near-term assurance and challenge handling.
Topic: Framework Structure and Controls
During Activation, a bank project uses GenAI to draft customer-facing hardship policy letters. The team did not implement traceability controls: no prompt/response logging, no recording of external sources used, no versioning of AI-generated drafts, and no decision rationale for human edits/approvals.
Two weeks before release, Internal Audit asks for evidence showing how the final letter content was produced and approved to support the next governance gate.
What is the most likely near-term impact of this omission?
Best answer: C
What this tests: Framework Structure and Controls
Explanation: Traceability controls are what make AI-assisted work auditable: they show what the AI produced, what sources were used, which versions were approved, and why humans accepted changes. With an audit request two weeks before release, the immediate consequence is a failure to provide objective evidence for the governance gate. That typically triggers gate delay and rework to reconstruct the record.
In AIPGF, traceability controls (prompt/response logs, source recording, versioning, and decision rationale) provide auditability and support Transparency by making AI assistance inspectable and defensible. In this scenario, the omission is discovered just before a governance gate, and Internal Audit requests evidence of provenance and approvals for customer-facing content.
Near-term, the team cannot reliably answer basic provenance questions:

- What did the AI produce, and from which prompts?
- Which external sources informed the content?
- Which draft version was approved, and by whom?
- Why were human edits accepted or rejected?
That evidence gap increases risk exposure immediately by blocking assurance activities, forcing rework to recreate documentation, and likely delaying the go/no-go decision. Longer-term consequences may occur, but the immediate impact is loss of auditability at the gate.
Without prompt/source/version/rationale records, the team cannot demonstrate how content was generated and validated, so audit support for the gate fails in the near term.
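Teams implementing these traceability controls often start with a simple append-only log. The sketch below is illustrative only (the field names and the JSON Lines format are assumptions, not an AIPGF-prescribed schema); it captures the four evidence types discussed above: prompt/response, sources used, draft version, and the human decision rationale.

```python
import json
from datetime import datetime, timezone

def record_ai_trace(log_path, prompt, response, sources, draft_version,
                    decision, rationale, approver):
    """Append one traceability record for an AI-assisted draft.

    Captures prompt/response, external sources, the draft version,
    and the human decision with its rationale and named approver.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "sources": sources,            # external sources consulted
        "draft_version": draft_version,
        "decision": decision,          # e.g. "approved", "edited", "rejected"
        "rationale": rationale,        # why the human accepted/changed it
        "approver": approver,          # named accountable reviewer
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines: one record per line
    return entry
```

An append-only, one-record-per-line format keeps the log easy to sample during an audit without any special tooling.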
Topic: Assurance, Metrics, and Continuous Improvement
You have joined an AIPGF continuous-improvement workstream for a GenAI-enabled claims triage service. The team says, “We’ve made big governance improvements since last quarter,” and asks you to report progress to the assurance committee. You are given a high-level improvement roadmap but no prior assessment pack or evidence set.
What should you verify or request FIRST so progress can be tracked credibly using repeat assessments and evidence?
Best answer: B
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: Tracking improvement requires a repeatable measurement approach: a baseline, consistent assessment criteria, and verifiable evidence that can be sampled again later. Requesting the prior assessment pack and defining the evidence set enables an apples-to-apples comparison over time. This turns a narrative claim of “improvement” into auditable progress reporting.
The core concept is evidence-based continuous improvement: progress is demonstrated by comparing repeat assessments performed the same way, against the same criteria, using documented evidence. In this scenario, you have a roadmap but lack the measurement backbone needed to show movement from “then” to “now.”
Before reporting to an assurance committee, establish the minimum comparable assessment set:

- the prior assessment pack, providing the baseline scores and findings;
- the assessment criteria and scoring method, applied the same way each time;
- the agreed evidence sources that can be sampled again in later assessments.
Once these are in place, later assessments can show whether roadmap actions produced measurable, evidenced improvements rather than subjective statements or technical detail that doesn’t prove governance outcomes.
Without a defined baseline, consistent assessment method, and agreed evidence sources, you cannot validly compare results across repeat assessments.
Topic: Values, Behaviours, and Culture
A customer-support division is piloting a GenAI email-drafting assistant. To maintain momentum, the sponsor wants to expand from 40 to 400 users next month.
However, the change lead reports resistance: some agents are skipping the required “human review before send” step to save time. The project must demonstrate that the control is working in practice before scaling.
Which artifact/evidence best validates readiness to scale while preserving controls?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: Readiness to scale depends on evidence that the required HITL control is effective in day-to-day operation, not just planned. A decision log that captures actual reviewer sign-offs, overrides, and how exceptions were handled supports transparency, auditability, and a confident go/no-go decision. This preserves controls while enabling a faster, defensible rollout.
In AIPGF-aligned change management, maintaining momentum does not mean relaxing controls; it means scaling only when you can demonstrate that controls work in real workflows. When there is resistance leading to control-bypass, the most convincing readiness evidence is an operational record that shows adherence, deviations, and remediation.
A fit-for-purpose decision log should capture, at minimum:

- who reviewed and signed off each AI-drafted reply before send, and when;
- overrides or edits made to AI drafts, with the reason;
- exceptions (replies sent without review) and how each was handled or remediated.
Benefits measures, role maps, and risk lists are useful, but they do not validate that people are actually using the control correctly under time pressure. The key takeaway is to scale based on observed control performance, not intentions or outcomes alone.
It provides auditable evidence that the HITL review control is being followed (or corrected) under real use, supporting a controlled scale-up.
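To turn such a decision log into scale-up evidence, the team needs a simple adherence measure. This is a minimal sketch under assumed field names (`reviewed_before_send` and `exception_handled` are illustrative, not a mandated schema):

```python
def control_adherence(decision_log):
    """Summarise how often the human-review-before-send control was followed.

    decision_log: list of dicts with a boolean 'reviewed_before_send'
    and an optional 'exception_handled' flag for remediated bypasses.
    Returns (adherence_rate, unremediated_bypasses).
    """
    total = len(decision_log)
    if total == 0:
        return 0.0, 0
    reviewed = sum(1 for e in decision_log if e.get("reviewed_before_send"))
    unremediated = sum(
        1 for e in decision_log
        if not e.get("reviewed_before_send") and not e.get("exception_handled")
    )
    return reviewed / total, unremediated
```

A go/no-go discussion can then be anchored on observed numbers (adherence rate and open exceptions) rather than on intentions.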
Topic: Roles, Responsibilities, and Accountabilities
A team in the Activation stage rolls out a GenAI assistant to draft customer-service replies. Three weeks later, an internal review finds: no documented HITL sign-off for when agents can send AI-drafted replies; agents in two regions used personal AI tools “to keep up”; decision logs and evidence for AI-usage decisions are inconsistent; measured handling-time benefits are below target; and the workers council is pushing back, saying “this change was never properly explained.”
What is the most likely underlying cause?
Best answer: A
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The pattern points to a communications and engagement breakdown: unclear, unevenly communicated AI usage decisions and change impacts. Without bringing the right stakeholders (frontline leaders, assurance functions, workforce reps) into planned meetings and communications, HITL approvals and approved-tool guidance are missed, and adoption fragments into shadow AI use. That also undermines evidence consistency and benefits realization.
In AIPGF, AI usage decisions (what is allowed, who approves, what HITL checks apply, and how changes are communicated) need deliberate stakeholder engagement. Here, multiple regions acted differently, HITL sign-off is missing, and a workforce body says the change was not explained—these are strong clues that the engagement approach did not identify and actively manage key stakeholder groups and decision owners.
A fit-for-purpose approach would include:

- a stakeholder map covering frontline leaders, assurance functions, and workforce representatives;
- planned meetings and communications explaining approved tools, HITL sign-off requirements, and change impacts;
- named decision owners for AI-usage approvals, applied consistently across regions.
The core issue is not “more paperwork,” but missing alignment and communication pathways that make correct behavior the easiest behavior.
Key affected groups and decision-right holders were not engaged with clear, timely communications on approved AI use and HITL approvals, driving shadow usage and missed sign-offs.
Topic: Assurance, Metrics, and Continuous Improvement
A retail bank has completed an AIPG-CMM maturity assessment for a GenAI “agent-assist” tool used by call-centre staff to draft customer responses. The assessment shows strong documentation in Foundation/Activation, but weak continuous improvement practices in Evaluation.
AIPG-CMM highlights (excerpt)
- Monitoring of AI outputs: ad hoc, not role-owned
- Incident capture/triage: informal, no thresholds
- Benefits tracking: defined metrics, inconsistent review cadence
The sponsor asks you to propose the next improvement actions for the next quarter. Before you select specific actions, what should you ask/verify FIRST?
Best answer: A
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: A maturity assessment tells you where capability is weak, but not how much governance is warranted. Verifying the use case’s risk tier and the decision scope the GenAI output can influence lets you size the next-step improvements (e.g., monitoring ownership, incident thresholds, escalation paths) appropriately and defensibly.
Next-step improvement actions from an AIPG-CMM assessment should be tailored to the context, especially the risk tier and the decision authority/scope of the AI assistance. In the scenario, Evaluation practices are weak (ad hoc monitoring, informal incident handling), but the required improvement level depends on how consequential the AI-assisted outcomes are.
Ask first for the information that will shape the improvement plan’s “how much” and “how fast,” such as:

- the use case’s assessed risk tier;
- the decision scope the GenAI output can influence (drafts only, or customer-affecting outcomes);
- any regulatory or policy constraints on monitoring, incident handling, and escalation.
Once that is clear, you can define proportionate actions (named monitoring owner, thresholds, incident workflow, review cadence, and evidence) that match the assessed gaps. The key takeaway is that maturity gaps plus risk context drive the right improvement backlog.
Risk tier and decision scope determine the proportional Evaluation-stage improvements (monitoring, thresholds, escalation, and approvals) needed from the maturity gaps.
Topic: Roles, Responsibilities, and Accountabilities
A bank is starting an AI-assisted project that drafts credit-limit change recommendations for call-center agents using a third-party GenAI API. The project is in Foundation and must pass an Activation gate in 2 weeks.
Constraints:
- The use case is rated high risk and is customer-impacting.
- HITL approval is required for AI-drafted recommendations.
- An internal audit review is scheduled.
- Role agreements are currently informal and undocumented.
What is the BEST next governance action to support stage-appropriate accountability across the AIPGF life cycle?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: In Foundation, the project must establish clear accountability and decision rights before moving to Activation, especially for a high-risk, customer-impacting, HITL use case. Building a RACI across Foundation, Activation, and Evaluation ensures the right roles are responsible, accountable, consulted, and informed for key governance activities and gate approvals. This also produces audit-ready evidence within the stated timelines.
The core concept is creating a responsibility matrix (RACI) that assigns decision rights and accountability for AI governance activities across the AIPGF life cycle. Because the project is high risk, requires HITL, and faces an imminent Activation gate plus an audit review, informal role agreements are insufficient.
A practical next step is to define and obtain sign-off on a RACI that covers, at minimum:

- accountability for gate approvals and go/no-go decisions at Foundation, Activation, and Evaluation;
- HITL sign-off responsibility for customer-impacting credit-limit recommendations;
- ownership of governance evidence (AI Assistance Plan, decision log, risk log) for the audit review.
This aligns with Human-Centricity and Transparency by making accountabilities explicit and auditable; it also supports Adaptability by clarifying who can approve changes as the solution evolves.
A signed-off, stage-spanning RACI clarifies decision rights and audit-ready accountabilities before the Activation gate in a high-risk HITL use case.
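For teams that keep governance artifacts in version control, a stage-spanning RACI can be held as simple structured data with a consistency check. This sketch is illustrative only (the role and activity names are assumptions, not AIPGF-mandated); the check enforces the basic RACI rule that every activity has exactly one Accountable role.

```python
# Illustrative RACI: activity -> {role: "R" | "A" | "C" | "I"}.
# Role and activity names below are examples, not a prescribed set.
RACI = {
    "Activation gate approval": {
        "Gate owner": "A", "Project lead": "R",
        "Legal/Compliance": "C", "Internal Audit": "I",
    },
    "HITL sign-off procedure": {
        "Gate owner": "A", "Call-center lead": "R",
        "Risk": "C", "Internal Audit": "I",
    },
    "Evidence pack (AI Assistance Plan, logs)": {
        "Project lead": "A", "Delivery team": "R",
        "InfoSec": "C", "Sponsor": "I",
    },
}

def missing_single_accountable(raci):
    """Return activities that do NOT have exactly one Accountable ('A') role."""
    return [activity for activity, roles in raci.items()
            if list(roles.values()).count("A") != 1]
```

Running the check on each change keeps accountability gaps visible before a gate review, instead of being discovered by audit.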
Topic: AI in Projects and Organizations (Context)
An HR shared-services program is using a GenAI assistant to draft redundancy letters. The project is in Activation and plans to scale from a 20-case pilot to 2,000 employees next month. During pilot QA, 3/20 drafts included incorrect protected-attribute inferences and 2/20 cited the wrong policy clause; the team cannot explain why (no prompt/version logs), and Legal requires an auditable rationale for each decision.
What should the project lead do next?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: Because the pilot shows material harmful errors and the team cannot provide traceable evidence (no logs) to support decisions, scaling would violate governance expectations and undermine Transparency and Human-Centricity. The balanced action is to halt scale-up, redesign the AI-assisted approach with HITL controls, and pass an Evaluation go/no-go using documented assurance evidence before resuming.
This scenario signals unacceptable risk and lack of assurance: harmful content was produced, root cause cannot be established, and auditability requirements are unmet. Under AIPGF, the right response is to stop the planned scale-up and move into an Evaluation-style assurance package before expanding use.
A proportionate redesign typically includes:

- mandatory HITL review and sign-off of every draft before release;
- prompt/response and version logging so errors can be traced to root cause;
- a documented rationale for each decision, meeting Legal’s auditability requirement;
- an Evaluation go/no-go gate based on the assembled assurance evidence.
This preserves value while restoring trust and control, rather than optimizing speed at the expense of governance.
The current lack of assurance and auditability, plus observed harmful errors, requires a stop/redesign and an Evaluation gate with logged, reviewable evidence before scaling.
Topic: AI in Projects and Organizations (Context)
A project team piloted a GenAI assistant to draft internal HR guidance. In testing, reviewers found (1) fabricated policy citations (hallucinations), (2) inconsistent tone across regions that could disadvantage non-native writers (bias), (3) missed requirements when prompts lacked context (context gaps), and (4) different outputs after the vendor rolled a silent model update (model change). The sponsor wants to move from pilot to broader rollout next month and asks for a single artifact that best validates readiness and control effectiveness for these uncertainty/error sources. Which artifact is most appropriate?
Best answer: D
What this tests: AI in Projects and Organizations (Context)
Explanation: The strongest readiness evidence is the artifact that demonstrates concrete controls mapped to the known AI uncertainty/error modes. An AI Assistance Plan can show how outputs will be validated (HITL), how context will be captured, how bias will be checked, and how model updates will be governed and monitored before broad rollout.
This scenario contains multiple common AI uncertainty/error sources: hallucinations (fabricated citations), bias (systematic output differences affecting groups), context gaps (missing constraints in prompts), and model changes (output drift after an update). Governance mitigates these by defining repeatable controls and decision rights for AI use.
An AI Assistance Plan is the best validation artifact because it can provide implementable evidence that controls exist and are ready to operate, such as:

- HITL validation checkpoints that catch fabricated citations before release;
- prompt/context standards that close the context gaps seen in testing;
- bias checks on outputs across regions and writer groups;
- change controls and monitoring that detect and govern vendor model updates.
By contrast, artifacts that only measure benefits, assign roles, or list risks do not validate that the controls will actually prevent/detect issues at rollout.
It directly evidences the specific controls that mitigate hallucination, bias, context gaps, and model drift and how they will be executed and monitored at rollout.
Topic: Assurance, Metrics, and Continuous Improvement
A shared-services team supports 30 AI-assisted projects using the same GenAI writing tool. The vendor releases a new feature that can pull in external web content, and a new internal policy is issued on using third-party data sources.
The AI governance lead introduces a “governance trigger” process: whenever a tool adds material capabilities or rules change, each project must update its AI Assistance Plan and risk controls, and re-confirm go/no-go criteria before continued use.
Which AIPGF concept is this practice primarily demonstrating?
Best answer: A
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: The described trigger-based review ensures governance stays fit-for-purpose as AI capabilities, risks, and policy requirements change. That is the essence of the AIPGF Adaptability principle: monitoring for change and rapidly tailoring controls, evidence, and approval conditions to the new context. It is especially important when scaling across many projects using shared tools.
In AIPGF, Adaptability is about keeping governance proportionate and current as AI tools, features, risks, and regulatory or policy expectations evolve. In the scenario, the governance lead establishes a repeatable mechanism (“governance triggers”) that forces projects to refresh key artifacts (like the AI Assistance Plan), reassess risks and controls, and revalidate go/no-go criteria when a material change occurs (new web access capability; new policy). This is continuous improvement at scale: systematic detection of change plus a defined response that updates evidence and decision rights.
Key takeaway: Adaptability is not a one-time setup; it is an ongoing, trigger-driven recalibration of governance as conditions change.
It institutionalizes change-triggered updates to governance artifacts and controls as tools and rules evolve.
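A governance-trigger process like this can be partly automated across a portfolio. The sketch below is a hypothetical illustration (the field names and ISO-date convention are assumptions): it flags projects whose AI Assistance Plan was last confirmed before a material tool or policy change.

```python
def projects_needing_reconfirmation(projects, trigger_event_date):
    """Flag projects whose AI Assistance Plan predates a material change.

    projects: list of dicts with 'name' and 'plan_confirmed' (ISO date string).
    trigger_event_date: ISO date of the tool/policy change.
    Any project whose plan was last confirmed before the trigger must update
    its plan and controls and re-confirm go/no-go criteria before continued use.
    """
    # ISO 8601 dates (YYYY-MM-DD) sort correctly as strings.
    return [p["name"] for p in projects
            if p["plan_confirmed"] < trigger_event_date]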
Topic: Foundations of AI Project Governance (AIPGF)
A project team in a regulated retail bank is using GenAI to draft customer service emails. Delivery is slipping, and the team complains that “governance is just bureaucracy.” You review their current control approach.
Exhibit: Team artifact excerpt
GenAI Governance Checklist v0.3 (team-created)
- 12 separate sign-offs required before any prompt use
- No `AI Assistance Plan` captured; “too slow”
- No owner for final email content; “model output accepted”
- No `decision log` / traceability for changes
- KPI tracked: “approvals completed” only
What is the best next governance action supported by the exhibit?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The exhibit shows governance optimized for activity (sign-offs) rather than outcomes (accountability, control, traceability). The most effective response is to tailor governance so it is risk-based and lightweight, while still producing core assurance evidence. This keeps delivery moving and improves trust in the AI-assisted outputs.
Structured governance is not “more steps”; it is the minimum set of controls and evidence needed to enable safe, accountable delivery. Here, the checklist creates delay (12 sign-offs) yet fails key outcomes: no accountable human owner for customer communications, no AI Assistance Plan defining permitted use and constraints, and no decision log to support auditability and learning.
A better, scalable approach is to:

- consolidate the 12 sign-offs into a small number of risk-based approvals;
- assign an accountable human owner for all customer-facing email content;
- establish the core assurance evidence (AI Assistance Plan, decision log, risk log).

This replaces bureaucracy (tracking approvals) with evidence of controlled AI use, trust, and accountability.
It removes approval-heavy bureaucracy while restoring outcomes-focused controls: accountable human ownership, controlled AI use, and traceability.
Topic: AI in Projects and Organizations (Context)
A project team wants to use a GenAI tool to draft new contract clauses by referencing a library of past supplier contracts. Legal raises concerns about copyright/IP reuse and the need to retain AI prompts/outputs as project records. The PM is asked to produce an AI Assistance Plan that confirms content/licensing checks, approved data sources, and a records-retention approach before anyone uses GenAI on the work.
In the AIPGF, this practice most directly maps to the intent of which life cycle stage?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: Creating an AI Assistance Plan to confirm lawful data/content use and records retention is a front-loaded governance control. It is used to set constraints, decision rights, and evidence expectations before AI-assisted work starts, reducing privacy/IP/retention exposure. That aligns to the Foundation stage intent in the AIPGF life cycle.
The core concept is placing legal/compliance controls at the right governance point. In AIPGF, the Foundation stage is where the project defines the conditions for safe, compliant AI use before Activation work proceeds. For legal/compliance challenges like IP/copyright reuse and records retention, the governance response is to document and approve:

- the approved data sources, with licensing/IP checks on reused contract content;
- a records-retention approach for AI prompts and outputs as project records;
- the conditions that must be met before anyone uses GenAI on the work.
Activation is where the team executes against those controls (e.g., running reviews and maintaining logs), and Evaluation is where outcomes and operational monitoring are assessed. Transparency is a guiding principle but not the life cycle intent being tested here; the scenario is about establishing pre-use compliance guardrails and auditability.
It establishes upfront legal/compliance guardrails (IP and records retention) and approval conditions before AI use begins.
Topic: Principles for Responsible and Trustworthy AI
A retail bank wants to use a GenAI assistant to draft credit limit recommendations for small-business customers. The recommendations would be used in regulated lending decisions (high-impact), and an internal audit is scheduled in 3 months. The product owner is pushing for an 8-week MVP to meet a competitor launch.
As the AIPGF practitioner, what approach best applies the Adaptability principle by increasing governance rigor appropriately while still delivering value quickly?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: In a high-impact, regulated use case, Adaptability means scaling governance up, not bypassing it. A risk-tiered, phased release can meet an 8-week target while adding stricter gates, independent review, and audit-ready evidence (HITL decision controls, logs, and monitoring). This balances ethical use, effectiveness, and delivery speed under the stated audit and regulatory constraints.
The LO is about tailoring governance rigor to context: regulated, high-impact lending decisions require more evidence, tighter go/no-go gates, and independent assurance than low-risk experimentation. Applying Adaptability here means structuring delivery so value can be realized quickly without reducing necessary safeguards.
A balanced approach is to:

- risk-tier the use case and plan a phased release, starting with a limited scope and user group;
- build in HITL decision controls and audit-ready logging from the first release;
- add independent review at each go/no-go gate before widening scope.
This enables controlled activation (e.g., limited scope/user group) while meeting audit expectations, rather than shipping first and “fixing governance later” or stopping all progress.
In a regulated, high-impact context it adds evidence and independent assurance at gated points while allowing controlled value delivery via phased release with HITL and audit-ready logging.
Topic: Assurance, Metrics, and Continuous Improvement
A retail bank is preparing to scale a GenAI call-summarisation service from one pilot team to multiple business units. Internal audit asks for an AIPG-CMM maturity baseline.
In the draft self-assessment, four level descriptions are provided below. Which description is INCORRECT?
Best answer: D
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: In AIPG-CMM, Optimised represents a mature, metrics-led capability with continuous improvement and proactive tuning of controls. A description that lacks KPIs and relies on post-incident fixes reflects low maturity rather than Optimised. The other descriptions align to early-to-mid maturity characteristics (Ad Hoc, Initialised, Standardised).
The AIPG-CMM levels (Ad Hoc, Initialised, Standardised, Optimised) describe how consistently and effectively AI project governance is performed and improved, ranging from informal, reactive practice up to metrics-led continuous improvement.
A description of Optimised that is reactive and lacks KPIs contradicts the core idea of continuous improvement.
Optimised maturity is metrics-driven and continuously improves controls, not reactive without KPIs.
Topic: Principles for Responsible and Trustworthy AI
A claims operations team is using a GenAI assistant to draft insurance claim decision letters. The project is rated High risk because incorrect denials could cause significant customer harm and complaints. Foundation activities are complete, but the AI Assistance Plan currently says only “human review as needed.” The team is about to begin Activation and wants to auto-send letters to meet a new SLA.
What is the best next step in the AIPGF stage flow?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Before Activation can safely operationalise AI-assisted outputs, Human-Centricity requires clear human accountability and control for high-impact decisions. For a high-risk use case like claim denials, “human review as needed” is not a sufficient control. The next step is to specify where humans must review/approve, what escalation applies, and who owns the final decision, and capture this in the AI Assistance Plan.
This scenario is about placing HITL checkpoints proportionate to criticality and risk, a core application of the Human-Centricity principle. Because incorrect denials can materially harm customers, the governance control must be explicit before any move to auto-send.
Appropriate next-step actions in Activation are to:

- define the mandatory human review/approval points, at minimum for denial letters;
- set escalation paths for contested or borderline cases;
- name the role accountable for each final decision;
- capture these controls in the AI Assistance Plan and obtain agreement before any auto-send.
Monitoring and later evaluation are important, but they do not replace upfront definition of human approval points for high-impact outcomes.
High-risk, customer-impacting decisions require explicit, agreed HITL checkpoints and accountable approvals before operationalising automation.
Topic: Assurance, Metrics, and Continuous Improvement
A public-sector PMO has 12 active projects using GenAI for drafting policy briefs and customer communications. An internal audit has asked for a defensible baseline of “current governance capability” across the programme before approving further scale-up.
You have been asked to use the AIPG-CMM assessment instrument to capture the current state and feed an improvement roadmap. The programme is currently in the Foundation stage and has only partial evidence (some AI Assistance Plans exist, decision logs are inconsistent).
What is the best next step?
Best answer: C
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: Before you can prioritise improvements or justify scale-up, you need a consistent, evidence-based view of current capability across projects. In the Foundation stage, using the AIPG-CMM instrument to scope, collect evidence, and baseline maturity provides the defensible “as-is” position the audit is requesting. That baseline then becomes the input to a targeted improvement roadmap and later Activation/Evaluation controls.
The core concept is establishing an evidence-based baseline using an AIPG-CMM assessment instrument before making governance investment decisions. In the Foundation stage, the correct sequence is to define the assessment scope (projects/programmes, domains, sampling), collect objective evidence (e.g., AI Assistance Plans, decision logs, risk logs, approval records), and score/aggregate results to describe the current state consistently across the programme. This produces an auditable snapshot of capability and highlights systemic gaps (for example, inconsistent decision logging) that can be translated into a prioritised improvement roadmap and stage-gate criteria for Activation.
Key takeaway: you cannot credibly roadmap, scale, or monitor what you have not first assessed and baselined.
AIPG-CMM should be used now to gather evidence and score current capability consistently before planning improvements or approving scale-up.
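Scoring and aggregating results across the 12 projects can be kept deliberately simple. This is an illustrative sketch (the domain names and a 1-5 level scale are assumptions based on common maturity-model practice, not the official AIPG-CMM instrument):

```python
from statistics import mean

def baseline_by_domain(assessments):
    """Aggregate per-project maturity scores into a programme baseline.

    assessments: list of dicts mapping governance domain -> level (1-5),
    one dict per assessed project.
    Returns {domain: mean level}, a consistent 'as-is' view across projects.
    """
    domains = {d for a in assessments for d in a}
    return {d: round(mean(a[d] for a in assessments if d in a), 2)
            for d in sorted(domains)}
```

The per-domain means give the audit a single defensible baseline and immediately surface systemic gaps (for example, a low programme-wide score for decision logging).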
Topic: Framework Structure and Controls
A service desk wants to use GenAI to draft replies to customers using past ticket text (may contain PII). The team has agreed AIPGF Principles (Human-Centricity, Transparency, Adaptability), defined the aim (reduce response time), and listed proposed activities (prompt library, data minimisation, human review). However, these are not yet consolidated into a governed deliverable, and decision rights for approving AI-assisted responses are unclear.
What is the best next step in the AIPGF stage flow?
Best answer: A
What this tests: Framework Structure and Controls
Explanation: Before any Activation use with customer data, AIPGF expects a governed Foundation deliverable that connects principles and aims to concrete behaviours, activities, and controls with clear accountabilities. An approved AI Assistance Plan operationalises Human-in-the-Loop decision points, transparency expectations, and evidence capture (e.g., logs). That creates a defensible go/no-go basis to proceed to piloting and later monitoring/evaluation.
The core concept is that AIPGF elements relate in a traceable chain: Principles and Core Values inform Core Behaviours, which drive the Activities you perform and the Deliverables you produce for governance and assurance. In this scenario, the team has principles, an aim, and a list of activities, but lacks the key Foundation deliverable that makes them actionable and auditable.
The best next step is to complete and obtain approval for the AI Assistance Plan, ensuring it:

- maps the agreed principles and aim to concrete behaviours and activities;
- defines the HITL decision points and who may approve AI-assisted responses;
- sets transparency and evidence-capture expectations (e.g., logs, data-minimisation checks) with clear accountabilities.
Only then should the project move into Activation activities like piloting and, later, Evaluation activities like benefits reviews and continuous improvement.
This completes the Foundation deliverable that maps principles/aims into behaviours, activities, and accountable controls before Activation use of customer data.
Topic: AI in Projects and Organizations (Context)
A project team is using a general-purpose GenAI tool to draft responses for a customer support portal. Drafts are created by pasting recent ticket text (often contains PII) and the portal will later use the same model to suggest replies from user-entered text. Internal audit flags risks of data leakage and prompt injection, but the sponsor insists on keeping delivery speed. Under AIPGF, what is the best next action to balance value, speed, and required security governance?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: The team’s current approach exposes sensitive data and creates an attack path through untrusted user input. The best balance is to keep AI-enabled productivity but move to an approved, controlled environment and implement concrete safeguards (data minimization, access control, logging, and prompt-injection defenses) with clear accountabilities, aligning to Transparency and Human-Centricity.
This scenario contains two primary AI-introduced threats: (1) data leakage from pasting ticket text that includes PII into a general-purpose tool, and (2) prompt injection/model misuse because the portal will feed untrusted user input into AI-assisted response generation. In AIPGF terms (typically during Activation), the governance response should be risk-proportionate and auditable while maintaining delivery momentum.
A balanced control set includes:

- moving AI use into an approved, access-controlled environment;
- data minimisation/PII redaction before ticket text enters prompts;
- prompt/response logging to support assurance;
- prompt-injection defenses that treat user-entered text as untrusted input;
- named owners for each control.
This approach is more effective than policy reminders or vendor default assurances because it directly reduces the attack surface and supports assurance.
It preserves delivery pace while adding proportionate controls to prevent data leakage and reduce prompt-injection/model-misuse risk with auditable accountability.
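Of the controls above, data minimisation is the most directly mechanisable. The sketch below is a deliberately naive illustration of redacting obvious PII before ticket text enters a prompt; the regex patterns are assumptions, and a real deployment would use a vetted PII-detection service rather than ad-hoc patterns.

```python
import re

# Illustrative patterns only; production systems should use a vetted
# PII-detection service, not hand-rolled regexes.
_PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"), "[PHONE]"),
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "[CARD]"),
]

def minimise_ticket_text(text):
    """Redact obvious PII from ticket text before it is used in a prompt."""
    for pattern, placeholder in _PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Placeholders such as `[EMAIL]` keep the draft usable for the agent while keeping the raw identifier out of the model provider's logs.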
Topic: Principles for Responsible and Trustworthy AI
A project team used a GenAI assistant to draft a credit policy change recommendation that will be approved by the bank’s Credit Committee. The sponsor wants to submit it today to meet a board pack deadline.
Exhibit: Decision log excerpt
Decision: D-07 Recommend lowering SME approval threshold
AI use: GenAI drafted analysis + pros/cons
Inputs: “last quarter pack” + staff notes (not attached)
Traceability: prompt/output not stored; citations missing
Control: single reviewer; no second-line review
Impact: affects automated approvals for ~18,000 customers
What is the best next governance action, based on the exhibit?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: The exhibit shows a material, customer-impacting decision with missing traceability (no stored prompts/outputs and no citations) and weak review controls. Under AIPGF, transparency requirements for key decisions override schedule convenience, because the approving body must be able to see and audit the basis for the recommendation before relying on it. The appropriate action is to stop and assemble auditable evidence and accountability for the AI-assisted work.
This is a key governance moment where the Transparency principle overrides convenience. The decision affects automated approvals for many customers, yet the record shows the AI-assisted analysis is not auditable (no prompt/output retention, missing citations, and unverified inputs) and assurance is weak (only a single reviewer and no second-line review). Before the Credit Committee can make a defensible decision, the team should create an evidence trail and decision accountability for AI-assisted content.
Practical minimum evidence to restore transparency includes:
- Retained prompts and outputs for the AI-drafted analysis
- The actual inputs ("last quarter pack" and staff notes) attached and verified
- Citations linking key claims in the recommendation to those sources
- An independent second-line review and a named accountable owner for the decision
Disclosing AI use is helpful but does not replace auditability when the decision is material.
This is a high-impact decision, so transparency and auditability must be created before the committee relies on AI-assisted analysis.
Topic: Framework Structure and Controls
A retail bank is preparing to move a GenAI assistant for drafting customer email replies from pilot into broader use next month. An internal audit reviewer asks how the bank will ensure staff use AI responsibly and consistently.
Exhibit: AI Assistance Plan (excerpt)
Stage: Activation readiness (draft)
Use: GenAI drafts replies; agent edits and sends (HITL)
Risk tier: High (customer comms; regulated)
Guidelines: "In progress" (not yet published)
Training: Optional 45-min lunch-and-learn; attendance not tracked
Competency check: None defined
Access: Any agent with SSO can enable the tool
Based on the exhibit, what is the best next governance action to define training and enablement controls before go-live?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: The exhibit shows a high-risk, regulated use case with unpublished guidelines, optional untracked training, and no competency check while access is open to all agents. AIPGF training and enablement controls should be proportionate to risk and embedded as go/no-go criteria at the Activation gate. The most defensible action is to require role-based training and verify competency before granting access and going live.
Training and enablement controls in AIPGF are how you make “responsible use” operational: clear guidance plus evidence that users understand and can apply it. Here, the risk tier is high and access is broad, yet guidelines are not published and training is optional with no tracking or competency check—this is not Activation-ready.
A proportionate control set is to:
- Publish the usage guidelines before go-live
- Require role-based training with tracked completion
- Verify competency (for example, a short assessment) before an agent can enable the tool
- Make these items explicit go/no-go criteria at the Activation gate
HITL alone is not a substitute for user competence and consistent practices; the key is enforceable enablement with auditable evidence.
High-risk, regulated use with broad access requires enforced guidance and demonstrated competence before Activation, not optional awareness.
Topic: Framework Structure and Controls
A customer-services program wants to add a GenAI “agent assist” feature to draft responses for human agents. Go-live is in 6 weeks, and internal audit has flagged “low traceability of AI-influenced decisions” as a concern. The service is high visibility, but the AI will not send messages directly to customers (HITL required). The project already uses a standard business case and two stage gates: Design approval and Go-live approval.
Which approach best balances fast delivery with required AIPGF governance controls and familiar project governance concepts?
Best answer: C
What this tests: Framework Structure and Controls
Explanation: The best tradeoff is to embed AIPGF elements into the project’s existing governance so the AI change is traceable, reviewable, and auditable without stalling delivery. Updating the business case clarifies value and acceptable risk, while AI-specific controls (AI Assistance Plan, decision log) and stage-gate evidence provide proportionate assurance for a high-visibility service under HITL.
This scenario is about mapping AIPGF elements to familiar governance mechanisms: business case (value and risk appetite), stage gates (go/no-go decision points), controls (required ways of working and documentation), and assurance (independent confidence that controls work). With audit pressure and high visibility, the project needs AI-specific traceability, but the HITL constraint allows a proportionate approach rather than a full stop.
A balanced approach is to:
- Update the business case to state the AI feature's value, risks, and acceptable risk appetite
- Add an AI Assistance Plan and a decision log as required AI-specific controls
- Extend the two existing stage gates (Design approval and Go-live approval) with AI evidence requirements
- Confirm HITL review remains enforced for all customer-facing messages
This preserves speed while meeting Transparency and Human-Centricity expectations through auditable decision rights and evidence.
It integrates AIPGF artifacts into the existing business case, stage gates, controls, and assurance, providing proportionate evidence for auditability without creating a parallel process.
Topic: Lifecycle Governance (Initiate to Operate)
A benefits team is delivering a GenAI-assisted weekly performance report. After two sprints:
- Team members are using several different, unapproved GenAI tools
- Prompts and AI-generated content are not logged or traceable
- No one can say who is accountable for signing off AI-generated figures in the report
In the Foundation stage, what is the most likely underlying cause of this governance breakdown?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: The symptoms point to uncontrolled and inconsistent AI usage: shadow tools, missing traceability, and unclear accountability for AI-generated content. In AIPGF Foundation, the primary control is to establish explicit objectives and scope for AI assistance (including permitted tools, intended use, and what requires HITL review). Without that baseline, sign-off criteria and evidence expectations are impossible to enforce.
This is a Foundation-stage failure to define the objectives and scope of AI assistance (what AI will do, what it will not do, and under what conditions). When the scope is unclear, people optimize for speed by using whatever tools are available (shadow AI), and reviewers cannot determine what they are approving or what evidence should exist.

A clear AI Assistance Plan (or equivalent) typically sets: intended tasks and boundaries, approved tools and data sources, required human checks and sign-offs (HITL), and minimum evidence/traceability (e.g., prompt/decision logging). Once those are defined, benefits tracking and tool selection can be assessed against agreed use-cases and acceptance criteria.
Without clear boundaries for what AI will and will not do, teams improvise, evidence is inconsistent, and HITL sign-off cannot be meaningfully applied.
Topic: Lifecycle Governance (Initiate to Operate)
In the APMG AI Project Governance Framework (AIPGF), which Principle most directly supports updating organizational guidance (for example, policies, templates, and approved tools) based on evaluation outcomes and lessons learned?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Updating policies, templates, and tool approvals after evaluating outcomes is an example of governance learning and continuous improvement. In AIPGF, that behavior is explicitly driven by the Adaptability Principle, which expects governance to evolve as evidence emerges across the lifecycle.
The concept being tested is the AIPGF Principle that governs how an organization responds to what it learns. During the Evaluation stage, teams validate whether the AI solution delivered the intended benefits and whether risks and controls performed as expected. Under the Adaptability Principle, those insights must be fed back into organizational guidance so future projects start with better defaults (for example, updated templates, revised approval criteria, or changes to the list of approved tools). This is how AIPGF makes governance iterative rather than static, strengthening assurance and delivery over time.
Key takeaway: evaluation outcomes should trigger updates to governance assets under Adaptability, not just a project-level closure report.
Adaptability emphasizes continuously refining governance, controls, and guidance based on what is learned during evaluation and operation.
Topic: Roles, Responsibilities, and Accountabilities
A bank used a third-party agency to help deliver a GenAI-assisted “complaints response” capability. During the Evaluation gate, Internal Audit asks for traceability evidence (who approved AI-assisted outputs, what prompts/models were used, and how vendor content was reviewed). The project team finds:
- No records of who approved AI-assisted outputs
- The vendor describes its drafting process as “proprietary,” with no prompt or model history available
- Vendor staff used their own AI tools outside the bank’s approved list and HITL process
What is the most likely underlying cause of this governance breakdown?
Best answer: D
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The pattern points to a procurement and onboarding control gap: the vendor was not bound to provide traceability artifacts or follow the bank’s HITL approval process. When third parties contribute to AI-assisted deliverables, auditability must be designed into the contract and ways of working (e.g., required logs, evidence, and review/approval rights). Without those requirements, missing sign-offs and “proprietary” black-box creation are predictable outcomes.
This is primarily a third-party governance and procurement failure, not a documentation follow-up problem. When a vendor contributes to AI-assisted deliverables, auditability and traceability depend on upfront, enforceable requirements: what evidence must be produced (prompt/model/version history, data sources, review records), who has approval authority (HITL), and what tools/environments are permitted.
In practice, this is addressed by building governance controls into the SOW/contract and vendor onboarding, for example:
- Required evidence artifacts (prompt/model/version history, data sources, review records)
- Named approval authority and mandatory HITL sign-off for AI-assisted deliverables
- An agreed list of permitted tools and environments, with audit and inspection rights
A model-quality issue or stakeholder attitude may explain dissatisfaction, but it does not explain the absence of an auditable trail and uncontrolled vendor AI usage.
Without explicit contractual evidence requirements and decision rights, the vendor can deliver AI-assisted work without a verifiable audit trail or approved HITL controls.
Practitioner readiness depends on choosing the best governance action under constraints, not just recognizing AIPGF terminology.
This page gives one complete public diagnostic. PM Mastery adds the larger AIPGF Practitioner bank, scenario drills, mixed timed mocks, progress tracking, and explanations that connect each answer to evidence, controls, decision rights, and stakeholder communication.
Retake only after you can explain each missed scenario in your own words. If the same governance failure repeats, drill that module before doing another 40-question run.
Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Practitioner guide on PMExams.com for concept review, then return here for PM Mastery practice.