Practice APMG AIPGF Practitioner with free sample questions, timed mock exams, and detailed explanations for governance, delivery, and change decisions.
AIPGF Practitioner (AIPGF-P) is APMG International’s scenario-based certification for applying the AI Project Governance Framework in realistic project situations. If you are looking for AIPGF Practitioner sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on the web and continue on iOS or Android with the same account.
Start a practice session for APMG AI Project Governance Framework (AIPGF) Practitioner below, or open the full app in a new tab for the best experience, navigating with swipes/gestures or the mouse wheel just like on your phone or tablet.
Open Full App in a New Tab
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.
Use PM Mastery on iPhone or Android too: get it from the App Store or Google Play. The same account and subscription work across web and mobile.
Compared with Foundation, Practitioner rewards slower, more deliberate scenario reading. Strong performance usually comes from identifying the missing control, role responsibility, lifecycle checkpoint, or assurance action before you look at the answer choices.
| Topic | Weight | Estimated questions |
|---|---|---|
| Module 1: Foundations of AI Project Governance (AIPGF) | 12% | 5 |
| Module 2: AI in Projects and Organizations (Context) | 12% | 5 |
| Module 3: Framework Structure and Controls | 13% | 5 |
| Module 4: Roles, Responsibilities, and Accountabilities | 13% | 5 |
| Module 5: Principles for Responsible and Trustworthy AI | 13% | 5 |
| Module 6: Values, Behaviours, and Culture | 12% | 5 |
| Module 7: Lifecycle Governance (Initiate to Operate) | 13% | 5 |
| Module 8: Assurance, Metrics, and Continuous Improvement | 12% | 5 |
If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.
These sample questions cover multiple blueprint areas for AIPGF Practitioner. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
During the Foundation stage, your organization is procuring a third-party GenAI tool to draft customer-support replies. The tool will process customer PII, and the business wants fast deployment with human-in-the-loop (HITL) review by agents. Internal audit requires contractual evidence of controls, service expectations, incident handling, and vendor transparency.
Which procurement requirement should the team AVOID?
Best answer: B
Explanation: Procurement for AI tools must build in governance: enforceable controls, measurable service levels, and clear incident obligations, plus transparency to support assurance and accountability. Accepting a black-box “as-is” service with no audit, documentation, or change notice undermines transparency and makes it difficult to manage risk and demonstrate appropriate oversight.
Topic: Module 3: Framework Structure and Controls
You are the assurance lead for a GenAI “email reply assistant” project preparing for an Activation-to-Evaluation go/no-go gate. The sponsor asks for a plain-language update to send to the steering committee.
Exhibit: Assurance gate note (excerpt)
Gate: Go/No-Go (Evaluation entry)
Overall: Conditional NO-GO
Finding F-03 (High): No documented HITL for customer-impacting replies
Evidence gap: AI Assistance Plan missing approval + decision log not updated
Required actions: Define HITL roles/thresholds; update AI Assistance Plan; record decisions
Owner: Product Owner + Service Ops | Due: before pilot launch
Which stakeholder communication best reflects the assurance finding and required actions in clear project language?
Best answer: D
Explanation: The exhibit states a conditional no-go, a high-severity control gap (no HITL for customer-impacting replies), and specific evidence gaps. The most effective stakeholder message converts that into a clear governance decision (pause/hold) and the concrete actions, owners, and timing needed to requalify for the gate. This is assurance reporting in project language: decision, impact, actions, and next checkpoint.
Topic: Module 3: Framework Structure and Controls
You are the AIPGF assurance lead for a GenAI-assisted claims triage tool approaching go-live (risk tier: High). In the Evaluation stage, your assurance review finds: incomplete AI Assistance Plan, inconsistent decision log for model/prompt changes, HITL override thresholds not evidenced, and limited transparency wording in user-facing guidance. The product owner wants a simple “pass/fail” message today to keep the launch date, but Internal Audit requires clear evidence and actions.
What is the BEST way to communicate the assurance findings and required actions to stakeholders?
Best answer: A
Explanation: Use project language that connects assurance evidence to delivery impact, decision rights, and concrete next steps. A short assurance brief with a clear status, required actions, named owners, and explicit go/no-go criteria supports Transparency while enabling efficient decision-making. It also preserves auditability by referencing the underlying evidence without overwhelming stakeholders.
Topic: Module 3: Framework Structure and Controls
A marketing operations project is using GenAI to draft outbound customer emails. At the end of the Foundation stage, the sponsor wants to move into Activation and has scheduled an internal assurance review in two weeks. To save time, the team skips the AIPGF AI Assistance Plan and keeps no formal record of where AI was used, who approved outputs, or what prompts/guardrails were applied.
What is the most likely near-term impact of this omission?
Best answer: D
Explanation: Skipping the AI Assistance Plan removes the project’s primary, stage-appropriate evidence of how AI use is controlled and approved. In AIPGF terms, it undermines Transparency and weakens assurance at the Foundation-to-Activation gate. The immediate consequence is that reviewers cannot verify decision rights, HITL checks, and control operation, so progress is likely paused until documentation exists.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A claims-processing project has an approved AI Assistance Plan for using GenAI to draft internal claim summaries (HITL: mandatory human review). During Activation, the product owner proposes a change: allow GenAI to draft customer denial letters and send them with only spot-check review to reduce cycle time.
As the project manager, what is the best next step to plan stakeholder engagement and communications for this AI usage change?
Best answer: C
Explanation: Moving from internal drafting to external customer communications is a significant AI usage decision that changes risk, accountability, and required transparency. The next step is to bring the right stakeholders together under agreed decision rights to review the proposed change, confirm the HITL approach, and plan who needs to be informed and how. This ensures the AI Assistance Plan and supporting logs are updated before any rollout.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A retail bank is in the Activation stage of a GenAI “agent assist” tool that drafts responses for call-center staff. Constraints: the project is rated high-risk, a regulator may request evidence of controls, HITL is mandatory (agents must approve every message), and go-live is in 6 weeks. The delivery team proposes that the product owner (who is also writing prompts and acceptance tests) should sign off the readiness gate to “save time.”
What is the BEST next governance action?
Best answer: C
Explanation: High-risk AI in Activation needs independent oversight at the governance gate, especially when auditability is required. The delivery team can move fast, but assurance/oversight approval should sit with a separate role that validates the evidence (e.g., HITL effectiveness, decision logging, and test results). This maintains clear decision rights without compromising time-to-deliver.
Topic: Module 2: AI in Projects and Organizations (Context)
A project team wants to speed up stakeholder communications. They propose an AI feature where a user pastes meeting notes and the tool produces a well-structured draft email in the organization’s preferred tone, which the project manager then edits and approves before sending.
Which AI discipline best describes this capability at a non-technical level?
Best answer: C
Explanation: The described feature is primarily about producing new, human-readable text (a draft email) from meeting notes and style guidance. That is the hallmark of generative AI: content creation with a human reviewer (HITL) for final approval. The other disciplines may support parts of the workflow but are not the best overall label for the capability.
Topic: Module 3: Framework Structure and Controls
A public-sector PMO is using an approved internal GenAI assistant to draft meeting minutes and summarize non-sensitive, already-anonymized project risks. The summaries are advisory only (no automated decisions), and stakeholders accept AI assistance.
However, an internal audit has required the team to demonstrate strict traceability: what AI was used, where, by whom, what was accepted/edited, and who approved AI-assisted outputs at each gate across the life cycle.
Which AIPGF action best addresses this need?
Best answer: A
Explanation: The dominant constraint is strict auditability, which requires demonstrable records of AI use and human accountability across the full project life cycle. An AI Assistance Plan plus decision logs and evidence retention provide a consistent, reviewable trail of prompts/outputs, edits, approvals, and decision rights at governance gates. This supports ethical and effective AI assistance while keeping delivery efficient through standardized controls.
Topic: Module 3: Framework Structure and Controls
In AIPGF, which set represents the minimum governance artifacts typically expected to demonstrate control over an AI-assisted deliverable (who used AI, what decisions were made, and what risks were managed)?
Best answer: D
Explanation: AIPGF expects a minimum evidence set that makes AI use auditable and decisions defensible. The AI Assistance Plan records how AI is used and controlled, the decision log captures accountable judgments and approvals, and the risk log shows identified AI risks and mitigations. Together they demonstrate baseline governance control without requiring extensive technical documentation.
Topic: Module 7: Lifecycle Governance (Initiate to Operate)
In the APMG AI Project Governance Framework (AIPGF), which core term best describes deliberately scaling project scope, evidence requirements, and review depth to match the AI use case’s risk, complexity, and organizational maturity—so delivery can move fast where appropriate without weakening governance?
Best answer: A
Explanation: This describes tailoring governance so controls and assurance evidence are proportionate to the specific AI project context. In AIPGF, that is captured by the Adaptability principle, which supports adjusting review depth and artifacts based on risk and maturity rather than applying one fixed governance package.
Topic: Module 3: Framework Structure and Controls
A product team is starting the Activation stage and must complete an AI Assistance Plan for using GenAI to accelerate customer-facing FAQ updates. The sponsor says, “Let AI draft it,” and wants release in 4 weeks, but has not specified whether AI text can be published as-is or only as a draft.
You open a Clarification Questions Log. Compliance notes they cannot assign a risk tier until the intended use and reliance on outputs are defined.
What is the FIRST clarification question you should record to prevent AI-generated assumptions in the key artifacts?
Best answer: B
Explanation: The Clarification Questions Log should start by fixing the intended use and reliance on AI outputs, including who is accountable for approving and releasing content. That single clarification anchors the scope of AI assistance, the required HITL controls, and the governance evidence you can produce in the AI Assistance Plan. Without it, teams tend to let the AI “fill gaps” and create unapproved assumptions in requirements and controls.
Topic: Module 1: Foundations of AI Project Governance (AIPGF)
A service desk project wants to use a “workflow bot” that reads incoming emails, queries the CRM, creates/updates tickets, and can issue customer refunds under $50 without waiting for a person. The team has only used an LLM copilot for drafting text before. You are in Foundation and must pass the governance gate before any pilot touches live customer data.
What is the best next step?
Best answer: B
Explanation: The described “workflow bot” is an AI agent because it can execute actions (ticket updates and refunds), not just generate text like an LLM or copilot. In Foundation, the right next step is to scope and govern that assistance in an AI Assistance Plan, including decision rights, access/permissions, HITL checkpoints, audit logging, and go/no-go approval before using live customer data.
Topic: Module 6: Values, Behaviours, and Culture
A public-sector PMO is running a programme of three GenAI copilots. After the first project’s Activation gate, internal assurance noted that evidence of HITL decision points and AI-assisted content approvals was inconsistent across teams, causing a two-day delay while records were reconstructed.
To protect delivery dates, the PMO decides to skip the programme-level retrospective/review and reuse the same governance pack unchanged for the next two projects.
What is the most likely near-term impact of this decision?
Best answer: C
Explanation: Skipping the retrospective removes the main mechanism for learning from the first gate delay and refining governance practices across the programme. In the near term, the same inconsistent evidence and unclear accountabilities are likely to show up again at the next assurance checkpoint, reducing auditability and slowing approvals.
Topic: Module 6: Values, Behaviours, and Culture
A customer service function has rolled out a GenAI assistant to draft responses for disputed billing queries. Within six weeks, an internal review finds: no recorded HITL approval for using AI-generated text in outbound emails, several agents are using a personal/public chatbot “to go faster,” evidence for fairness and tone testing is limited to informal notes, expected handling-time benefits are not materializing, and the union is pushing back that “customers will be misled.”
In AIPGF terms, what is the most likely underlying cause of this governance breakdown?
Best answer: C
Explanation: The pattern points to a culture and governance design gap: ethical concerns were not surfaced and resolved through structured governance discussions with documented decisions. When decision rights, acceptable-use boundaries, and required evidence are unclear, people optimize for speed, creating shadow AI and missing HITL approvals. That, in turn, triggers pushback and weak benefit realization.
Topic: Module 7: Lifecycle Governance (Initiate to Operate)
A bank has completed the Evaluation stage for a GenAI “agent assist” tool used by call-center staff. The post-implementation review found recurring issues, including gaps in HITL sampling and in decision logging for AI-assisted customer advice.
The project team updated their local QA checklist and added a mandatory decisionLog field for AI-assisted customer advice, then closed the project. However, they did not update organizational guidance (standard templates, the AI Assistance Plan template, or the list of approved monitoring tools).
A new, similar GenAI project starts next week and will use the organization’s standard artifacts. What is the most likely near-term impact of this omission?
Best answer: B
Explanation: Evaluation outcomes should be converted into updated organizational policies, templates, and approved tools so future projects inherit effective controls and evidence expectations. If that learning is not embedded, the next project will follow outdated standards and is likely to recreate known gaps (for example, HITL sampling and advice decision logging). This immediately increases risk exposure and weakens audit readiness.
Topic: Module 7: Lifecycle Governance (Initiate to Operate)
A bank has run a 12-week pilot of a GenAI-assisted customer service chatbot. As the project moves into closure, the project manager updates the Benefits Tracker with actual call-deflection and handling-time data, compares results to the baseline and forecast, and collects structured feedback from contact-center staff and customer representatives on service quality and trust. The results are summarized as evidence for a sustain/scale decision and for lessons learned.
This practice most directly maps to which AIPGF element?
Best answer: B
Explanation: The activity described is focused on verifying realized outcomes and benefits at closure using objective measures and stakeholder feedback. That is the core intent of the Evaluation stage: confirm whether expected value was achieved, understand variances and impacts, and use the evidence to decide whether to sustain, scale, or retire while capturing learning.
Topic: Module 1: Foundations of AI Project Governance (AIPGF)
A project uses a standard life cycle: initiate → plan → deliver → close. The PMO asks you to align AIPGF stages to these phases.
Exhibit: Gate checklist excerpt (must be passed before build starts)
Gate: G1 "AI governance readiness"
Evidence: AI use-case & success metrics agreed
Evidence: Risk tier assigned; decision rights/RACI confirmed
Evidence: AI Assistance Plan draft (intended use, limits, HITL)
Evidence: Transparency plan for user/customer communications
Go/No-Go: "Proceed into build and deployment"
Which mapping of AIPGF life cycle stages to the project life cycle best fits this gate placement?
Best answer: A
Explanation: The exhibit describes a pre-build governance readiness gate: agreeing success metrics, assigning risk tier and decision rights, drafting an AI Assistance Plan including HITL, and defining transparency communications. These are Foundation activities that should be completed during initiate/plan, before the project enters delivery work. Activation then aligns with deliver, and Evaluation aligns with close/post-implementation review.
Topic: Module 1: Foundations of AI Project Governance (AIPGF)
Two teams are starting GenAI initiatives using the same internal LLM platform and the same agile delivery method. Project A uses the LLM only to generate draft text for human review, while Project B’s agent will automatically execute a high-impact action: issuing customer refunds.
Other conditions are the same (no new vendor tool, no unusual data sensitivity, normal internal audit expectations).
Which governance activity or artifact is the MOST appropriate addition for Project B to preserve HITL and clear accountability?
Best answer: B
Explanation: Project B introduces automated execution of a high-impact decision (issuing refunds), so governance must explicitly define who is accountable and where humans must approve or intervene. A documented HITL approval workflow, with decision rights and logging, creates auditable evidence of control and supports safe escalation or rollback if the agent misbehaves.
Topic: Module 5: Principles for Responsible and Trustworthy AI
You are in the Activation stage of a GenAI project that drafts outbound customer emails for a retail bank. The solution is rated High risk because it influences customer decisions, and Compliance requires HITL approval before any message is sent. Internal Audit also requires auditability of who approved what and why. Marketing wants the pilot live in 10 business days, and the team cannot change the base model—only prompts and templates.
What is the BEST next governance action to increase stakeholder trust without unnecessarily slowing delivery?
Best answer: C
Explanation: The fastest way to build trust here is to make human accountability and audit evidence explicit before the pilot. Updating the AI Assistance Plan and implementing HITL approval plus decision logging directly addresses Compliance and Audit needs while staying within the constraint that only prompts/templates can be changed. This supports Transparency and Human-Centricity without adding avoidable delays.
Topic: Module 1: Foundations of AI Project Governance (AIPGF)
You are initiating a project to deploy a GenAI-powered customer-support assistant for a regulated retail bank. The team says governance should focus only on the vendor model, because “our staff will just use GenAI to draft content and summarise calls.” You are at the end of the Foundation stage and must prepare evidence for the go/no-go into Activation.
What is the best next step?
Best answer: D
Explanation: The immediate need is to define what AI assistance is actually being used and therefore what governance must cover. In AIPGF, that scoping is captured in the AI Assistance Plan during Foundation so decision rights, controls, and evidence can be tailored before work begins in Activation. Treating only the vendor model as “in scope” would miss staff usage of tools, prompts, data handling, and AI-assisted decisions/deliverables.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A claims team is in the Activation stage for a GenAI assistant that drafts customer letters. The team has an AI Assistance Plan, but it does not state who has decision rights for: (1) approving the GenAI tool configuration, (2) approving training/prompt data sources, and (3) the go/no-go to release to a pilot. Approvals have been happening informally in chat.
At the governance gate, Internal Audit asks for evidence of who approved the tool, data, and pilot release decision.
What is the most likely near-term impact of not defining and documenting these decision rights?
Best answer: D
Explanation: Not documenting decision rights (who approves the tool, data, and go/no-go) creates immediate ambiguity over accountability. When audit or a governance gate asks for evidence, the team cannot show a clear approver or decision record. The near-term result is a blocked or delayed gate and reduced trust in the governance process due to poor auditability.
Topic: Module 6: Values, Behaviours, and Culture
A public-sector program team used GenAI to draft citizen-facing guidance. An internal audit finds: no named owner for AI-assisted content, no HITL approval record before publication, and no decision log showing who accepted key risks. Several staff admit using personal accounts (“shadow AI”) to meet deadlines. Benefits reporting is disputed because baseline assumptions were never approved, and stakeholders are pushing back on trust and responsibility.
What is the most likely underlying cause of this governance breakdown?
Best answer: A
Explanation: The symptoms point to Accountability failure: no clear owner, no documented HITL approvals, and no traceable acceptance of AI-related decisions. When decision rights and approvals are not assigned and evidenced, teams will improvise (including shadow AI) and benefits/risk positions become contestable. The most likely root cause is missing ownership and decision documentation for AI-assisted work.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A retail bank is delivering a GenAI-assisted “agent co-pilot” for its contact centre. The team wants to use a third-party SaaS LLM and needs access to a dataset containing customer PII and call transcripts. The delivery timeline is tight, but internal audit will review the project’s tool onboarding and data-access decisions.
Which statement about who should approve the AI tool and data access is INCORRECT?
Best answer: D
Explanation: In AIPGF, approvals for AI tools and sensitive data access must sit with clearly accountable roles (e.g., security/technology authorities for tool onboarding and data owners/privacy for data use). This preserves auditability and enforces decision rights appropriate to risk. Allowing the delivery team to self-approve and connect to production PII is a governance anti-pattern.
Topic: Module 3: Framework Structure and Controls
You are conducting an Assurance review for a GenAI-assisted claims triage project preparing for an Activation go-live decision. Findings: several analysts used personal GenAI accounts to draft triage rationales (not in the AI Assistance Plan), HITL approval is inconsistently captured, evidence for data/provenance and prompt versions is missing, and the benefits tracker shows a shortfall versus the agreed baseline. The product owner challenges the findings as “documentation overhead.”
What is the most likely underlying cause of this governance breakdown?
Best answer: A
Explanation: The pattern points to a setup failure: assurance requirements (what evidence is needed) and decision rights (who must approve AI-assisted outputs) were not clearly established and socialised early. When that Foundation work is missing, people treat controls as optional “overhead,” use unapproved tools, and produce inconsistent HITL and evidence trails. The benefits shortfall then becomes harder to explain and correct credibly.