APMG AIPGF Practitioner Practice Test

Practice APMG AIPGF Practitioner with free sample questions, timed mock exams, and detailed explanations for governance, delivery, and change decisions.

AIPGF Practitioner (AIPGF-P) is APMG International’s scenario-based certification for applying the AI Project Governance Framework in realistic project situations. If you are looking for AIPGF Practitioner sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on web, then continue on iOS or Android with the same account.

Interactive Practice Center

Start a practice session for APMG AI Project Governance Framework (AIPGF) Practitioner below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.

Use it on iPhone or Android too: get PM Mastery on the App Store or PM Mastery on Google Play and sign in with the same account you use on web. The same subscription works across web and mobile.

What this AIPGF Practitioner practice page gives you

  • A direct path into the PM Mastery simulator for AIPGF Practitioner.
  • Scenario-based drills that emphasize applied governance judgment, not just recall.
  • Mixed sets and timed practice built around practitioner-style decision making.
  • Detailed explanations that show why the strongest governance response is best.
  • A clear web-to-mobile continuation path with the same account.

AIPGF Practitioner exam snapshot

  • Vendor: APMG International
  • Official exam name: APMG AI Project Governance Framework (AIPGF) Practitioner
  • Exam code: AIPGF-P
  • Questions: 40
  • Time limit: 120 minutes
  • Recommended pace: about 3 minutes per question (120 minutes ÷ 40 questions)

Compared with Foundation, Practitioner rewards slower, more deliberate scenario reading. Strong performance usually comes from identifying the missing control, role responsibility, lifecycle checkpoint, or assurance action before you look at the answer choices.

Topic coverage for AIPGF Practitioner practice

Each topic is listed with its exam weight and estimated question count:

  • Module 1: Foundations of AI Project Governance (AIPGF), 12%, about 5 questions
  • Module 2: AI in Projects and Organizations (Context), 12%, about 5 questions
  • Module 3: Framework Structure and Controls, 13%, about 5 questions
  • Module 4: Roles, Responsibilities, and Accountabilities, 13%, about 5 questions
  • Module 5: Principles for Responsible and Trustworthy AI, 13%, about 5 questions
  • Module 6: Values, Behaviours, and Culture, 12%, about 5 questions
  • Module 7: Lifecycle Governance (Initiate to Operate), 13%, about 5 questions
  • Module 8: Assurance, Metrics, and Continuous Improvement, 12%, about 5 questions

How to use the AIPGF Practitioner simulator efficiently

  1. Start with scenario-based drills focused on one module at a time.
  2. Review every explanation until you can defend why the best answer is the best governance action, not just why it looks familiar.
  3. Move into mixed sets once you are comfortable switching between lifecycle, responsibility, control, and assurance decisions.
  4. Finish with full timed runs to rehearse long-form scenario pacing and answer discipline.

Free preview vs premium

  • Free preview: a smaller set on web so you can validate the scenario style and explanation depth.
  • Premium: the full AIPGF Practitioner practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 AIPGF Practitioner sample questions with detailed explanations

These sample questions cover multiple blueprint areas for AIPGF Practitioner. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.

Question 1

Topic: Module 4: Roles, Responsibilities, and Accountabilities

During the Foundation stage, your organization is procuring a third-party GenAI tool to draft customer-support replies. The tool will process customer PII, and the business wants fast deployment with human-in-the-loop (HITL) review by agents. Internal audit requires contractual evidence of controls, service expectations, incident handling, and vendor transparency.

Which procurement requirement should the team AVOID?

  • A. Mandate incident notification timelines and joint post-incident review
  • B. Accept “as-is” black-box service; no audit, documentation, or change notice
  • C. Contractually require data segregation, retention limits, and approved subprocessors
  • D. Define SLAs for availability, latency, and support escalation

Best answer: B

Explanation: Procurement for AI tools must build in governance: enforceable controls, measurable service levels, and clear incident obligations, plus transparency to support assurance and accountability. Accepting a black-box “as-is” service with no audit, documentation, or change notice undermines transparency and makes it difficult to manage risk and demonstrate appropriate oversight.


Question 2

Topic: Module 3: Framework Structure and Controls

You are the assurance lead for a GenAI “email reply assistant” project preparing for an Activation-to-Evaluation go/no-go gate. The sponsor asks for a plain-language update to send to the steering committee.

Exhibit: Assurance gate note (excerpt)

Gate: Go/No-Go (Evaluation entry)
Overall: Conditional NO-GO
Finding F-03 (High): No documented HITL for customer-impacting replies
Evidence gap: AI Assistance Plan missing approval + decision log not updated
Required actions: Define HITL roles/thresholds; update AI Assistance Plan; record decisions
Owner: Product Owner + Service Ops | Due: before pilot launch

Which stakeholder communication best reflects the assurance finding and required actions in clear project language?

  • A. Report that the model’s transformer architecture lacks guardrails, so it is unsafe
  • B. State that assurance is ongoing and ask stakeholders to be patient for updates
  • C. Proceed with the pilot because issues are documentation-only and can be fixed later
  • D. Recommend pausing the pilot until HITL is defined and evidence is approved

Best answer: D

Explanation: The exhibit states a conditional no-go, a high-severity control gap (no HITL for customer-impacting replies), and specific evidence gaps. The most effective stakeholder message converts that into a clear governance decision (pause/hold) and the concrete actions, owners, and timing needed to requalify for the gate. This is assurance reporting in project language: decision, impact, actions, and next checkpoint.


Question 3

Topic: Module 3: Framework Structure and Controls

You are the AIPGF assurance lead for a GenAI-assisted claims triage tool approaching go-live (risk tier: High). In the Evaluation stage, your assurance review finds: incomplete AI Assistance Plan, inconsistent decision log for model/prompt changes, HITL override thresholds not evidenced, and limited transparency wording in user-facing guidance. The product owner wants a simple “pass/fail” message today to keep the launch date, but Internal Audit requires clear evidence and actions.

What is the BEST way to communicate the assurance findings and required actions to stakeholders?

  • A. Send one-page assurance brief: impacts, actions, owners, go/no-go.
  • B. Share the full evidence pack and let stakeholders interpret it.
  • C. Escalate to pause delivery until all documentation is perfect.
  • D. Approve launch and capture findings as backlog items for later.

Best answer: A

Explanation: Use project language that connects assurance evidence to delivery impact, decision rights, and concrete next steps. A short assurance brief with a clear status, required actions, named owners, and explicit go/no-go criteria supports Transparency while enabling efficient decision-making. It also preserves auditability by referencing the underlying evidence without overwhelming stakeholders.
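
AIPGF defines the assurance brief as a document, not software, but if a team wanted to standardize its one-page briefs as structured records, a minimal sketch could look like the Python below. All field names here are illustrative assumptions, not AIPGF terminology.

Sketch (Python, illustrative only):

from dataclasses import dataclass, field

@dataclass
class AssuranceBrief:
    """One-page assurance brief for a governance gate (field names are assumptions)."""
    gate: str                    # which go/no-go gate this brief supports
    status: str                  # e.g. "Conditional NO-GO"
    findings: list[str]          # plain-language findings and delivery impact
    required_actions: list[str]  # concrete actions needed to requalify
    owners: list[str]            # named accountable owners for the actions
    decision_needed: str         # the explicit ask to stakeholders
    evidence_refs: list[str] = field(default_factory=list)  # pointers to the full evidence pack

brief = AssuranceBrief(
    gate="Activation go-live (go/no-go)",
    status="Conditional NO-GO",
    findings=["HITL override thresholds not evidenced for customer-facing advice"],
    required_actions=["Define HITL roles and thresholds", "Update the AI Assistance Plan"],
    owners=["Product Owner", "Service Ops"],
    decision_needed="Hold the launch until evidence is approved at the next checkpoint",
    evidence_refs=["assurance-evidence-pack-v3"],
)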


Question 4

Topic: Module 3: Framework Structure and Controls

A marketing operations project is using GenAI to draft outbound customer emails. At the end of the Foundation stage, the sponsor wants to move into Activation and has scheduled an internal assurance review in two weeks. To save time, the team skips the AIPGF AI Assistance Plan and keeps no formal record of where AI was used, who approved outputs, or what prompts/guardrails were applied.

What is the most likely near-term impact of this omission?

  • A. Benefits realization declines next year due to reduced adaptability
  • B. Customer complaints spike because the model will drift quickly
  • C. The project is automatically non-compliant with all privacy laws
  • D. Stage-gate approval is delayed due to weak audit evidence

Best answer: D

Explanation: Skipping the AI Assistance Plan removes the project’s primary, stage-appropriate evidence of how AI use is controlled and approved. In AIPGF terms, it undermines Transparency and weakens assurance at the Foundation-to-Activation gate. The immediate consequence is that reviewers cannot verify decision rights, HITL checks, and control operation, so progress is likely paused until documentation exists.


Question 5

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A claims-processing project has an approved AI Assistance Plan for using GenAI to draft internal claim summaries (HITL: mandatory human review). During Activation, the product owner proposes a change: allow GenAI to draft customer denial letters and send them with only spot-check review to reduce cycle time.

As the project manager, what is the best next step to plan stakeholder engagement and communications for this AI usage change?

  • A. Update the AI Assistance Plan after deployment to reflect the new use
  • B. Announce the change to operations and begin rollout next sprint
  • C. Convene a governance decision meeting to agree approvals and communications
  • D. Run the change quietly and brief stakeholders after initial monitoring results

Best answer: C

Explanation: Moving from internal drafting to external customer communications is a significant AI usage decision that changes risk, accountability, and required transparency. The next step is to bring the right stakeholders together under agreed decision rights to review the proposed change, confirm the HITL approach, and plan who needs to be informed and how. This ensures the AI Assistance Plan and supporting logs are updated before any rollout.


Question 6

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A retail bank is in the Activation stage of a GenAI “agent assist” tool that drafts responses for call-center staff. Constraints: the project is rated high-risk, a regulator may request evidence of controls, HITL is mandatory (agents must approve every message), and go-live is in 6 weeks. The delivery team proposes that the product owner (who is also writing prompts and acceptance tests) should sign off the readiness gate to “save time.”

What is the BEST next governance action?

  • A. Allow the product owner to self-approve if HITL is enabled in the workflow
  • B. Ask the GenAI vendor to certify readiness and provide the audit pack
  • C. Assign an independent assurance role to approve the readiness gate and require delivery evidence
  • D. Defer formal assurance until the Evaluation stage to avoid delaying go-live

Best answer: C

Explanation: High-risk AI in Activation needs independent oversight at the governance gate, especially when auditability is required. The delivery team can move fast, but assurance/oversight approval should sit with a separate role that validates the evidence (e.g., HITL effectiveness, decision logging, and test results). This maintains clear decision rights without compromising time-to-deliver.


Question 7

Topic: Module 2: AI in Projects and Organizations (Context)

A project team wants to speed up stakeholder communications. They propose an AI feature where a user pastes meeting notes and the tool produces a well-structured draft email in the organization’s preferred tone, which the project manager then edits and approves before sending.

Which AI discipline best describes this capability at a non-technical level?

  • A. Machine learning (predictive/classification)
  • B. Natural language processing (NLP)
  • C. Generative AI
  • D. Robotic process automation (RPA)

Best answer: C

Explanation: The described feature is primarily about producing new, human-readable text (a draft email) from meeting notes and style guidance. That is the hallmark of generative AI: content creation with a human reviewer (HITL) for final approval. The other disciplines may support parts of the workflow but are not the best overall label for the capability.


Question 8

Topic: Module 3: Framework Structure and Controls

A public-sector PMO is using an approved internal GenAI assistant to draft meeting minutes and summarize non-sensitive, already-anonymized project risks. The summaries are advisory only (no automated decisions), and stakeholders accept AI assistance.

However, an internal audit has required the team to demonstrate strict traceability: what AI was used, where, by whom, what was accepted/edited, and who approved AI-assisted outputs at each gate across the life cycle.

Which AIPGF action best addresses this need?

  • A. Implement an AI Assistance Plan with mandatory decision logging and evidence retention at each gate
  • B. Run stakeholder transparency workshops to rebuild trust in AI-assisted delivery
  • C. Apply data minimisation and redaction rules before any prompts are submitted
  • D. Add a HITL approval step only for high-impact business decisions

Best answer: A

Explanation: The dominant constraint is strict auditability, which requires demonstrable records of AI use and human accountability across the full project life cycle. An AI Assistance Plan plus decision logs and evidence retention provide a consistent, reviewable trail of prompts/outputs, edits, approvals, and decision rights at governance gates. This supports ethical and effective AI assistance while keeping delivery efficient through standardized controls.


Question 9

Topic: Module 3: Framework Structure and Controls

In AIPGF, which set represents the minimum governance artifacts typically expected to demonstrate control over an AI-assisted deliverable (who used AI, what decisions were made, and what risks were managed)?

  • A. Model card, prompt library, and training data sheet
  • B. Business case, benefits register, and KPI dashboard
  • C. Project charter, communications plan, and RAID log
  • D. AI Assistance Plan, decision log, and risk log

Best answer: D

Explanation: AIPGF expects a minimum evidence set that makes AI use auditable and decisions defensible. The AI Assistance Plan records how AI is used and controlled, the decision log captures accountable judgments and approvals, and the risk log shows identified AI risks and mitigations. Together they demonstrate baseline governance control without requiring extensive technical documentation.
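
None of these artifacts are code, but teams sometimes keep the decision log as structured records rather than free text. A minimal sketch of one entry, with hypothetical field names that are not AIPGF terms, might look like this:

Sketch (Python, illustrative only):

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One auditable decision record (field names are assumptions)."""
    decision_id: str   # stable reference for audit trails, e.g. "D-014"
    stage: str         # AIPGF life cycle stage, e.g. "Activation"
    description: str   # what was decided, in plain language
    approver: str      # accountable role exercising the decision right
    rationale: str     # why, including any risks explicitly accepted
    recorded_at: datetime

entry = DecisionLogEntry(
    decision_id="D-014",
    stage="Activation",
    description="Approved prompt template v2 for AI-assisted claim summaries",
    approver="Product Owner",
    rationale="Passed acceptance tests; mandatory HITL review remains in place",
    recorded_at=datetime.now(timezone.utc),
)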


Question 10

Topic: Module 7: Lifecycle Governance (Initiate to Operate)

In the APMG AI Project Governance Framework (AIPGF), which core term best describes deliberately scaling project scope, evidence requirements, and review depth to match the AI use case’s risk, complexity, and organizational maturity—so delivery can move fast where appropriate without weakening governance?

  • A. Adaptability (principle)
  • B. Transparency (principle)
  • C. Human-in-the-Loop (HITL)
  • D. Human-Centricity (principle)

Best answer: A

Explanation: This describes tailoring governance so controls and assurance evidence are proportionate to the specific AI project context. In AIPGF, that is captured by the Adaptability principle, which supports adjusting review depth and artifacts based on risk and maturity rather than applying one fixed governance package.


Question 11

Topic: Module 3: Framework Structure and Controls

A product team is starting the Activation stage and must complete an AI Assistance Plan for using GenAI to accelerate customer-facing FAQ updates. The sponsor says, “Let AI draft it,” and wants release in 4 weeks, but has not specified whether AI text can be published as-is or only as a draft.

You open a Clarification Questions Log. Compliance notes they cannot assign a risk tier until the intended use and reliance on outputs are defined.

What is the FIRST clarification question you should record to prevent AI-generated assumptions in the key artifacts?

  • A. What data will be used, and do we have rights to use it?
  • B. Which tasks/decisions will GenAI support, and who approves outputs?
  • C. What acceptance criteria will be used to judge GenAI draft quality?
  • D. What risk tier will this initiative be classified as?

Best answer: B

Explanation: The Clarification Questions Log should start by fixing the intended use and reliance on AI outputs, including who is accountable for approving and releasing content. That single clarification anchors the scope of AI assistance, the required HITL controls, and the governance evidence you can produce in the AI Assistance Plan. Without it, teams tend to let the AI “fill gaps” and create unapproved assumptions in requirements and controls.


Question 12

Topic: Module 1: Foundations of AI Project Governance (AIPGF)

A service desk project wants to use a “workflow bot” that reads incoming emails, queries the CRM, creates/updates tickets, and can issue customer refunds under $50 without waiting for a person. The team has only used an LLM copilot for drafting text before. You are in Foundation and must pass the governance gate before any pilot touches live customer data.

What is the best next step?

  • A. Start a limited live pilot and add controls after stabilising
  • B. Classify it as an AI agent and update the AI Assistance Plan
  • C. Set up operational monitoring dashboards for drift and incidents
  • D. Proceed directly to post-implementation evaluation and lessons learned

Best answer: B

Explanation: The described “workflow bot” is an AI agent because it can execute actions (ticket updates and refunds), not just generate text like an LLM or copilot. In Foundation, the right next step is to scope and govern that assistance in an AI Assistance Plan, including decision rights, access/permissions, HITL checkpoints, audit logging, and go/no-go approval before using live customer data.


Question 13

Topic: Module 6: Values, Behaviours, and Culture

A public-sector PMO is running a programme of three GenAI copilots. After the first project’s Activation gate, internal assurance noted that evidence of HITL decision points and AI-assisted content approvals was inconsistent across teams, causing a two-day delay while records were reconstructed.

To protect delivery dates, the PMO decides to skip the programme-level retrospective/review and reuse the same governance pack unchanged for the next two projects.

What is the most likely near-term impact of this decision?

  • A. A production data breach becomes inevitable within the next sprint
  • B. The organisation’s innovation culture will erode over several years
  • C. The next assurance gate is likely delayed by repeated evidence gaps
  • D. Regulators will issue fines within weeks due to non-compliance

Best answer: C

Explanation: Skipping the retrospective removes the main mechanism for learning from the first gate delay and refining governance practices across the programme. In the near term, the same inconsistent evidence and unclear accountabilities are likely to show up again at the next assurance checkpoint, reducing auditability and slowing approvals.


Question 14

Topic: Module 6: Values, Behaviours, and Culture

A customer service function has rolled out a GenAI assistant to draft responses for disputed billing queries. Within six weeks, an internal review finds:

  • no recorded HITL approval for using AI-generated text in outbound emails
  • several agents using a personal/public chatbot “to go faster”
  • evidence for fairness and tone testing limited to informal notes
  • expected handling-time benefits not materializing
  • union pushback that “customers will be misled”

In AIPGF terms, what is the most likely underlying cause of this governance breakdown?

  • A. Stakeholders are resistant to change, so adoption communications were insufficient
  • B. The HITL approver was too busy, so the sign-off step was skipped
  • C. Ethical guardrails and decision rights were never agreed and documented (e.g., no AI Assistance Plan/decision log), so teams defaulted to ad hoc use
  • D. The model’s performance is inadequate, so it needs further tuning before benefits appear

Best answer: C

Explanation: The pattern points to a culture and governance design gap: ethical concerns were not surfaced and resolved through structured governance discussions with documented decisions. When decision rights, acceptable-use boundaries, and required evidence are unclear, people optimize for speed, creating shadow AI and missing HITL approvals. That, in turn, triggers pushback and weak benefit realization.


Question 15

Topic: Module 7: Lifecycle Governance (Initiate to Operate)

A bank has completed the Evaluation stage for a GenAI “agent assist” tool used by call-center staff. The post-implementation review found two recurring issues:

  • The tool sometimes produced confident but incorrect fee explanations.
  • Supervisors reduced Human-in-the-Loop (HITL) spot checks to meet handling-time targets.

The project team updated their local QA checklist and added a mandatory decision-log field for AI-assisted customer advice, then closed the project. However, they did not update organizational guidance (standard templates, the AI Assistance Plan template, or the list of approved monitoring tools).

A new, similar GenAI project starts next week and will use the organization’s standard artifacts. What is the most likely near-term impact of this omission?

  • A. The bank will be unable to realize any benefits from GenAI until it replaces the model vendor.
  • B. The new project is likely to reuse outdated templates, reducing auditability and increasing near-term risk exposure through inconsistent controls.
  • C. Stakeholder trust will collapse across all AI initiatives because the bank closed the project.
  • D. The organization will inevitably face regulatory penalties for past customer interactions.

Best answer: B

Explanation: Evaluation outcomes should be converted into updated organizational policies, templates, and approved tools so future projects inherit effective controls and evidence expectations. If that learning is not embedded, the next project will follow outdated standards and is likely to recreate known gaps (for example, HITL sampling and advice decision logging). This immediately increases risk exposure and weakens audit readiness.


Question 16

Topic: Module 7: Lifecycle Governance (Initiate to Operate)

A bank has run a 12-week pilot of a GenAI-assisted customer service chatbot. As the project moves into closure, the project manager updates the Benefits Tracker with actual call-deflection and handling-time data, compares results to the baseline and forecast, and collects structured feedback from contact-center staff and customer representatives on service quality and trust. The results are summarized as evidence for a sustain/scale decision and for lessons learned.

This practice most directly maps to which AIPGF element?

  • A. Foundation stage intent: define benefits and success measures
  • B. Evaluation stage intent: validate benefits/outcomes using evidence
  • C. Activation stage intent: implement the solution and controls
  • D. Transparency principle: explain how the model makes decisions

Best answer: B

Explanation: The activity described is focused on verifying realized outcomes and benefits at closure using objective measures and stakeholder feedback. That is the core intent of the Evaluation stage: confirm whether expected value was achieved, understand variances and impacts, and use the evidence to decide whether to sustain, scale, or retire while capturing learning.


Question 17

Topic: Module 1: Foundations of AI Project Governance (AIPGF)

A project uses a standard life cycle: initiate → plan → deliver → close. The PMO asks you to align AIPGF stages to these phases.

Exhibit: Gate checklist excerpt (must be passed before build starts)

Gate: G1 "AI governance readiness"
Evidence: AI use-case & success metrics agreed
Evidence: Risk tier assigned; decision rights/RACI confirmed
Evidence: AI Assistance Plan draft (intended use, limits, HITL)
Evidence: Transparency plan for user/customer communications
Go/No-Go: "Proceed into build and deployment"

Which mapping of AIPGF life cycle stages to the project life cycle best fits this gate placement?

  • A. Foundation: initiate/plan; Activation: deliver; Evaluation: close
  • B. Foundation: plan; Activation: deliver/close; Evaluation: initiate
  • C. Foundation: initiate/plan/deliver; Activation: close; Evaluation: ongoing only
  • D. Foundation: initiate; Activation: plan/deliver; Evaluation: close

Best answer: A

Explanation: The exhibit describes a pre-build governance readiness gate: agreeing success metrics, assigning risk tier and decision rights, drafting an AI Assistance Plan including HITL, and defining transparency communications. These are Foundation activities that should be completed during initiate/plan, before the project enters delivery work. Activation then aligns with deliver, and Evaluation aligns with close/post-implementation review.


Question 18

Topic: Module 1: Foundations of AI Project Governance (AIPGF)

Two teams are starting GenAI initiatives using the same internal LLM platform and the same agile delivery method.

  • Project A: an AI assistant suggests wording for internal marketing emails; a human always edits before sending.
  • Project B: an AI agent can approve and execute customer refunds up to $500 in the billing system; the product owner wants “straight-through processing” to reduce call volume.

Other conditions are the same (no new vendor tool, no unusual data sensitivity, normal internal audit expectations).

Which governance activity or artifact is the MOST appropriate addition for Project B to preserve HITL and clear accountability?

  • A. Perform vendor assurance review and contract clauses for AI tool
  • B. Define a HITL approval workflow and decision-rights log for refunds
  • C. Run a detailed data privacy assessment and minimisation plan
  • D. Publish a public-facing model card and stakeholder FAQ

Best answer: B

Explanation: Project B introduces automated execution of a high-impact decision (issuing refunds), so governance must explicitly define who is accountable and where humans must approve or intervene. A documented HITL approval workflow, with decision rights and logging, creates auditable evidence of control and supports safe escalation or rollback if the agent misbehaves.
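
To make the idea concrete, here is a minimal sketch of such a threshold-gated HITL workflow in Python. The delegated limit, names, and logging shape are all assumptions for illustration; AIPGF specifies the control and the decision rights, not any implementation.

Sketch (Python, illustrative only):

from dataclasses import dataclass
from datetime import datetime, timezone

HITL_THRESHOLD = 100.00  # hypothetical delegated limit; refunds at/above it need a human

@dataclass
class RefundDecision:
    """Auditable record of who approved a refund (illustrative only)."""
    ticket_id: str
    amount: float
    approved_by: str   # "ai-agent" or a named human approver role
    recorded_at: datetime

def route_refund(ticket_id: str, amount: float, log: list) -> str:
    """Route a refund through the HITL gate and append an audit record."""
    approver = "human-supervisor" if amount >= HITL_THRESHOLD else "ai-agent"
    log.append(RefundDecision(ticket_id, amount, approver, datetime.now(timezone.utc)))
    return approver

audit_log: list = []
print(route_refund("T-1001", 250.00, audit_log))  # -> human-supervisor (HITL required)
print(route_refund("T-1002", 40.00, audit_log))   # -> ai-agent (within delegated authority)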


Question 19

Topic: Module 5: Principles for Responsible and Trustworthy AI

You are in the Activation stage of a GenAI project that drafts outbound customer emails for a retail bank. The solution is rated High risk because it influences customer decisions, and Compliance requires HITL approval before any message is sent. Internal Audit also requires auditability of who approved what and why. Marketing wants the pilot live in 10 business days, and the team cannot change the base model—only prompts and templates.

What is the BEST next governance action to increase stakeholder trust without unnecessarily slowing delivery?

  • A. Pause launch for full third-party audit and model retraining.
  • B. Provide stakeholders all prompt transcripts, including customer data, for transparency.
  • C. Update AI Assistance Plan; enforce HITL approval and decision logging.
  • D. Go live now; use complaints to adjust prompts later.

Best answer: C

Explanation: The fastest way to build trust here is to make human accountability and audit evidence explicit before the pilot. Updating the AI Assistance Plan and implementing HITL approval plus decision logging directly addresses Compliance and Audit needs while staying within the constraint that only prompts/templates can be changed. This supports Transparency and Human-Centricity without adding avoidable delays.


Question 20

Topic: Module 1: Foundations of AI Project Governance (AIPGF)

You are initiating a project to deploy a GenAI-powered customer-support assistant for a regulated retail bank. The team says governance should focus only on the vendor model, because “our staff will just use GenAI to draft content and summarise calls.” You are at the end of the Foundation stage and must prepare evidence for the go/no-go into Activation.

What is the best next step?

  • A. Begin live monitoring KPIs and incident response runbooks
  • B. Run an end-to-end evaluation and benefits realisation review
  • C. Approve tool use and start prompt engineering in delivery
  • D. Complete the AI Assistance Plan scope and inventory

Best answer: D

Explanation: The immediate need is to define what AI assistance is actually being used and therefore what governance must cover. In AIPGF, that scoping is captured in the AI Assistance Plan during Foundation so decision rights, controls, and evidence can be tailored before work begins in Activation. Treating only the vendor model as “in scope” would miss staff usage of tools, prompts, data handling, and AI-assisted decisions/deliverables.


Question 21

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A claims team is in the Activation stage for a GenAI assistant that drafts customer letters. The team has an AI Assistance Plan, but it does not state who has decision rights for: (1) approving the GenAI tool configuration, (2) approving training/prompt data sources, and (3) the go/no-go to release to a pilot. Approvals have been happening informally in chat.

At the governance gate, Internal Audit asks for evidence of who approved the tool, data, and pilot release decision.

What is the most likely near-term impact of not defining and documenting these decision rights?

  • A. A regulatory penalty is issued because the model is already non-compliant in production
  • B. Customer trust erodes primarily due to gradual model drift over several months
  • C. Benefits realization improves because fewer people are involved in approvals
  • D. The gate review is paused because approval ownership and an auditable decision trail cannot be evidenced

Best answer: D

Explanation: Not documenting decision rights (who approves the tool, data, and go/no-go) creates immediate ambiguity over accountability. When audit or a governance gate asks for evidence, the team cannot show a clear approver or decision record. The near-term result is a blocked or delayed gate and reduced trust in the governance process due to poor auditability.


Question 22

Topic: Module 6: Values, Behaviours, and Culture

A public-sector program team used GenAI to draft citizen-facing guidance. An internal audit finds: no named owner for AI-assisted content, no HITL approval record before publication, and no decision log showing who accepted key risks. Several staff admit using personal accounts (“shadow AI”) to meet deadlines. Benefits reporting is disputed because baseline assumptions were never approved, and stakeholders are pushing back on trust and responsibility.

What is the most likely underlying cause of this governance breakdown?

  • A. Accountabilities and decision rights for AI use and approvals were not defined and recorded
  • B. The team skipped the Evaluation stage and focused only on delivery speed
  • C. Stakeholder resistance was driven primarily by a lack of technical model transparency
  • D. Shadow AI occurred mainly due to insufficient tool access and time pressure

Best answer: A

Explanation: The symptoms point to Accountability failure: no clear owner, no documented HITL approvals, and no traceable acceptance of AI-related decisions. When decision rights and approvals are not assigned and evidenced, teams will improvise (including shadow AI) and benefits/risk positions become contestable. The most likely root cause is missing ownership and decision documentation for AI-assisted work.


Question 23

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A retail bank is delivering a GenAI-assisted “agent co-pilot” for its contact centre. The team wants to use a third-party SaaS LLM and needs access to a dataset containing customer PII and call transcripts. The delivery timeline is tight, but internal audit will review the project’s tool onboarding and data-access decisions.

Which statement about who should approve the AI tool and data access is INCORRECT?

  • A. Use the AI governance gate to confirm risk tier, required controls, and decision rights
  • B. Have the data owner (with privacy input) approve data access scope and permitted uses
  • C. Have the organisation’s security/technology authority approve the GenAI tool onboarding
  • D. Let the delivery team self-approve the GenAI tool and connect it to production data

Best answer: D

Explanation: In AIPGF, approvals for AI tools and sensitive data access must sit with clearly accountable roles (e.g., security/technology authorities for tool onboarding and data owners/privacy for data use). This preserves auditability and enforces decision rights appropriate to risk. Allowing the delivery team to self-approve and connect to production PII is a governance anti-pattern.


Question 24

Topic: Module 3: Framework Structure and Controls

You are conducting an Assurance review for a GenAI-assisted claims triage project preparing for an Activation go-live decision. Findings: several analysts used personal GenAI accounts to draft triage rationales (not in the AI Assistance Plan), HITL approval is inconsistently captured, evidence for data/provenance and prompt versions is missing, and the benefits tracker shows a shortfall versus the agreed baseline. The product owner challenges the findings as “documentation overhead.”

What is the most likely underlying cause of this governance breakdown?

  • A. Assurance expectations and decision rights for AI-assisted work were never agreed and communicated in Foundation
  • B. The team is experiencing schedule pressure and is skipping required governance steps
  • C. The GenAI solution lacks technical logging features to capture prompts and approvals
  • D. The model’s limited accuracy is the primary driver of weak evidence and stakeholder pushback

Best answer: A

Explanation: The pattern points to a setup failure: assurance requirements (what evidence is needed) and decision rights (who must approve AI-assisted outputs) were not clearly established and socialised early. When that Foundation work is missing, people treat controls as optional “overhead,” use unapproved tools, and produce inconsistent HITL and evidence trails. The benefits shortfall then becomes harder to explain and correct credibly.

Revised on Sunday, April 26, 2026