Free AIPGF Practitioner Full-Length Practice Exam: 40 Questions

Try 40 free AIPGF Practitioner questions across the exam domains, with answers and explanations, then continue in PM Mastery.

This free full-length AIPGF Practitioner practice exam includes 40 original PM Mastery questions across the exam domains.

The questions are original PM Mastery practice questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.

Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some exam sponsors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

For concept review before or after this set, use the AIPGF Practitioner guide on PMExams.com.

How to run this diagnostic

Set a 40-minute timer and answer the 40 questions as applied governance scenarios. Before opening explanations, decide which control, role, gate, evidence gap, or stakeholder action the scenario is really testing.

Suggested timing checkpoints:

Question rangeTarget elapsed time
1-1010 minutes
11-2020 minutes
21-3030 minutes
31-4040 minutes

Exam snapshot

ItemDetail
IssuerAPMG International
Exam routeAIPGF Practitioner
Official exam nameAPMG AI Project Governance Framework (AIPGF) Practitioner
Full-length set on this page40 questions
Exam time120 minutes
Topic areas represented8

Full-length exam mix

TopicApproximate official weightQuestions used
Foundations of AI Project Governance (AIPGF)12%5
AI in Projects and Organizations (Context)12%5
Framework Structure and Controls13%5
Roles, Responsibilities, and Accountabilities13%5
Principles for Responsible and Trustworthy AI13%5
Values, Behaviours, and Culture12%5
Lifecycle Governance (Initiate to Operate)13%5
Assurance, Metrics, and Continuous Improvement12%5

Practice questions

Questions 1-25

Question 1

Topic: Values, Behaviours, and Culture

A project team is rolling out a GenAI assistant to help case workers draft responses to customer complaints. The pilot has strong executive sponsorship, and the approved tool is available, but governance is breaking down:

  • HITL sign-off is frequently skipped because assigned reviewers “don’t want their name on it”
  • Staff copy prompts into personal accounts to avoid the formal workflow
  • The AI Assistance Plan and decision log are mostly empty, so evidence for assurance is weak
  • Benefits are below forecast because outputs are heavily reworked and stakeholders are challenging the approach

What is the most likely underlying source of resistance driving these behaviors?

  • A. The team has not produced enough assurance artifacts, so adoption is stalling
  • B. Unclear accountability and decision rights for AI-assisted outputs, creating fear of personal liability
  • C. The approved GenAI tool is not accessible, forcing staff to use personal accounts
  • D. The pilot is under-delivering benefits, so stakeholders are pushing back

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: The consistent pattern is avoidance of formal review and traceability because reviewers fear being held responsible for AI-influenced decisions. In AIPGF terms, unclear decision rights and accountabilities undermine HITL, reduce transparency, and drive shadow AI usage as a self-protection behavior.

A common source of resistance to AI adoption is perceived personal risk: people are asked to “own” AI-assisted outputs without clear, supported decision rights (who approves what, what “good” looks like, and what protection/escalation exists). In the scenario, reviewers explicitly avoid having their name attached, which directly explains skipped HITL sign-off, weak evidence (empty logs/plans), and shadow AI use to bypass traceable governance. This is a culture-and-governance resistance pattern, not primarily a technology availability issue.

Effective mitigation is to clarify accountabilities and decision boundaries for HITL, define acceptance criteria and escalation paths, and reinforce Transparency (documented decisions/evidence) so staff can adopt the tool without feeling exposed.

When people feel exposed and lack clear decision rights, they avoid HITL sign-off and shift to shadow AI to reduce perceived blame.


Question 2

Topic: Roles, Responsibilities, and Accountabilities

You are delivering a GenAI-assisted customer-support content project. A call-center stakeholder group has raised concerns about “AI talking to customers.” Your sponsor asks whether anything needs to change before the next release.

Exhibit: AI Assistance Plan (v0.8) — delivery excerpt

Use: GenAI drafts customer emails + knowledge-base articles
HITL: Agent reviews & approves before sending/publishing
Benefit target: -30% authoring time; consistent tone
Red flag observed (last 2 sprints): 3 factual errors reached QA;
1 near-miss email would have mis-stated refund policy
Disclosure: External comms currently do not mention AI assistance
Stakeholders: Call-center leads sensitive to "AI answering customers"

What is the best next governance action supported by the exhibit to maintain stakeholder trust during delivery?

  • A. Send all stakeholders the prompt library and QA defect details so they can independently assess the risk
  • B. Keep the near-miss internal and only report benefits to stakeholders to prevent unnecessary concern
  • C. Issue a stakeholder update that discloses AI use, quantifies benefits, and explains the red flags and HITL controls, then record agreed messaging and escalation in the decision log
  • D. Stop using GenAI until the next governance gate to avoid reputational risk

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The exhibit shows real delivery benefits alongside a near-miss and a current disclosure gap for external communications. To maintain trust, governance should proactively and clearly communicate where AI is used, what value it is providing, what went wrong or almost went wrong, and what HITL controls prevent customer impact. Capturing the agreed narrative and escalation path preserves accountability and auditability.

During delivery (Activation), trust is maintained by communicating AI usage in a balanced, stakeholder-appropriate way: benefits, limitations, and the controls that keep humans accountable for outcomes. Here, the near-miss and “no disclosure” line are red flags because stakeholders are already sensitive to AI in customer contexts.

A good governance response is to:

  • Disclose where AI is used (drafting) and where it is not (humans approve)
  • Share evidence of benefits (progress vs the -30% target)
  • Acknowledge incidents/near-misses and the strengthened checks (HITL, QA gates)
  • Agree the external messaging, decision rights, and escalation triggers, and record them

This is preferable to either hiding issues (erodes trust) or flooding stakeholders with overly technical artifacts that don’t address their core concerns.

This applies Transparency and Human-Centricity by proactively communicating both value and limitations (near-miss) plus the human oversight controls that manage risk and protect trust.


Question 3

Topic: Foundations of AI Project Governance (AIPGF)

A retail bank based in the EU wants to use a GenAI-enabled service to pre-screen job applicants and produce a ranked shortlist for hiring managers. The project team has identified a vendor tool and drafted a high-level workflow, and leadership is pushing to start a pilot in 6 weeks.

As the AIPGF Practitioner, what is the best next step to align governance expectations where regulation (e.g., the EU AI Act) may apply?

  • A. Start the pilot and capture issues in an evaluation log for later remediation
  • B. Finalize the vendor contract first, then tailor governance controls once the tool is fixed
  • C. Implement monitoring dashboards now and plan regulatory documentation after go-live
  • D. Perform an EU AI Act risk classification and set the required documentation and oversight gates

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Because the use case is hiring-related in the EU, regulatory expectations may be triggered and must be addressed during Foundation before Activation begins. The next step is to classify the AI system’s risk and translate that into governance requirements (documentation, decision rights, and oversight such as HITL) and explicit go/no-go gates. This prevents a pilot from proceeding without prerequisite evidence and approvals.

In AIPGF, regulations influence governance expectations most directly at the point you define the governance approach for the use case. For an EU hiring shortlisting system, the appropriate next step in Foundation is to determine the likely regulatory risk classification (e.g., whether it is treated as “high-risk”) and then tailor governance accordingly before any pilot.

This typically means:

  • Record the classification rationale in the decision log and AI Assistance Plan
  • Define required evidence (documentation pack, traceability, data and bias considerations)
  • Assign accountable owners and specify HITL oversight and escalation paths
  • Set explicit gate criteria for moving into Activation

Starting a pilot or building monitoring first is premature if regulatory-driven prerequisites (evidence and oversight) have not been agreed and approved.

Risk classification drives whether enhanced evidence, HITL oversight, and formal approvals must be built into the governance plan before activation.


Question 4

Topic: Values, Behaviours, and Culture

A retail bank is in the Activation stage for a GenAI assistant that drafts responses to customer complaints. The project is on a tight deadline, but Internal Audit has stated it will review the solution 6 months after go-live and expects clear evidence of who approved AI-related changes and decisions. Which action should the project team AVOID to uphold Accountability?

  • A. Define RACI decision rights and escalation for AI-related issues
  • B. Maintain a decision log for prompt/model changes and approvals
  • C. Assign a named owner for AI outputs and final sign-off
  • D. Keep AI decisions in informal chat threads to save time

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Accountability in AIPGF requires clear ownership, decision rights, and documented approvals for AI-assisted deliverables and changes. With an upcoming audit, the team must be able to evidence who decided what, when, and on what basis. Relying on informal chats undermines auditability and blurs responsibility.

The core concept is Accountability: AI-related decisions and approvals must have a clearly identified owner and an evidence trail that supports review, assurance, and learning. In this scenario, the audit expectation makes documentation non-negotiable for items like prompt changes, model updates, risk acceptances, and go/no-go decisions.

Good accountability practices include:

  • Naming accountable roles for AI outputs and approvals (decision rights)
  • Recording decisions and approvals in a durable decision log (audit trail)
  • Defining escalation paths so issues are owned and resolved

Informal chat threads are not a controlled record and typically lack consistent approver identity, version linkage, and retention, so they weaken governance under Accountability and Transparency.

This creates no reliable audit trail or clear decision ownership for AI changes and approvals.


Question 5

Topic: Lifecycle Governance (Initiate to Operate)

A hospital is initiating a GenAI assistant to draft discharge instructions. In the Foundation stage, the team is populating the AI risk register.

For the risk “unsafe or hallucinated advice reaches patients,” the team assigns the Clinical Safety Lead to define the response, ensure controls are implemented (e.g., HITL clinician review before release), track completion evidence, and escalate if the control is not effective.

In AIPGF terms, which role responsibility does this practice most directly map to?

  • A. AI/ML Engineer
  • B. Risk Owner
  • C. Project Manager
  • D. Project Sponsor

Best answer: B

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Assigning a named individual to own a specific AI risk, define the response, ensure the HITL control is in place, and escalate when it is ineffective is the essence of risk ownership. In the Foundation stage, this makes the risk register actionable by linking each key AI risk to clear accountability and evidence of control operation.

The core governance concept here is assigning clear accountability for AI-related risks so that planned responses and controls are executed, evidenced, and escalated when needed. In AIPGF-aligned project risk management, the risk register is not just a list of risks; each significant AI risk should have a named owner with decision rights and an obligation to:

  • define the response (avoid/mitigate/transfer/accept)
  • specify controls (e.g., HITL review) and required evidence
  • monitor effectiveness and trigger escalation at gates

That set of responsibilities maps to the Risk Owner role, while other roles may support, coordinate, or approve but are not the accountable owner of the specific risk.

This is the accountability for a specific risk’s response plan and ensuring its controls are implemented and effective.


Question 6

Topic: Lifecycle Governance (Initiate to Operate)

A project team plans to use a generative AI tool to draft requirements and test cases during initiation. Before any AI use begins, the PM creates an AI Assistance Plan (intended uses, data boundaries, approvals, and HITL checkpoints) and starts a clarification-questions log and an assumptions log to make the work auditable.

What AIPGF life cycle stage intent does this practice most directly support?

  • A. Activation: execute AI-assisted work and implement controls
  • B. Evaluation: monitor performance, impacts, and incidents in operation
  • C. Operate: optimize the solution after go-live
  • D. Foundation: establish planned AI use and traceability artifacts

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Creating an AI Assistance Plan and starting supporting logs are Stage 1 (Foundation) planning activities. They set decision rights, boundaries, and HITL expectations before AI-assisted outputs are produced. The logs provide traceability for later assurance and audit needs.

Stage 1 (Foundation) is where governance is set up so AI use is intentional, controlled, and auditable before execution begins. Producing an AI Assistance Plan clarifies what AI will be used for, what data can/can’t be used, required approvals, and where humans must review or decide (HITL). Supporting logs (clarification questions and assumptions) capture what is unknown, what is being presumed, and what needs stakeholder confirmation, creating traceability that later gates can rely on. Activation focuses on doing the work under those agreed controls, and Evaluation focuses on post-build/operational monitoring and learning. The key takeaway is that these artifacts are created up front to enable controlled delivery later.

These are Stage 1 deliverables used to define how AI will be used and to capture assumptions/queries for auditability before delivery starts.


Question 7

Topic: Values, Behaviours, and Culture

A retail bank has rolled out a GenAI assistant to draft email responses for customer complaints. After 6 weeks, average handling time is down 25%, but quality checks show a small rise in inaccurate product statements and customer feedback notes a “cold” tone. Internal Audit requires evidence of ongoing oversight for AI-assisted communications. The product owner wants improvements without losing most of the efficiency gains.

What should the project team do next to best apply Continuous Improvement in line with AIPGF core values?

  • A. Disable the GenAI assistant until a full re-approval gate is completed
  • B. Maximize automation by removing human review for low-value complaints
  • C. Run a time-boxed outcomes review, update the AI Assistance Plan, and implement monitored HITL sampling
  • D. Keep current setup and remind staff they remain accountable for emails

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: Continuous Improvement in AIPGF means reviewing real usage outcomes and updating practices, not just accepting initial performance. A time-boxed review that feeds changes into the AI Assistance Plan (prompts, guidance, training, thresholds) and adds ongoing monitored HITL sampling balances human-centric quality, transparency/auditability, and delivery efficiency. This preserves most cycle-time benefits while reducing harm from inaccuracies and tone issues.

The core concept is a closed-loop governance improvement cycle: use operational evidence to refine how AI is used and controlled. Here, efficiency gains are real, but the observed harms (inaccurate statements and tone) require proportionate, documented adjustments rather than either “business as usual” or a shutdown.

A balanced Continuous Improvement action is to:

  • Review outcomes (quality findings, customer feedback, error types, root causes)
  • Update the AI Assistance Plan (approved prompts, do/don’t rules, escalation triggers, training)
  • Implement ongoing controls (HITL sampling/QA, decision logs, trend reporting) that satisfy audit expectations

The key takeaway is to adapt practices based on measured outcomes while keeping governance evidence and human accountability in place.

It uses measured outcomes to refine prompts/training and strengthens proportionate controls with auditable monitoring while retaining benefits.


Question 8

Topic: Lifecycle Governance (Initiate to Operate)

A contact-center team is in Stage 2 (Activation) piloting a GenAI assistant that drafts refund emails. Today, supervisors report several emails included customers’ full payment card numbers. You are the project manager.

Exhibit: AI incident record + plan excerpt

INC-07 (Open)  Severity: High   Potential: PCI exposure
Scope: 6 cases confirmed; unknown total in last 24h
Immediate control available: Toggle "AI draft" OFF (no system outage)
Decision rights: Service Owner may suspend; AI Gov Lead approves rollback
Comms: Notify Risk/Compliance within 4 hours; brief Contact Center lead
Evidence: Preserve prompts/outputs + config snapshot before changes

Based on the exhibit, what is the best next governance action?

  • A. Wait for the next change advisory meeting to decide whether to rollback or suspend
  • B. Proceed directly to retraining and prompt changes, then resume once outputs look correct
  • C. Keep the pilot running while the vendor investigates root cause to avoid operational disruption
  • D. Suspend the AI drafting feature now, preserve evidence, and escalate/notify per the incident plan

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: The exhibit indicates a High-severity incident with potential PCI exposure and an immediate containment control (toggle off) that can be executed without an outage. It also specifies evidence preservation before changes and explicit decision rights and notification timelines. The best action is to contain first, then escalate and communicate according to the plan.

In Activation, incident governance prioritizes rapid containment and controlled decision-making over optimization. Here, the incident is rated High with potential PCI exposure and unknown broader scope, and the plan explicitly provides an immediate control (toggle “AI draft” off). The same excerpt requires preserving evidence (prompts/outputs and configuration snapshot) before making corrective changes, and it defines who can suspend service versus who approves rollback plus time-bound notifications.

A defensible next step is:

  • Contain: suspend AI drafting immediately using the toggle
  • Preserve evidence before any tuning or rollback
  • Escalate to the AI Governance Lead and notify Risk/Compliance within 4 hours

Corrective actions (prompt/model changes) come after containment and evidence capture, and approvals follow the stated decision rights.

This follows the defined containment, evidence preservation, escalation, and time-bound communication controls for a High-severity incident.


Question 9

Topic: Values, Behaviours, and Culture

A team is building a GenAI-enabled internal HR policy Q&A assistant (no automated decisions; answers are reviewed by HR before publishing). It is classified as Low risk and must launch in 6 weeks to replace an outdated intranet.

Exhibit: Gate checklist excerpt (last 2 cycles)

Gate: Activation go/no-go
Required approvers: HR, InfoSec, Legal, Data Protection, Comms,
  IT Ops, Enterprise AI CoE, Model Risk, Internal Audit (9 total)
Evidence submitted: AI Assistance Plan v0.6, prompt log sample,
  training data summary, HITL workflow diagram
Outcome: Deferred twice for “more detail” (no new risks logged)
Cycle time per gate review: 18 business days
Team response: Started using an unapproved chatbot “temporarily”

Based on the exhibit, what is the best next governance action?

  • A. Remove the gate entirely and let the team launch; capture documentation after release
  • B. Tailor the approval path to the low-risk tier with delegated decision rights and timeboxed reviews, while keeping the minimum evidence set
  • C. Pause the project until the quarterly enterprise model risk committee can review it in depth
  • D. Add an additional gate for culture and communications readiness before any further work continues

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: The exhibit shows clear symptoms of overly heavy governance: nine approvers, repeated deferrals without new risks, and an 18-day review cycle that is driving shadow AI use. A pragmatic AIPGF response is to right-size governance to the low-risk classification by simplifying decision rights and accelerating reviews. This maintains transparency and Human-in-the-Loop assurance while reducing friction.

This is a “too heavy” governance signal: the approval burden and review latency are disproportionate to the stated low-risk profile, and the side effect is non-compliant workarounds. In AIPGF’s pragmatic sensibility, governance should be risk-based and enable delivery, not incentivize shadow AI.

A proportionate adjustment is to:

  • reduce approvers to those with direct decision rights for this risk tier
  • timebox reviews (and define what “good enough” evidence looks like)
  • keep essential artifacts for auditability (e.g., AI Assistance Plan, decision log, HITL workflow)

The key takeaway is to tailor controls to context so transparency and accountability are preserved while unnecessary friction is removed.

The exhibit shows governance is too heavy for a low-risk use case (long cycle time, repeated deferrals, shadow use), so right-sizing controls restores speed without removing accountability.


Question 10

Topic: AI in Projects and Organizations (Context)

A project team is piloting a GenAI assistant for customer service summaries. After UAT, the team finds the assistant occasionally invents policy details, creating potential customer harm. The Product Owner wants to proceed to limited go-live and “fix it in the next sprint.” Under the AIPGF decision-rights approach, which role should be the escalation point with authority to approve a risk acceptance or delay go-live?

  • A. Product Owner
  • B. Technical Lead for the GenAI configuration
  • C. AI governance decision forum (e.g., AI Governance Board/Steering Group)
  • D. Project Manager

Best answer: C

What this tests: AI in Projects and Organizations (Context)

Explanation: Because the issue involves potential customer harm, the decision is not a routine delivery tradeoff. AIPGF expects clear decision rights: the delivery team identifies and documents the risk, then escalates it to the designated AI governance authority to decide risk acceptance, controls, or a go/no-go. This supports transparency, accountability, and auditable governance decisions.

Decision authority in AIPGF is based on impact and risk, not just day-to-day delivery ownership. When an AI-assisted product shows a credible harm mode (e.g., hallucinated policy details), moving forward requires an explicit governance decision: accept the risk with compensating controls, delay release, or change the scope.

In practice, the Project Manager and Product Owner should:

  • record the issue in the risk/issue log and decision log
  • propose options and mitigations
  • escalate the decision to the organization’s designated AI governance decision forum (often aligned with the Project Board/steering group decision rights)

This ensures the accountable body makes the risk-acceptance or go/no-go call with appropriate assurance input and traceability.

High-impact risk acceptance and go/no-go decisions should be escalated to the designated AI governance authority, not decided within the delivery team.


Question 11

Topic: Principles for Responsible and Trustworthy AI

A retail bank wants to use a GenAI assistant during the Activation stage to help underwriters write credit-decline rationales. The assistant would see customer application details and propose a rationale paragraph.

A recent internal audit finding requires that, for every decline, the bank can reconstruct who decided what, using which inputs and model version, and show the human approver’s rationale for accepting or editing AI output.

Which AIPGF governance action best balances ethical, efficient, and effective use given this constraint?

  • A. Select the fastest vendor tool, even if it lacks detailed logs
  • B. Permit use only after removing all personal identifiers
  • C. Mandate an AI Assistance Plan and decision log with HITL sign-off
  • D. Run a bias test and proceed if fairness metrics meet targets

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The dominant constraint is strict auditability: the bank must be able to reproduce the decision path and demonstrate accountable human approval. An AI Assistance Plan plus an auditable decision log (including model/version, prompts/inputs, outputs, and human edits/approval) enables transparent, defensible use while still allowing efficiency gains from drafting support.

When auditability is non-negotiable, the best tradeoff is to keep GenAI use within a controlled, reconstructable process rather than optimizing only speed or model performance. In AIPGF terms, this operationalizes Transparency and Human-Centricity by making AI assistance visible, reviewable, and attributable to an accountable human decision-maker.

A practical control set is:

  • Define permitted use in an AI Assistance Plan (scope, data, limits).
  • Capture provenance in a decision log (inputs/prompt, output, model/version).
  • Require HITL approval documenting acceptance/edits and rationale.

This meets the audit finding while still enabling effective drafting support; other controls (privacy or bias testing) may still be needed, but they do not satisfy the primary auditability requirement on their own.

Strict auditability is the key discriminator, so traceability evidence (provenance + human accountability) must be built into the workflow before use.


Question 12

Topic: Framework Structure and Controls

A product team is delivering a GenAI-assisted claims correspondence solution for a regulated insurer. To move fast, the project manager tells the team: “Use GenAI wherever it helps,” but does not define what AI is allowed to do (e.g., draft vs approve), where it may be used (internal notes vs customer letters), or who is accountable for AI-assisted outputs. At the next governance checkpoint, Internal Audit asks for the agreed scope of AI assistance.

What is the most likely near-term impact of this omission?

  • A. Regulators will impose penalties for unfair outcomes in customer letters
  • B. The model will drift in production, degrading accuracy over time
  • C. The program will miss most benefits because staff will resist adoption
  • D. The checkpoint is delayed due to weak auditability of AI use

Best answer: D

What this tests: Framework Structure and Controls

Explanation: Not setting governance objectives and scope for AI assistance immediately weakens transparency and auditability: the project cannot show what AI was permitted to do, where it was used, or who signed off AI-assisted deliverables. In a regulated context, that typically blocks assurance at a gate even before broader performance or outcome issues emerge.

AIPGF expects an explicit definition of AI assistance objectives and scope so decision rights, controls, and evidence can be applied consistently. In this scenario, the team used GenAI broadly without stating boundaries (what tasks, which artifacts, and what level of autonomy) and without assigning accountability for AI-assisted outputs. When Internal Audit asks for the agreed scope, the project has little defensible evidence, so the near-term consequence is a governance delay or pause while the AI Assistance Plan (or equivalent) is created and approvals are obtained.

Practical scope elements include:

  • Allowed tasks (e.g., drafting, summarising) vs prohibited tasks (e.g., final approval)
  • Where AI may be used (internal only vs customer-facing)
  • Required human review (HITL) and sign-off roles
  • Logging requirements for traceability

The key takeaway is that unclear AI scope primarily hurts near-term trust and audit readiness, not long-term model behaviour or downstream enforcement actions.

Without explicit AI assistance scope and accountabilities, the team cannot evidence controlled, approved use, so assurance sign-off is likely to stall.


Question 13

Topic: Foundations of AI Project Governance (AIPGF)

A retail bank is running a 10-week project to pilot a GenAI assistant that drafts responses for call-center agents (agents must approve before sending). The output is customer-facing and the bank’s enterprise AI policy requires: (1) registering AI use cases in a central inventory, (2) completing a risk tiering assessment, and (3) obtaining enterprise AI governance approval for high-risk deployments (the enterprise board meets monthly). The project already has standard project governance (steering committee, RAID log, change control).

Which approach best balances ethical, efficient, and effective AI use with required governance controls?

  • A. Have the delivery team self-attest responsible use and proceed, documenting decisions after go-live
  • B. Submit all design and prompt decisions to the enterprise AI board before any build work starts
  • C. Run project-level AI governance (AI Assistance Plan, HITL controls, testing evidence, decision logging) while registering and tiering the use case immediately and escalating only required approvals to enterprise governance
  • D. Treat it as standard project governance only; rely on existing RAID and change control

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The best balance is to apply AI-specific controls at the project level (so the team can build safely and quickly) while interfacing with enterprise AI governance for enterprise-wide requirements like inventory registration, risk tiering, and mandated approvals. This distinguishes project AI governance (how this project uses AI responsibly) from enterprise governance (portfolio-wide policy and decision rights) and from general project governance (time/cost/scope control).

Project-level AI governance focuses on how the team will use AI safely and effectively in this specific delivery: defining intended use, setting HITL boundaries, capturing AI-related decisions, and producing test/monitoring evidence for go/no-go gates. Enterprise AI governance sets organization-wide controls such as mandatory inventory registration, risk tiering, standard patterns, and escalation/approval for higher-risk deployments. General project governance still runs delivery cadence (steering committee, RAID, change control) but does not replace AI-specific assurance.

A balanced approach here is to:

  • Register the use case and complete risk tiering early to confirm required enterprise approvals.
  • Implement project AI artifacts and controls (AI Assistance Plan, HITL workflow, validation and monitoring plan, decision logs).
  • Use enterprise governance only for required approval gates, keeping routine design decisions within the project’s delegated authority.

This avoids both under-governing (trust and harm risks) and over-centralizing (unnecessary delays).

This keeps day-to-day AI controls and evidence within the project while meeting enterprise requirements for inventory, risk tiering, and only the mandated approval gate.


Question 14

Topic: Foundations of AI Project Governance (AIPGF)

Your organization has adopted high-level AI standards (e.g., “follow recognized AI regulations and be transparent and fair”), but a project team is building a GenAI assistant to draft customer-service emails.

Constraints:

  • Stage: Foundation, moving to Activation in 2 weeks
  • Risk tier: Medium (customer-facing content), HITL required for all outbound messages
  • Internal audit in 6 weeks requires objective evidence of governance controls
  • The policy is principles-based and does not specify project artifacts, owners, or tests

What is the BEST next governance action to close the gap between the standards and day-to-day execution?

  • A. Create a tailored control-and-evidence plan with owners and gates
  • B. Share the standards and let the team self-attest compliance
  • C. Request legal sign-off on prompt templates before development starts
  • D. Proceed to build and address compliance during Evaluation

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The gap is that standards describe what “good” looks like, but projects need explicit, testable controls, decision rights, and evidence to execute and demonstrate compliance. In Foundation, the best next action is to translate the standards into a tailored set of controls and artifacts (e.g., AI Assistance Plan, RACI, logs, gate criteria) aligned to HITL and audit needs before Activation begins.

High-level standards and regulations are typically principles-based (what outcomes are required), while day-to-day delivery needs operational mechanisms (how the team will implement, verify, and evidence those outcomes). In this scenario, the project must be ready for Activation with HITL and must produce auditable evidence within 6 weeks, so Foundation work should convert “transparent/fair/responsible” into concrete governance controls with clear accountability.

A practical way to close the gap is to define and baseline a tailored control-and-evidence set, such as:

  • Map each policy/standard expectation to a project control and owner (RACI/decision rights)
  • Define go/no-go criteria for the Activation gate (including HITL workflow)
  • Specify required evidence (AI Assistance Plan, decision log, risk log, test/monitoring records)

Deferring this translation or relying on self-attestation leaves the team unable to execute consistently or satisfy auditability.

It operationalizes principles-based standards into assigned controls, required evidence, and stage gates that can be executed and audited.


Question 15

Topic: Foundations of AI Project Governance (AIPGF)

A project team is delivering a GenAI assistant to help account managers write client update emails. After a 2-week pilot, the sponsor wants the assistant to auto-send emails with no human review to “save time.”

Exhibit: AI Assistance Plan excerpt (v0.9)

Use case (current): Draft emails; human edits before sending
Planned evolution: Auto-send after pilot if “quality is OK”
Risk tier (Foundation): Medium (approved)
HITL control: Mandatory reviewer during pilot only
Transparency: No disclosure to recipients planned
Decision note: “Controls agreed at Foundation; no re-approval expected”

Which next governance action is best supported by the exhibit, given AIPGF expectations about decisions made once vs revisited iteratively as usage evolves?

  • A. Escalate to procurement to renegotiate the model contract before any change
  • B. Approve auto-send now and rely on incident monitoring to adjust later
  • C. Keep Foundation decisions fixed; only iterate prompts during the pilot
  • D. Reassess risk tier and controls before auto-send; update the AI Assistance Plan

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Shifting from human-edited drafting to auto-sending is a material change in how AI outputs are used, so key governance decisions cannot be treated as “set once.” Under AIPGF, the project should revisit and re-approve the risk tier and the associated control set (including HITL and transparency) when usage expands. Updating the AI Assistance Plan and decision log creates auditability and a defensible go/no-go.

The core idea is to baseline some governance decisions at Foundation, but to revisit other decisions iteratively when the AI usage pattern, audience, or decision impact changes. In the exhibit, the planned evolution to auto-send increases autonomy and stakeholder impact, making the “no re-approval expected” note inappropriate.

A practical AIPGF approach is:

  • Keep stable, “once-per-project” decisions as the baseline (e.g., core roles/decision rights, governance cadence).
  • Revisit “usage-sensitive” decisions when scope evolves (e.g., risk tier and assurance level, HITL intensity, transparency/disclosure to affected people, go/no-go criteria and monitoring signals).

Here, the project should treat auto-send as a new governance checkpoint in Activation/Evaluation, updating the AI Assistance Plan and recording refreshed approvals before changing operational use.

Moving from assisted drafting to autonomous sending changes usage and requires iterative revalidation of risk tier, HITL, and transparency decisions before go/no-go.


Question 16

Topic: Roles, Responsibilities, and Accountabilities

A national insurer is piloting a GenAI assistant to draft outbound claim decision letters for adjusters. The model is provided by a vendor, but the insurer must meet strict auditability expectations (regulators may request who approved release, on what evidence, and when). You are building a RACI for the Activation governance gate: “Go/no-go to release to production with HITL.”

Which responsibility assignment is most appropriate for this gate decision and its auditable record?

  • A. A: Internal Audit; R: Legal/Compliance; C: AI project manager and vendor; I: Business owner
  • B. A: AI project manager; R: vendor account lead; C: Business owner; I: Legal/Compliance and Internal Audit
  • C. A: AI Governance Board chair; R: AI project manager; C: Legal/Compliance, Data Protection, InfoSec; I: Internal Audit
  • D. A: Business owner; R: front-line adjusters; C: AI Governance Board chair; I: Data Protection and InfoSec

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: For a strictly auditable go/no-go gate, AIPGF expects clear decision rights and an evidence-backed approval record owned by a governance authority. Delivery roles should be responsible for assembling the gate pack and updating the decision log, while independent control functions are consulted to strengthen assurance. Internal Audit is typically informed so it can later provide independent assurance without owning the release decision.

The core need is strict auditability at an Activation gate, which requires unambiguous decision rights and a traceable approval trail (who approved, based on what evidence, when). In AIPGF terms, the go/no-go should be accountable to a defined governance authority (e.g., AI Governance Board chair or delegated gate owner), while the delivery function is responsible for preparing the evidence pack (AI Assistance Plan, risk log updates, HITL procedures, testing results) and recording the decision in the decision log.

Control functions (Legal/Compliance, Data Protection, InfoSec) are best positioned as consulted to provide challenge and assurance inputs prior to approval. Internal Audit should generally be informed rather than owning the gate, preserving its independence as third-line assurance.

The closest trap is making the delivery lead or vendor accountable, which weakens independence and defensibility under audit.

It places decision accountability with a defined governance authority while making delivery compile evidence, consulting control functions, and informing (not performing) audit.


Question 17

Topic: Principles for Responsible and Trustworthy AI

A retail bank pilots a GenAI assistant to draft credit-limit increase decisions and customer emails. To hit a launch deadline, the project lead removes the required human sign-off step and tells the team to “treat the AI output as the decision unless someone objects.” No role is named as accountable for the final decision, and no decision log is maintained.

Under the Human-Centricity principle, what is the most likely near-term impact of this governance omission?

  • A. Immediate elimination of operational risk because automation removes human error
  • B. Higher long-term model drift because monitoring thresholds were not defined
  • C. Reduced auditability because accountability for decisions cannot be evidenced
  • D. Delayed value realization because a formal retraining program is required first

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Human-Centricity requires humans remain accountable for AI-assisted deliverables and decisions. Removing human sign-off, failing to assign an accountable decision owner, and not keeping decision records breaks traceability. The most immediate consequence is weakened auditability and reduced ability to respond to challenges, complaints, or assurance requests with clear decision ownership.

Human-Centricity in AIPGF keeps decision accountability with humans even when AI supports the work. In the scenario, the project effectively transfers decision-making to the GenAI assistant by default (“AI output as the decision”) and removes evidence of human ownership by not naming an accountable role or maintaining a decision log.

Near-term, this creates an assurance gap: when an outcome is challenged (by a customer, risk, compliance, or internal audit), the team cannot demonstrate who approved the decision, what inputs were used, or what rationale was applied. That undermines trust and increases exposure because the organization cannot defend or quickly remediate specific decisions. The closest distractors describe issues that are plausible but not the most direct or immediate consequence of missing human accountability.

Without a named accountable human and decision records, the bank cannot show who made/approved outcomes, undermining near-term assurance and challenge handling.


Question 18

Topic: Framework Structure and Controls

During Activation, a bank project uses GenAI to draft customer-facing hardship policy letters. The team did not implement traceability controls: no prompt/response logging, no recording of external sources used, no versioning of AI-generated drafts, and no decision rationale for human edits/approvals.

Two weeks before release, Internal Audit asks for evidence showing how the final letter content was produced and approved to support the next governance gate.

What is the most likely near-term impact of this omission?

  • A. Regulators will likely impose fines after a future customer complaint investigation
  • B. Stakeholder trust will increase because fewer artifacts reduce delivery overhead
  • C. The gate will likely be delayed due to inability to evidence provenance and approvals
  • D. The solution’s production accuracy will quickly degrade due to unmonitored model drift

Best answer: C

What this tests: Framework Structure and Controls

Explanation: Traceability controls are what make AI-assisted work auditable: they show what the AI produced, what sources were used, which versions were approved, and why humans accepted changes. With an audit request two weeks before release, the immediate consequence is a failure to provide objective evidence for the governance gate. That typically triggers gate delay and rework to reconstruct the record.

In AIPGF, traceability controls (prompt/response logs, source recording, versioning, and decision rationale) provide auditability and support Transparency by making AI assistance inspectable and defensible. In this scenario, the omission is discovered just before a governance gate, and Internal Audit requests evidence of provenance and approvals for customer-facing content.

Near-term, the team cannot reliably answer:

  • What prompts and constraints produced the drafts
  • What sources influenced wording and whether they were approved
  • Which draft/version was reviewed and signed off
  • Why edits/approvals were made (decision rationale)

That evidence gap increases risk exposure immediately by blocking assurance activities, forcing rework to recreate documentation, and likely delaying the go/no-go decision. Longer-term consequences may occur, but the immediate impact is loss of auditability at the gate.

Without prompt/source/version/rationale records, the team cannot demonstrate how content was generated and validated, so audit support for the gate fails in the near term.


Question 19

Topic: Assurance, Metrics, and Continuous Improvement

You have joined an AIPGF continuous-improvement workstream for a GenAI-enabled claims triage service. The team says, “We’ve made big governance improvements since last quarter,” and asks you to report progress to the assurance committee. You are given a high-level improvement roadmap but no prior assessment pack or evidence set.

What should you verify or request FIRST so progress can be tracked credibly using repeat assessments and evidence?

  • A. The model’s architecture and hyperparameters used in triage
  • B. The baseline assessment results, scoring criteria, and the evidence sources to be re-tested
  • C. A statement from the product owner that governance has improved
  • D. A list of planned new GenAI features for the next release

Best answer: B

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: Tracking improvement requires a repeatable measurement approach: a baseline, consistent assessment criteria, and verifiable evidence that can be sampled again later. Requesting the prior assessment pack and defining the evidence set enables an apples-to-apples comparison over time. This turns a narrative claim of “improvement” into auditable progress reporting.

The core concept is evidence-based continuous improvement: progress is demonstrated by comparing repeat assessments performed the same way, against the same criteria, using documented evidence. In this scenario, you have a roadmap but lack the measurement backbone needed to show movement from “then” to “now.”

Before reporting to an assurance committee, establish the minimum comparable assessment set:

  • The baseline assessment results (scores/ratings and findings)
  • The assessment method and scoring criteria used (so it can be repeated)
  • The evidence sources/artefacts sampled (so changes can be verified)

Once these are in place, later assessments can show whether roadmap actions produced measurable, evidenced improvements rather than subjective statements or technical detail that doesn’t prove governance outcomes.

Without a defined baseline, consistent assessment method, and agreed evidence sources, you cannot validly compare results across repeat assessments.


Question 20

Topic: Values, Behaviours, and Culture

A customer-support division is piloting a GenAI email-drafting assistant. To maintain momentum, the sponsor wants to expand from 40 to 400 users next month.

However, the change lead reports resistance: some agents are skipping the required “human review before send” step to save time. The project must demonstrate that the control is working in practice before scaling.

Which artifact/evidence best validates readiness to scale while preserving controls?

  • A. Risk register listing adoption and non-compliance risks with owners
  • B. Benefits Tracker showing reduced handle time from the pilot
  • C. RACI confirming accountability for agents, team leads, and compliance
  • D. Decision log showing reviewer sign-off rates and exception handling

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Readiness to scale depends on evidence that the required HITL control is effective in day-to-day operation, not just planned. A decision log that captures actual reviewer sign-offs, overrides, and how exceptions were handled supports transparency, auditability, and a confident go/no-go decision. This preserves controls while enabling a faster, defensible rollout.

In AIPGF-aligned change management, maintaining momentum does not mean relaxing controls; it means scaling only when you can demonstrate that controls work in real workflows. When there is resistance leading to control-bypass, the most convincing readiness evidence is an operational record that shows adherence, deviations, and remediation.

A fit-for-purpose decision log should capture, at minimum:

  • Whether the human review step occurred (HITL evidence)
  • Who approved/overrode and why (decision rights and transparency)
  • Exception patterns and corrective actions (control effectiveness)

Benefits measures, role maps, and risk lists are useful, but they do not validate that people are actually using the control correctly under time pressure. The key takeaway is to scale based on observed control performance, not intentions or outcomes alone.

It provides auditable evidence that the HITL review control is being followed (or corrected) under real use, supporting a controlled scale-up.


Question 21

Topic: Roles, Responsibilities, and Accountabilities

A team in the Activation stage rolls out a GenAI assistant to draft customer-service replies. Three weeks later, an internal review finds: no documented HITL sign-off for when agents can send AI-drafted replies; agents in two regions used personal AI tools “to keep up”; decision logs and evidence for AI-usage decisions are inconsistent; measured handling-time benefits are below target; and the workers council is pushing back, saying “this change was never properly explained.”

What is the most likely underlying cause?

  • A. Stakeholder mapping and communications were not tailored to AI decision changes
  • B. The project lacks enough documentation and evidence artifacts
  • C. The model’s response quality is too inconsistent for production use
  • D. The team failed to achieve the expected benefits in the pilot

Best answer: A

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The pattern points to a communications and engagement breakdown: unclear, unevenly communicated AI usage decisions and change impacts. Without bringing the right stakeholders (frontline leaders, assurance functions, workforce reps) into planned meetings and communications, HITL approvals and approved-tool guidance are missed, and adoption fragments into shadow AI use. That also undermines evidence consistency and benefits realization.

In AIPGF, AI usage decisions (what is allowed, who approves, what HITL checks apply, and how changes are communicated) need deliberate stakeholder engagement. Here, multiple regions acted differently, HITL sign-off is missing, and a workforce body says the change was not explained—these are strong clues that the engagement approach did not identify and actively manage key stakeholder groups and decision owners.

A fit-for-purpose approach would include:

  • Stakeholder analysis covering sponsors, frontline users/supervisors, Compliance/InfoSec, and workforce representatives
  • Planned governance touchpoints at AI decision gates (approve AI Assistance Plan, HITL controls, approved tools)
  • Clear, repeated communications on “approved vs prohibited” usage and how evidence (decision logs) is captured

The core issue is not “more paperwork,” but missing alignment and communication pathways that make correct behavior the easiest behavior.

Key affected groups and decision-right holders were not engaged with clear, timely communications on approved AI use and HITL approvals, driving shadow usage and missed sign-offs.


Question 22

Topic: Assurance, Metrics, and Continuous Improvement

A retail bank has completed an AIPG-CMM maturity assessment for a GenAI “agent-assist” tool used by call-centre staff to draft customer responses. The assessment shows strong documentation in Foundation/Activation, but weak continuous improvement practices in Evaluation.

AIPG-CMM highlights (excerpt)
- Monitoring of AI outputs: ad hoc, not role-owned
- Incident capture/triage: informal, no thresholds
- Benefits tracking: defined metrics, inconsistent review cadence

The sponsor asks you to propose the next improvement actions for the next quarter. Before you select specific actions, what should you ask/verify FIRST?

  • A. What AI risk tier applies, and which operational decisions the tool is allowed to influence
  • B. What budget has already been approved for new monitoring tools
  • C. Whether the training data can be moved to a different cloud region
  • D. Which large language model architecture the vendor uses and why

Best answer: A

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: A maturity assessment tells you where capability is weak, but not how much governance is warranted. Verifying the use case’s risk tier and the decision scope the GenAI output can influence lets you size the next-step improvements (e.g., monitoring ownership, incident thresholds, escalation paths) appropriately and defensibly.

Next-step improvement actions from an AIPG-CMM assessment should be tailored to the context, especially the risk tier and the decision authority/scope of the AI assistance. In the scenario, Evaluation practices are weak (ad hoc monitoring, informal incident handling), but the required improvement level depends on how consequential the AI-assisted outcomes are.

Ask first for the information that will shape the improvement plan’s “how much” and “how fast,” such as:

  • the risk tier for this specific use case
  • what decisions humans may take based on the AI output (and any prohibited uses)
  • who has decision rights for go/no-go and for accepting residual risk

Once that is clear, you can define proportionate actions (named monitoring owner, thresholds, incident workflow, review cadence, and evidence) that match the assessed gaps. The key takeaway is that maturity gaps plus risk context drive the right improvement backlog.

Risk tier and decision scope determine the proportional Evaluation-stage improvements (monitoring, thresholds, escalation, and approvals) needed from the maturity gaps.


Question 23

Topic: Roles, Responsibilities, and Accountabilities

A bank is starting an AI-assisted project that drafts credit-limit change recommendations for call-center agents using a third-party GenAI API. The project is in Foundation and must pass an Activation gate in 2 weeks.

Constraints:

  • Risk tier: high (customer-impacting decisions); HITL required for any final recommendation
  • Internal audit will review governance evidence in 6 weeks (clear decision rights and approvals required)
  • Cross-functional team is new to AI governance; roles are currently informal

What is the BEST next governance action to support stage-appropriate accountability across the AIPGF life cycle?

  • A. Assign all AI governance tasks to the product owner to keep delivery fast
  • B. Facilitate a workshop to build and approve a RACI spanning Foundation, Activation, and Evaluation governance activities
  • C. Let the GenAI vendor own approvals and provide their standard responsibility model
  • D. Proceed to Activation and document responsibilities after the pilot proves value

Best answer: B

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: In Foundation, the project must establish clear accountability and decision rights before moving to Activation, especially for a high-risk, customer-impacting, HITL use case. Building a RACI across Foundation, Activation, and Evaluation ensures the right roles are responsible, accountable, consulted, and informed for key governance activities and gate approvals. This also produces audit-ready evidence within the stated timelines.

The core concept is creating a responsibility matrix (RACI) that assigns decision rights and accountability for AI governance activities across the AIPGF life cycle. Because the project is high risk, requires HITL, and faces an imminent Activation gate plus an audit review, informal role agreements are insufficient.

A practical next step is to define and obtain sign-off on a RACI that covers, at minimum:

  • Foundation: AI Assistance Plan ownership/approval, risk and control ownership, data access approvals
  • Activation: go/no-go decision rights, HITL operating responsibilities, change control for prompts/models
  • Evaluation: monitoring ownership, incident escalation, periodic reviews and benefits tracking

This aligns with Human-Centricity and Transparency by making accountabilities explicit and auditable; it also supports Adaptability by clarifying who can approve changes as the solution evolves.

A signed-off, stage-spanning RACI clarifies decision rights and audit-ready accountabilities before the Activation gate in a high-risk HITL use case.


Question 24

Topic: AI in Projects and Organizations (Context)

A HR shared-services program is using a GenAI assistant to draft redundancy letters. The project is in Activation and plans to scale from a 20-case pilot to 2,000 employees next month. During pilot QA, 3/20 drafts included incorrect protected-attribute inferences and 2/20 cited the wrong policy clause; the team cannot explain why (no prompt/version logs), and Legal requires an auditable rationale for each decision.

What should the project lead do next?

  • A. Pause scale-up and redesign with HITL and evidence before proceeding
  • B. Proceed to scale but add monitoring and incident response after go-live
  • C. Proceed to scale with stronger disclaimers and user training
  • D. Cancel the GenAI approach permanently and return to manual drafting

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: Because the pilot shows material harmful errors and the team cannot provide traceable evidence (no logs) to support decisions, scaling would violate governance expectations and undermine Transparency and Human-Centricity. The balanced action is to halt scale-up, redesign the AI-assisted approach with HITL controls, and pass an Evaluation go/no-go using documented assurance evidence before resuming.

This scenario signals unacceptable risk and lack of assurance: harmful content was produced, root cause cannot be established, and auditability requirements are unmet. Under AIPGF, the right response is to stop the planned scale-up and move into an Evaluation-style assurance package before expanding use.

A proportionate redesign typically includes:

  • Update the AI Assistance Plan (scope, prohibited outputs, escalation)
  • Implement HITL review/approval for all letters and decision rationales
  • Add decision logs (prompts, versions, sources) and a risk log
  • Define go/no-go criteria and re-run a controlled pilot to evidence quality

This preserves value while restoring trust and control, rather than optimizing speed at the expense of governance.

The current lack of assurance and auditability, plus observed harmful errors, requires a stop/redesign and an Evaluation gate with logged, reviewable evidence before scaling.


Question 25

Topic: AI in Projects and Organizations (Context)

A project team piloted a GenAI assistant to draft internal HR guidance. In testing, reviewers found (1) fabricated policy citations (hallucinations), (2) inconsistent tone across regions that could disadvantage non-native writers (bias), (3) missed requirements when prompts lacked context (context gaps), and (4) different outputs after the vendor rolled a silent model update (model change). The sponsor wants to move from pilot to broader rollout next month and asks for a single artifact that best validates readiness and control effectiveness for these uncertainty/error sources. Which artifact is most appropriate?

  • A. A RACI matrix showing who is accountable for AI-assisted HR guidance outputs
  • B. A risk register listing the four risks with owners and qualitative ratings
  • C. A Benefits Tracker showing time saved and satisfaction scores from the pilot
  • D. An AI Assistance Plan including HITL verification steps, bias checks, prompt/context requirements, and model version/change controls

Best answer: D

What this tests: AI in Projects and Organizations (Context)

Explanation: The strongest readiness evidence is the artifact that demonstrates concrete controls mapped to the known AI uncertainty/error modes. An AI Assistance Plan can show how outputs will be validated (HITL), how context will be captured, how bias will be checked, and how model updates will be governed and monitored before broad rollout.

This scenario contains multiple common AI uncertainty/error sources: hallucinations (fabricated citations), bias (systematic output differences affecting groups), context gaps (missing constraints in prompts), and model changes (output drift after an update). Governance mitigates these by defining repeatable controls and decision rights for AI use.

An AI Assistance Plan is the best validation artifact because it can provide implementable evidence that controls exist and are ready to operate, such as:

  • Minimum context/prompt requirements and a clarification workflow
  • HITL review and fact-checking against authoritative sources
  • Bias/quality checks and acceptance criteria for release
  • Model/version pinning, change notification, regression testing, and rollback triggers

By contrast, artifacts that only measure benefits, assign roles, or list risks do not validate that the controls will actually prevent/detect issues at rollout.

It directly evidences the specific controls that mitigate hallucination, bias, context gaps, and model drift and how they will be executed and monitored at rollout.

Questions 26-40

Question 26

Topic: Assurance, Metrics, and Continuous Improvement

A shared-services team supports 30 AI-assisted projects using the same GenAI writing tool. The vendor releases a new feature that can pull in external web content, and a new internal policy is issued on using third-party data sources.

The AI governance lead introduces a “governance trigger” process: whenever a tool adds material capabilities or rules change, each project must update its AI Assistance Plan and risk controls, and re-confirm go/no-go criteria before continued use.

Which AIPGF concept is this practice primarily demonstrating?

  • A. Principle: Adaptability
  • B. Life cycle stage intent: Activation
  • C. Principle: Human-Centricity
  • D. Principle: Transparency

Best answer: A

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: The described trigger-based review ensures governance stays fit-for-purpose as AI capabilities, risks, and policy requirements change. That is the essence of the AIPGF Adaptability principle: monitoring for change and rapidly tailoring controls, evidence, and approval conditions to the new context. It is especially important when scaling across many projects using shared tools.

In AIPGF, Adaptability is about keeping governance proportionate and current as AI tools, features, risks, and regulatory or policy expectations evolve. In the scenario, the governance lead establishes a repeatable mechanism (“governance triggers”) that forces projects to refresh key artifacts (like the AI Assistance Plan), reassess risks and controls, and revalidate go/no-go criteria when a material change occurs (new web access capability; new policy). This is continuous improvement at scale: systematic detection of change plus a defined response that updates evidence and decision rights.

Key takeaway: Adaptability is not a one-time setup; it is an ongoing, trigger-driven recalibration of governance as conditions change.

It institutionalizes change-triggered updates to governance artifacts and controls as tools and rules evolve.


Question 27

Topic: Foundations of AI Project Governance (AIPGF)

A project team in a regulated retail bank is using GenAI to draft customer service emails. Delivery is slipping, and the team complains that “governance is just bureaucracy.” You review their current control approach.

Exhibit: Team artifact excerpt

GenAI Governance Checklist v0.3 (team-created)
- 12 separate sign-offs required before any prompt use
- No `AI Assistance Plan` captured; “too slow”
- No owner for final email content; “model output accepted”
- No `decision log` / traceability for changes
- KPI tracked: “approvals completed” only

What is the best next governance action supported by the exhibit?

  • A. Pause GenAI use until the enterprise AI policy is finalized
  • B. Keep the checklist but add weekly audits of all prompts
  • C. Replace with risk-based gate, named accountable owner, minimal artifacts
  • D. Increase sign-offs and require steering approval for each prompt

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The exhibit shows governance optimized for activity (sign-offs) rather than outcomes (accountability, control, traceability). The most effective response is to tailor governance so it is risk-based and lightweight, while still producing core assurance evidence. This keeps delivery moving and improves trust in the AI-assisted outputs.

Structured governance is not “more steps”; it is the minimum set of controls and evidence needed to enable safe, accountable delivery. Here, the checklist creates delay (12 sign-offs) yet fails key outcomes: no accountable human owner for customer communications, no AI Assistance Plan defining permitted use and constraints, and no decision log to support auditability and learning.

A better, scalable approach is to:

  • Assign clear accountability for final email content (HITL decision rights).
  • Use a risk-tiered gate with explicit go/no-go criteria.
  • Require only essential artifacts (e.g., AI Assistance Plan, decision log, risk log).

This replaces bureaucracy (tracking approvals) with evidence of controlled AI use, trust, and accountability.

It removes approval-heavy bureaucracy while restoring outcomes-focused controls: accountable human ownership, controlled AI use, and traceability.


Question 28

Topic: AI in Projects and Organizations (Context)

A project team wants to use a GenAI tool to draft new contract clauses by referencing a library of past supplier contracts. Legal raises concerns about copyright/IP reuse and the need to retain AI prompts/outputs as project records. The PM is asked to produce an AI Assistance Plan that confirms content/licensing checks, approved data sources, and a records-retention approach before anyone uses GenAI on the work.

In the AIPGF, this practice most directly maps to the intent of which life cycle stage?

  • A. Foundation stage
  • B. Evaluation stage
  • C. Transparency principle
  • D. Activation stage

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: Creating an AI Assistance Plan to confirm lawful data/content use and records retention is a front-loaded governance control. It is used to set constraints, decision rights, and evidence expectations before AI-assisted work starts, reducing privacy/IP/retention exposure. That aligns to the Foundation stage intent in the AIPGF life cycle.

The core concept is placing legal/compliance controls at the right governance point. In AIPGF, the Foundation stage is where the project defines the conditions for safe, compliant AI use before Activation work proceeds. For legal/compliance challenges like IP/copyright reuse and records retention, the governance response is to document and approve:

  • what sources are permitted (and licensed)
  • how prompts/outputs will be captured as records
  • who reviews/approves and what evidence is required

Activation is where the team executes against those controls (e.g., running reviews and maintaining logs), and Evaluation is where outcomes and operational monitoring are assessed. Transparency is a guiding principle but not the life cycle intent being tested here; the scenario is about establishing pre-use compliance guardrails and auditability.

It establishes upfront legal/compliance guardrails (IP and records retention) and approval conditions before AI use begins.


Question 29

Topic: Principles for Responsible and Trustworthy AI

A retail bank wants to use a GenAI assistant to draft credit limit recommendations for small-business customers. The recommendations would be used in regulated lending decisions (high-impact), and an internal audit is scheduled in 3 months. The product owner is pushing for an 8-week MVP to meet a competitor launch.

As the AIPGF practitioner, what approach best applies the Adaptability principle by increasing governance rigor appropriately while still delivering value quickly?

  • A. Use a risk-tiered release with stricter gates, independent review, and HITL approval
  • B. Pause delivery until the full production assurance pack is complete for all use cases
  • C. Rely on vendor certifications and focus on speed; monitor issues post-launch
  • D. Run a limited MVP with disclaimers, then add governance controls after adoption

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: In a high-impact, regulated use case, Adaptability means scaling governance up, not bypassing it. A risk-tiered, phased release can meet an 8-week target while adding stricter gates, independent review, and audit-ready evidence (HITL decision controls, logs, and monitoring). This balances ethical use, effectiveness, and delivery speed under the stated audit and regulatory constraints.

The LO is about tailoring governance rigor to context: regulated, high-impact lending decisions require more evidence, tighter go/no-go gates, and independent assurance than low-risk experimentation. Applying Adaptability here means structuring delivery so value can be realized quickly without reducing necessary safeguards.

A balanced approach is to:

  • Define the use as high-impact and set stricter gate criteria
  • Require independent review (e.g., model risk/assurance) before any operational use
  • Keep a human-in-the-loop as the accountable decision-maker with clear decision rights
  • Produce audit-ready evidence (AI Assistance Plan, decision logs, risk log, monitoring plan) as part of the release gates

This enables controlled activation (e.g., limited scope/user group) while meeting audit expectations, rather than shipping first and “fixing governance later” or stopping all progress.

In a regulated, high-impact context it adds evidence and independent assurance at gated points while allowing controlled value delivery via phased release with HITL and audit-ready logging.


Question 30

Topic: Assurance, Metrics, and Continuous Improvement

A retail bank is preparing to scale a GenAI call-summarisation service from one pilot team to multiple business units. Internal audit asks for an AIPG-CMM maturity baseline.

In the draft self-assessment, four level descriptions are provided below. Which description is INCORRECT?

  • A. Ad Hoc: practices are informal and depend on individuals
  • B. Initialised: repeatable practices exist in pockets with basic roles
  • C. Standardised: common methods and templates are used consistently
  • D. Optimised: no KPIs; improvements occur only after incidents

Best answer: D

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: In AIPG-CMM, Optimised represents a mature, metrics-led capability with continuous improvement and proactive tuning of controls. A description that lacks KPIs and relies on post-incident fixes reflects low maturity rather than Optimised. The other descriptions align to early-to-mid maturity characteristics (Ad Hoc, Initialised, Standardised).

The AIPG-CMM levels describe how consistently and effectively AI project governance is performed and improved.

  • Ad Hoc: largely informal, inconsistent, and person-dependent.
  • Initialised: some repeatable practices and emerging roles, but not yet consistent across the organisation.
  • Standardised: defined standards, templates, and minimum controls are used consistently.
  • Enterprised: governance is embedded enterprise-wide (shared tooling, clear decision rights, integrated assurance across units).
  • Optimised: performance is measured with KPIs/metrics and feedback loops drive continuous, proactive improvement.

A description of Optimised that is reactive and lacks KPIs contradicts the core idea of continuous improvement.

Optimised maturity is metrics-driven and continuously improves controls, not reactive without KPIs.


Question 31

Topic: Principles for Responsible and Trustworthy AI

A claims operations team is using a GenAI assistant to draft insurance claim decision letters. The project is rated High risk because incorrect denials could cause significant customer harm and complaints. Foundation activities are complete, but the AI Assistance Plan currently says only “human review as needed.” The team is about to begin Activation and wants to auto-send letters to meet a new SLA.

What is the best next step in the AIPGF stage flow?

  • A. Define mandatory HITL checkpoints and decision rights; update the AI Assistance Plan
  • B. Start auto-sending letters and rely on post-release monitoring
  • C. Proceed to Evaluation and document lessons learned after the first month
  • D. Ask the vendor for additional model transparency documentation before piloting

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Before Activation can safely operationalise AI-assisted outputs, Human-Centricity requires clear human accountability and control for high-impact decisions. For a high-risk use case like claim denials, “human review as needed” is not a sufficient control. The next step is to specify where humans must review/approve, what escalation applies, and who owns the final decision, and capture this in the AI Assistance Plan.

This scenario is about placing HITL checkpoints proportionate to criticality and risk, a core application of the Human-Centricity principle. Because incorrect denials can materially harm customers, the governance control must be explicit before any move to auto-send.

Appropriate next-step actions in Activation are to:

  • Define the required HITL checkpoints (e.g., mandatory review for denials or edge cases)
  • Assign decision rights and escalation paths (who signs off, when to stop/rollback)
  • Update the AI Assistance Plan so the control is auditable and enforceable

Monitoring and later evaluation are important, but they do not replace upfront definition of human approval points for high-impact outcomes.

High-risk, customer-impacting decisions require explicit, agreed HITL checkpoints and accountable approvals before operationalising automation.


Question 32

Topic: Assurance, Metrics, and Continuous Improvement

A public-sector PMO has 12 active projects using GenAI for drafting policy briefs and customer communications. An internal audit has asked for a defensible baseline of “current governance capability” across the programme before approving further scale-up.

You have been asked to use the AIPG-CMM assessment instrument to capture the current state and feed an improvement roadmap. The programme is currently in the Foundation stage and has only partial evidence (some AI Assistance Plans exist, decision logs are inconsistent).

What is the best next step?

  • A. Publish an enterprise improvement roadmap based on audit feedback
  • B. Approve programme-wide scale-up once AI Assistance Plans are drafted
  • C. Run a scoped AIPG-CMM baseline assessment across representative projects
  • D. Start continuous monitoring dashboards and incident KPIs immediately

Best answer: C

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: Before you can prioritise improvements or justify scale-up, you need a consistent, evidence-based view of current capability across projects. In the Foundation stage, using the AIPG-CMM instrument to scope, collect evidence, and baseline maturity provides the defensible “as-is” position the audit is requesting. That baseline then becomes the input to a targeted improvement roadmap and later Activation/Evaluation controls.

The core concept is establishing an evidence-based baseline using an AIPG-CMM assessment instrument before making governance investment decisions. In the Foundation stage, the correct sequence is to define the assessment scope (projects/programmes, domains, sampling), collect objective evidence (e.g., AI Assistance Plans, decision logs, risk logs, approval records), and score/aggregate results to describe the current state consistently across the programme. This produces an auditable snapshot of capability and highlights systemic gaps (for example, inconsistent decision logging) that can be translated into a prioritised improvement roadmap and stage-gate criteria for Activation.

Key takeaway: you cannot credibly roadmap, scale, or monitor what you have not first assessed and baselined.

AIPG-CMM should be used now to gather evidence and score current capability consistently before planning improvements or approving scale-up.


Question 33

Topic: Framework Structure and Controls

A service desk wants to use GenAI to draft replies to customers using past ticket text (may contain PII). The team has agreed AIPGF Principles (Human-Centricity, Transparency, Adaptability), defined the aim (reduce response time), and listed proposed activities (prompt library, data minimisation, human review). However, these are not yet consolidated into a governed deliverable, and decision rights for approving AI-assisted responses are unclear.

What is the best next step in the AIPGF stage flow?

  • A. Finalise and approve an AI Assistance Plan with HITL and audit logs
  • B. Run a benefits-realisation review and retire low-value use cases
  • C. Implement production monitoring and incident response triggers immediately
  • D. Start a limited pilot and refine prompts from agent feedback

Best answer: A

What this tests: Framework Structure and Controls

Explanation: Before any Activation use with customer data, AIPGF expects a governed Foundation deliverable that connects principles and aims to concrete behaviours, activities, and controls with clear accountabilities. An approved AI Assistance Plan operationalises Human-in-the-Loop decision points, transparency expectations, and evidence capture (e.g., logs). That creates a defensible go/no-go basis to proceed to piloting and later monitoring/evaluation.

The core concept is that AIPGF elements relate in a traceable chain: Principles and Core Values inform Core Behaviours, which drive the Activities you perform and the Deliverables you produce for governance and assurance. In this scenario, the team has principles, an aim, and a list of activities, but lacks the key Foundation deliverable that makes them actionable and auditable.

The best next step is to complete and obtain approval for the AI Assistance Plan, ensuring it:

  • Translates principles/values into expected behaviours (e.g., transparency and human oversight)
  • Defines decision rights and HITL checkpoints for AI-assisted replies
  • Specifies required evidence (decision logs, prompts/versions, monitoring expectations)

Only then should the project move into Activation activities like piloting and, later, Evaluation activities like benefits reviews and continuous improvement.

This completes the Foundation deliverable that maps principles/aims into behaviours, activities, and accountable controls before Activation use of customer data.


Question 34

Topic: AI in Projects and Organizations (Context)

A project team is using a general-purpose GenAI tool to draft responses for a customer support portal. Drafts are created by pasting recent ticket text (often contains PII) and the portal will later use the same model to suggest replies from user-entered text. Internal audit flags risks of data leakage and prompt injection, but the sponsor insists on keeping delivery speed. Under AIPGF, what is the best next action to balance value, speed, and required security governance?

  • A. Switch to an approved, no-retention environment and add governance controls for PII handling and prompt injection before further use
  • B. Allow continued use if staff are reminded not to paste PII and to manually check outputs
  • C. Stop all AI use until a full enterprise AI security program is completed
  • D. Continue using the tool, relying on the vendor’s standard terms and security controls

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: The team’s current approach exposes sensitive data and creates an attack path through untrusted user input. The best balance is to keep AI-enabled productivity but move to an approved, controlled environment and implement concrete safeguards (data minimization, access control, logging, and prompt-injection defenses) with clear accountabilities, aligning to Transparency and Human-Centricity.

This scenario contains two primary AI-introduced threats: (1) data leakage from pasting ticket text that includes PII into a general-purpose tool, and (2) prompt injection/model misuse because the portal will feed untrusted user input into AI-assisted response generation. In AIPGF terms (typically during Activation), the governance response should be risk-proportionate and auditable while maintaining delivery momentum.

A balanced control set includes:

  • Use an approved AI environment with no training/retention of customer data and appropriate access controls
  • Apply data minimization/redaction rules for prompts and outputs (especially PII)
  • Implement prompt-injection mitigations (input validation, system-prompt protection, content controls) and test for abuse cases
  • Keep decision/audit trails (AI Assistance Plan, logs) and enforce HITL review for customer-facing messages

This approach is more effective than policy reminders or vendor default assurances because it directly reduces the attack surface and supports assurance.

It preserves delivery pace while adding proportionate controls to prevent data leakage and reduce prompt-injection/model-misuse risk with auditable accountability.


Question 35

Topic: Principles for Responsible and Trustworthy AI

A project team used a GenAI assistant to draft a credit policy change recommendation that will be approved by the bank’s Credit Committee. The sponsor wants to submit it today to meet a board pack deadline.

Exhibit: Decision log excerpt

Decision: D-07 Recommend lowering SME approval threshold
AI use: GenAI drafted analysis + pros/cons
Inputs: “last quarter pack” + staff notes (not attached)
Traceability: prompt/output not stored; citations missing
Control: single reviewer; no second-line review
Impact: affects automated approvals for ~18,000 customers

What is the best next governance action, based on the exhibit?

  • A. Proceed because a single human reviewer is sufficient evidence that the recommendation is trustworthy
  • B. Submit now with a note that GenAI was used, then backfill traceability after the committee meeting
  • C. Pause submission and require an auditable evidence pack (sources, prompt/output record, human rationale) before approval
  • D. Remove mention of GenAI use from the decision record to avoid confusion and keep the timeline

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The exhibit shows a material, customer-impacting decision with missing traceability (no stored prompts/outputs and no citations) and weak review controls. Under AIPGF, transparency requirements for key decisions override schedule convenience, because the approving body must be able to see and audit the basis for the recommendation before relying on it. The appropriate action is to stop and assemble auditable evidence and accountability for the AI-assisted work.

This is a key governance moment where the Transparency principle overrides convenience. The decision affects automated approvals for many customers, yet the record shows the AI-assisted analysis is not auditable (no prompt/output retention, missing citations, and unverified inputs) and assurance is weak (only a single reviewer and no second-line review). Before the Credit Committee can make a defensible decision, the team should create an evidence trail and decision accountability for AI-assisted content.

Practical minimum evidence to restore transparency includes:

  • Attach or reference the underlying source materials used
  • Store the prompts, outputs, and key iterations that shaped the recommendation
  • Document the human rationale for accepting/adjusting AI content
  • Add appropriate independent review/sign-off for high-impact decisions

Disclosing AI use is helpful but does not replace auditability when the decision is material.

This is a high-impact decision, so transparency and auditability must be created before the committee relies on AI-assisted analysis.


Question 36

Topic: Framework Structure and Controls

A retail bank is preparing to move a GenAI assistant for drafting customer email replies from pilot into broader use next month. An internal audit reviewer asks how the bank will ensure staff use AI responsibly and consistently.

Exhibit: AI Assistance Plan (excerpt)

Stage: Activation readiness (draft)
Use: GenAI drafts replies; agent edits and sends (HITL)
Risk tier: High (customer comms; regulated)
Guidelines: "In progress" (not yet published)
Training: Optional 45-min lunch-and-learn; attendance not tracked
Competency check: None defined
Access: Any agent with SSO can enable the tool

Based on the exhibit, what is the best next governance action to define training and enablement controls before go-live?

  • A. Rely on the vendor’s standard user training materials and publish internal guidelines after rollout
  • B. Proceed with go-live because HITL editing is sufficient as the primary control
  • C. Limit the control to a manager broadcast email describing do’s and don’ts, since detailed training slows delivery
  • D. Implement role-based mandatory training with tracked completion and a competency check, and gate access/go-live on passing it

Best answer: D

What this tests: Framework Structure and Controls

Explanation: The exhibit shows a high-risk, regulated use case with unpublished guidelines, optional untracked training, and no competency check while access is open to all agents. AIPGF training and enablement controls should be proportionate to risk and embedded as go/no-go criteria at the Activation gate. The most defensible action is to require role-based training and verify competency before granting access and going live.

Training and enablement controls in AIPGF are how you make “responsible use” operational: clear guidance plus evidence that users understand and can apply it. Here, the risk tier is high and access is broad, yet guidelines are not published and training is optional with no tracking or competency check—this is not Activation-ready.

A proportionate control set is to:

  • Publish role-based AI usage guidelines (including HITL expectations and escalation)
  • Require training completion and attestations (tracked)
  • Run a competency check (scenario-based) before access
  • Gate go-live and tool enablement on meeting these criteria

HITL alone is not a substitute for user competence and consistent practices; the key is enforceable enablement with auditable evidence.

High-risk, regulated use with broad access requires enforced guidance and demonstrated competence before Activation, not optional awareness.


Question 37

Topic: Framework Structure and Controls

A customer-services program wants to add a GenAI “agent assist” feature to draft responses for human agents. Go-live is in 6 weeks, and internal audit has flagged “low traceability of AI-influenced decisions” as a concern. The service is high visibility, but the AI will not send messages directly to customers (HITL required). The project already uses a standard business case and two stage gates: Design approval and Go-live approval.

Which approach best balances fast delivery with required AIPGF governance controls and familiar project governance concepts?

  • A. Accelerate by letting the vendor own AI decision rights and using user disclaimers instead of additional controls or assurance
  • B. Proceed under the existing gates without AI-specific artifacts, since HITL means the feature is just a drafting tool
  • C. Update the business case with AI-specific benefits/harms, add an AI Assistance Plan and decision log as controls, and include go/no-go criteria plus targeted assurance evidence at the next two stage gates
  • D. Pause delivery until an independent external certification is completed for the GenAI capability before any pilot or gate review

Best answer: C

What this tests: Framework Structure and Controls

Explanation: The best tradeoff is to embed AIPGF elements into the project’s existing governance so the AI change is traceable, reviewable, and auditable without stalling delivery. Updating the business case clarifies value and acceptable risk, while AI-specific controls (AI Assistance Plan, decision log) and stage-gate evidence provide proportionate assurance for a high-visibility service under HITL.

This scenario is about mapping AIPGF elements to familiar governance mechanisms: business case (value and risk appetite), stage gates (go/no-go decision points), controls (required ways of working and documentation), and assurance (independent confidence that controls work). With audit pressure and high visibility, the project needs AI-specific traceability, but the HITL constraint allows a proportionate approach rather than a full stop.

A balanced approach is to:

  • Extend the business case to include AI-enabled benefits and potential harms.
  • Implement controls such as an AI Assistance Plan (usage boundaries, escalation, accountability) and decision logging for AI-influenced outputs.
  • Add clear go/no-go criteria and targeted assurance evidence at Design approval and Go-live approval (e.g., testing results, monitoring readiness).

This preserves speed while meeting Transparency and Human-Centricity expectations through auditable decision rights and evidence.

It integrates AIPGF artifacts into existing business case, stage gates, controls, and assurance with proportionate evidence for auditability without creating a parallel process.


Question 38

Topic: Lifecycle Governance (Initiate to Operate)

A benefits team is delivering a GenAI-assisted weekly performance report. After two sprints:

  • Business owners refuse to sign off because it is unclear which insights were AI-generated.
  • Analysts admit they used personal AI tools to draft sections “to save time.”
  • The project cannot show evidence of what prompts/data were used or what reviews occurred.
  • The sponsor says cycle time has not improved and asks to pause rollout.

In the Foundation stage, what is the most likely underlying cause of this governance breakdown?

  • A. AI assistance objectives and scope were not defined
  • B. The team selected an unsuitable GenAI tool
  • C. The project has insufficient benefits tracking metrics
  • D. Stakeholders are resistant to adopting AI outputs

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: The symptoms point to uncontrolled and inconsistent AI usage: shadow tools, missing traceability, and unclear accountability for AI-generated content. In AIPGF Foundation, the primary control is to establish explicit objectives and scope for AI assistance (including permitted tools, intended use, and what requires HITL review). Without that baseline, sign-off criteria and evidence expectations are impossible to enforce.

This is a Foundation-stage failure to define the objectives and scope of AI assistance (what AI will do, what it will not do, and under what conditions). When the scope is unclear, people optimize for speed by using whatever tools are available (shadow AI), and reviewers cannot determine what they are approving or what evidence should exist. A clear AI Assistance Plan (or equivalent) typically sets: intended tasks and boundaries, approved tools and data sources, required human checks and sign-offs (HITL), and minimum evidence/traceability (e.g., prompt/decision logging). Once those are defined, benefits tracking and tool selection can be assessed against agreed use-cases and acceptance criteria.

Without clear boundaries for what AI will and will not do, teams improvise, evidence is inconsistent, and HITL sign-off cannot be meaningfully applied.


Question 39

Topic: Lifecycle Governance (Initiate to Operate)

In the APMG AI Project Governance Framework (AIPGF), which Principle most directly supports updating organizational guidance (for example, policies, templates, and approved tools) based on evaluation outcomes and lessons learned?

  • A. Adaptability
  • B. Transparency
  • C. Human-in-the-Loop (HITL)
  • D. Human-Centricity

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Updating policies, templates, and tool approvals after evaluating outcomes is an example of governance learning and continuous improvement. In AIPGF, that behavior is explicitly driven by the Adaptability Principle, which expects governance to evolve as evidence emerges across the lifecycle.

The concept being tested is the AIPGF Principle that governs how an organization responds to what it learns. During the Evaluation stage, teams validate whether the AI solution delivered the intended benefits and whether risks and controls performed as expected. Under the Adaptability Principle, those insights must be fed back into organizational guidance so future projects start with better defaults (for example, updated templates, revised approval criteria, or changes to the list of approved tools). This is how AIPGF makes governance iterative rather than static, strengthening assurance and delivery over time.

Key takeaway: evaluation outcomes should trigger updates to governance assets under Adaptability, not just a project-level closure report.

Adaptability emphasizes continuously refining governance, controls, and guidance based on what is learned during evaluation and operation.


Question 40

Topic: Roles, Responsibilities, and Accountabilities

A bank used a third-party agency to help deliver a GenAI-assisted “complaints response” capability. During the Evaluation gate, Internal Audit asks for traceability evidence (who approved AI-assisted outputs, what prompts/models were used, and how vendor content was reviewed). The project team finds:

  • No recorded HITL sign-offs for responses supplied by the vendor
  • The vendor’s staff used a public LLM outside the bank environment to draft content (“faster turnaround”)
  • The vendor cannot provide prompt/version history or a decision log (“proprietary process”)
  • Benefits are below forecast, and Operations is pushing back on rollout due to low trust

What is the most likely underlying cause of this governance breakdown?

  • A. The project team did not chase the vendor often enough for missing documents
  • B. The GenAI model’s quality was insufficient, so the business lost confidence
  • C. Operations is risk-averse and is resisting change regardless of evidence
  • D. Procurement failed to contractually require audit-ready traceability and HITL accountabilities for vendor AI use

Best answer: D

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The pattern points to a procurement and onboarding control gap: the vendor was not bound to provide traceability artifacts or follow the bank’s HITL approval process. When third parties contribute to AI-assisted deliverables, auditability must be designed into the contract and ways of working (e.g., required logs, evidence, and review/approval rights). Without those requirements, missing sign-offs and “proprietary” black-box creation are predictable outcomes.

This is primarily a third-party governance and procurement failure, not a documentation follow-up problem. When a vendor contributes to AI-assisted deliverables, auditability and traceability depend on upfront, enforceable requirements: what evidence must be produced (prompt/model/version history, data sources, review records), who has approval authority (HITL), and what tools/environments are permitted.

In practice, this is addressed by building governance controls into the SOW/contract and vendor onboarding, for example:

  • Mandate traceability artifacts (AI Assistance Plan, decision log, version history)
  • Define HITL checkpoints and sign-off accountabilities
  • Restrict or approve tool usage (no unapproved public LLMs)
  • Specify audit access, retention, and evidence handover at gates

A model-quality issue or stakeholder attitude may explain dissatisfaction, but it does not explain the absence of an auditable trail and uncontrolled vendor AI usage.

Without explicit contractual evidence requirements and decision rights, the vendor can deliver AI-assisted work without a verifiable audit trail or approved HITL controls.

How to interpret your result

  • 85% or higher: you are probably reading the practitioner scenarios with enough governance discipline for final review.
  • 70-84%: review whether misses come from role confusion, weak evidence judgment, lifecycle gate timing, or poor stakeholder communication.
  • Below 70%: return to focused module pages before another full practitioner run.

Practitioner readiness depends on choosing the best governance action under constraints, not just recognizing AIPGF terminology.

What PM Mastery adds after this diagnostic

This page gives one complete public diagnostic. PM Mastery adds the larger AIPGF Practitioner bank, scenario drills, mixed timed mocks, progress tracking, and explanations that connect each answer to evidence, controls, decision rights, and stakeholder communication.

Retake protocol

Retake only after you can explain each missed scenario in your own words. If the same governance failure repeats, drill that module before doing another 40-question run.

Continue with full practice

Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Focused topic pages

Free review resource

Read the AIPGF Practitioner guide on PMExams.com for concept review, then return here for PM Mastery practice.

Revised on Thursday, May 14, 2026