Try 40 free AIPGF Foundation practice questions across the exam domains, with answers and explanations, then continue in PM Mastery.
This free full-length practice exam contains 40 original PM Mastery questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some exam sponsors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
For concept review before or after this set, use the AIPGF Foundation guide on PMExams.com.
Set a 40-minute timer and answer the 40 questions in one pass. Keep notes on whether each miss came from governance structure, lifecycle controls, accountabilities, responsible-AI principles, culture, or assurance evidence.
Suggested timing checkpoints:
| Question range | Target elapsed time |
|---|---|
| 1-10 | 10 minutes |
| 11-20 | 20 minutes |
| 21-30 | 30 minutes |
| 31-40 | 40 minutes |

| Item | Detail |
|---|---|
| Issuer | APMG International |
| Exam route | AIPGF Foundation |
| Official exam name | APMG AI Project Governance Framework (AIPGF) Foundation |
| Full-length set on this page | 40 questions |
| Exam time | 40 minutes |
| Topic areas represented | 8 |

| Topic | Approximate official weight | Questions used |
|---|---|---|
| Foundations of AI Project Governance (AIPGF) | 12% | 5 |
| AI in Projects and Organizations (Context) | 12% | 5 |
| Framework Structure and Controls | 13% | 5 |
| Roles, Responsibilities, and Accountabilities | 13% | 5 |
| Principles for Responsible and Trustworthy AI | 13% | 5 |
| Values, Behaviours, and Culture | 12% | 5 |
| Lifecycle Governance (Initiate to Operate) | 13% | 5 |
| Assurance, Metrics, and Continuous Improvement | 12% | 5 |
Topic: Lifecycle Governance (Initiate to Operate)
A project has just closed (AIPGF Stage 3: Evaluation). The team used a generative AI assistant to draft status reports and meeting minutes under an AI Assistance Plan with HITL review and a small prompt library.
At closure, the project manager archives the deliverables but does not document or share lessons learned about AI tool selection, actual usage patterns, and how effective the governance controls were. The organisation plans to reuse the same AI tool on several projects next month.
What is the most likely near-term impact of this omission?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In the Evaluation stage, documenting and sharing lessons learned is a key mechanism for demonstrating governance effectiveness and enabling safe scaling. If AI tool choices, usage, and control outcomes are not captured, the organisation lacks audit-ready evidence and repeatable guidance. Near term, this typically triggers additional assurance work and slows reuse, reducing trust and timely value realisation.
The core concept is Evaluation-stage learning for governance assurance: closing an AI-assisted project should leave a usable record of what was selected, how it was actually used, and whether controls (such as HITL review and data-handling rules) were effective. If these lessons are not documented and shared, stakeholders cannot confidently reuse the approach, because transparency and auditability are weakened.
In practice, the near-term effects are usually:
- additional assurance work, with controls re-validated before the tool is reused;
- slower rollout to the projects planning to reuse the tool next month;
- reduced stakeholder trust and delayed value realisation.
The closest tempting alternative is blaming technical model “degradation,” but the omission described is primarily a governance evidence gap, not a model lifecycle change.
Without captured lessons learned, there is little auditable evidence that controls worked, so assurance and governance are likely to require re-validation before reuse.
Topic: Framework Structure and Controls
A delivery team is in the Activation stage and plans to use a generative AI tool to draft parts of a tender specification. The client’s assurance team requires strict auditability: they must be able to trace each AI-assisted clause back to what was input, what the AI produced, and who approved the final wording.
Which option is the most appropriate deliverable to produce in this stage to meet that requirement?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: In Activation, governance is put into operation through concrete controls and artifacts. When strict auditability is the dominant requirement, the key deliverable is recorded evidence that links AI inputs and outputs to human decisions and approvals. A decision log provides the traceability and assurance trail auditors typically need.
The Activation stage focuses on implementing and operating AI governance so AI assistance can be used safely and credibly. A useful way to distinguish candidate deliverables within a stage is to ask whether each one produces durable, inspectable evidence or merely records intent.
With strict auditability as the deciding factor, the most appropriate Activation-stage deliverable is an AI decision log (or equivalent record) that captures prompts/inputs, AI outputs, referenced sources where applicable, and HITL approvals for what is released. The key takeaway is that audits require evidence, not just intentions or meetings.
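To make that evidence chain concrete, here is a minimal sketch of what one decision-log entry could capture; the field names and structure are illustrative assumptions, not an AIPGF-prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionLogEntry:
    """One auditable record linking an AI-assisted clause to its inputs and approval."""
    clause_id: str          # which part of the tender specification this entry covers
    prompt: str             # what was input to the AI tool
    ai_output: str          # what the AI produced, before human editing
    sources: list[str]      # referenced sources, where applicable
    final_wording: str      # the wording actually released
    approver: str           # who approved the final wording (the HITL check)
    approved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

An auditor can then trace any clause end to end by filtering the log on clause_id, which is exactly the trail the scenario's assurance team requires.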
A decision log is a tangible Activation-stage artifact that provides end-to-end traceability and evidence for audit.
Topic: Values, Behaviours, and Culture
A portfolio office is rolling out an AI assistant to draft weekly status reports and risk summaries for multiple projects. Some project managers are resisting because they feel AIPGF governance “slows delivery.”
The programme manager wants to maintain momentum while preserving controls (e.g., transparency of AI use, HITL review, and auditable decisions). Which action should they NOT take?
Best answer: C
What this tests: Values, Behaviours, and Culture
Explanation: Maintaining momentum in AI-assisted delivery means removing friction without removing governance. The anti-pattern is treating approvals and accountability as after-the-fact paperwork, because it breaks transparency and weakens HITL assurance. Momentum should come from proportional controls (pilot, coaching, clear guidance), not bypassing them.
In AIPGF-aligned change management, resistance is addressed by making adoption easy and safe: provide clarity on intended AI use, keep Human-in-the-Loop review for AI-assisted deliverables, and ensure decisions are traceable. “Speed” that comes from skipping or delaying approvals is false momentum, because it creates rework, undermines stakeholder confidence, and reduces auditability when AI output quality (e.g., hallucinations) is challenged.
Practical actions that preserve controls while sustaining pace include:
- piloting the assistant with a small group before wider rollout;
- coaching resistant project managers and publishing clear guidance on intended AI use;
- keeping HITL review for AI-assisted deliverables while streamlining how approvals are requested and recorded.
The key takeaway is to streamline governance steps, not postpone or bypass them.
Retroactively approving AI-assisted outputs weakens control, auditability, and trust even if it feels faster.
Topic: Lifecycle Governance (Initiate to Operate)
A programme team plans to use a generative AI assistant to draft status reports and first-pass requirements. The sponsor asks what AIPGF Stage 2 (Activation) is intended to achieve and how it fits with delivery/execution.
Exhibit: AI Assistance Plan (excerpt)
Stage gate: Move from Foundation to Activation
- Use cases approved; data classification agreed
- HITL roles assigned (Author, Reviewer, Accountable approver)
- Prompt/templates placed under version control
- Logging enabled for prompts/outputs; audit sample plan set
- Team onboarding scheduled; start controlled use in Sprint 1
Which interpretation best matches the intent of Stage 2 (Activation)?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Stage 2 (Activation) is where the team implements what was agreed in Foundation so AI-assisted work can start safely during delivery/execution. This includes enabling access, training, setting HITL approval routes, and putting monitoring/logging in place to support control and assurance as work products are created.
Activation is the “make it real” stage: it embeds the AI governance decisions from Foundation into the delivery/execution phase so teams can produce project outputs using AI under controlled conditions. The exhibit shows typical Activation outcomes—assigning HITL decision rights, placing prompts/templates under change control, enabling logging for auditability, and onboarding the team—so that AI use starts in sprints/work packages with oversight.
In delivery terms, Activation maps to executing and controlling work: the team begins using AI for approved use cases while applying the defined controls, evidence capture, and review/approval workflow. Evaluation comes later to assess performance, benefits, and whether to scale, adjust, or stop.
Activation turns Foundation decisions into day-to-day delivery controls, roles, and workflows so AI use can start under governance.
Topic: Foundations of AI Project Governance (AIPGF)
A project team uses a generative AI setup to speed up weekly reporting. A chat-based LLM is connected to the project workspace and automatically:
- updates the project schedule and related artefacts;
- drafts and sends status communications to stakeholders.
Recently, changes appeared in the schedule with no documented rationale, prompts cannot be produced for audit, the PMO says the tool was never approved, and stakeholders are challenging the credibility of the updates.
What is the most likely underlying cause of this AI governance breakdown?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The symptoms point to governance not being tailored to the AI tool category and level of autonomy. An integrated capability that updates plans and triggers communications behaves like an AI agent, which requires explicit decision rights, HITL checkpoints, and audit evidence (e.g., prompt/decision logs) within an AI Assistance Plan. Without that upfront governance, unapproved use, undocumented rationale, and stakeholder distrust are predictable outcomes.
In AIPGF, AI governance should start by identifying the AI tool category and the way it is used in the project, because different categories need different controls. A stand-alone LLM used for drafting is typically a copilot-style assist, while an integrated capability that changes project artefacts and initiates communications behaves like an AI agent (higher autonomy).
When an agent-like use is not formally captured in an AI Assistance Plan (Foundation/Activation), teams often miss essential governance controls:
- explicit decision rights and HITL checkpoints before artefacts are changed;
- approval of the tool itself through the organisation's approved-tool route;
- prompt/decision logging so changes can be audited and explained.
The root cause is therefore misclassification/under-scoping of the AI use, not the downstream symptoms (trust issues or missing files).
The team treated an autonomous, integrated AI agent as a simple copilot/LLM use, so HITL sign-off, approved-tool controls, and prompt/decision logging were never defined.
Topic: Assurance, Metrics, and Continuous Improvement
A public-sector PMO has 12 parallel projects using generative AI to draft business cases, requirements, and stakeholder comms. An internal assurance review finds:
- unapproved tools in use on several projects;
- missing prompt/output records;
- inconsistent HITL sign-off;
- uneven benefits across projects.
The PMO wants to improve AI governance maturity across the whole portfolio. What is the most likely underlying cause of these symptoms?
Best answer: B
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: The pattern of inconsistent controls across many projects points to a systemic maturity gap, not a single project mistake. Continuous improvement at scale requires capturing lessons and assurance findings, then sharing standard practices (e.g., minimum controls, templates, decision rights) so every project applies a consistent baseline.
At portfolio scale, repeated AI governance failures (unapproved tools, missing prompt/output records, inconsistent HITL sign-off, and uneven benefits) usually indicate a weak continuous-improvement capability rather than isolated non-compliance. Good practice in AIPGF terms is to operationalise learning across projects: use assurance and evaluation findings to update shared guidance, and then embed it into activation for new work.
Practical mechanisms include:
- a portfolio-level minimum-controls baseline (approved tools, HITL sign-off, prompt/output records);
- shared templates and decision-rights guidance, updated from assurance and evaluation findings;
- embedding the updated guidance into Activation for new work.
The key takeaway is that raising baseline maturity depends on systematic sharing and standardisation, not only correcting individual symptoms.
Without a shared playbook and feedback loop, projects apply inconsistent controls, tools, and assurance, producing the observed breakdowns at scale.
Topic: Foundations of AI Project Governance (AIPGF)
You are in the Foundation stage of a customer-service transformation programme. The programme manager wants to use generative AI to speed up planning.
Constraints:
- AI use must be agreed before the Activation gate.
- HITL review is required for AI-assisted content.
- Some information is confidential, and stakeholder trust must be maintained.
Which is the BEST next action?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: An AI-assisted project/programme exists when AI is used to support project management and/or to create project deliverables. In Foundation, the right move is to explicitly record intended AI uses, distinguishing internal PM assistance from AI used in deliverables, and obtain the required governance agreement before the Activation gate while honoring HITL and data constraints.
In AIPGF, a project/programme is AI-assisted when AI is used to help manage the work (e.g., drafting a charter, plans, status reports) and/or to produce deliverables (e.g., customer communications, requirements content, design outputs). The distinction matters because AI used in deliverables typically needs stronger governance: clear decision rights, HITL approval, and evidence that confidentiality and stakeholder trust requirements are met.
Because this is the Foundation stage and the scenario states AI use must be agreed before the Activation gate, the best next action is to capture the intended AI usage in an AI Assistance Plan, explicitly separating:
- internal PM assistance (e.g., drafting the charter, plans, and status reports) from
- AI used to produce deliverables (e.g., customer communications and requirements content).
Then route it through the Activation governance gate for approval and accountability.
It defines the work as AI-assisted, distinguishes PM assistance from deliverable use, and creates the required approval/evidence before Activation.
Topic: Values, Behaviours, and Culture
A programme team is adopting generative AI to draft stakeholder updates and first-pass requirements. The AIPGF repository contains several templates, but the programme is small, time-boxed, and handles some commercially sensitive information.
The sponsor says: “We must follow the AIPGF, but I don’t want bureaucracy. Use the spirit of the framework.”
Which approach BEST reflects embracing the spirit of the AIPGF while maintaining appropriate governance?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: Embracing the spirit of the AIPGF means achieving its governance outcomes (proportionate assurance, clear accountability, transparency, and human oversight) without treating templates as mandatory paperwork. A tailored, lightweight set of artifacts and controls can be sufficient when it is driven by risk and the use case. In this scenario, sensitive information and external communications still require defined boundaries and HITL approval.
The spirit of the AIPGF is outcome-focused: use professional judgement to tailor governance so AI is used ethically, efficiently, and effectively while still being controlled and auditable. Templates are aids, not the goal. In a small, time-boxed programme, you can keep governance lightweight, but you should still preserve core outcomes such as:
- clear accountability and HITL approval for externally shared content;
- defined boundaries for commercially sensitive information;
- enough transparency and records to keep decisions auditable.
This balances Human-Centricity and Transparency with pragmatic delivery, rather than maximizing documentation or speed at the expense of safeguards.
It applies AIPGF intent by scaling controls to risk while preserving accountability, transparency, and auditability.
Topic: Values, Behaviours, and Culture
You are in the Activation stage, producing a steering-committee progress report. A team member used a generative AI tool to draft the “Key Risks and Mitigations” section based on their notes.
Exhibit: AI Assistance Plan (excerpt)
Use case: Draft report narrative (AI-assisted)
Data classification: Confidential (no client identifiers in prompts)
Tool: Approved enterprise AI only
HITL: PM to sense-check; Risk Owner to validate content
Governance gate: Steering committee pack requires evidence of validation
The draft includes confident-sounding risk statements and two mitigations that seem “too generic.” Stakeholder trust is fragile after a recent reporting error.
What is the BEST next action?
Best answer: C
What this tests: Values, Behaviours, and Culture
Explanation: Sensibility behaviours require treating AI-assisted text as a draft that must be sense-checked and validated by accountable humans before it becomes an official deliverable. Here, the AI Assistance Plan mandates HITL validation by risk owners and evidence for the governance gate, and stakeholder trust is already fragile. The best next step is to verify the content against authoritative sources and record the validation trail before submission.
In AIPGF, sensibility behaviours mean avoiding overconfidence in AI outputs, actively sense-checking for plausibility, and validating against authoritative sources with clear accountability. In the Activation stage, AI can accelerate drafting, but the deliverable still must meet governance expectations and maintain stakeholder trust.
Given the AI Assistance Plan constraints, the PM should:
- sense-check the AI-drafted risk statements and generic-looking mitigations for plausibility;
- have the Risk Owner validate the content against authoritative sources (e.g., the risk register);
- capture the validation evidence required for the steering-committee pack before submission.
Improving prompts can help wording, but it does not replace human validation and accountable approval for AI-assisted content.
This applies HITL sensibility by checking AI output against accountable sources and capturing validation evidence required by the governance gate.
Topic: Roles, Responsibilities, and Accountabilities
A project team plans to procure a third-party generative AI tool to draft project deliverables. Before any project information is uploaded, the procurement pack requires the supplier to provide evidence of: where prompts/outputs are stored, whether data is used for model training, security controls, retention/deletion, IP ownership of outputs, and what audit logs the project can access.
Which AIPGF principle is this practice primarily applying?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: This procurement approach is about ensuring the supplier’s AI use is explainable and inspectable through clear disclosures and evidence (data flows, retention, training use, IP terms, and audit logs). That is the essence of the AIPGF Transparency principle, enabling informed approval and ongoing assurance when using third parties.
In AIPGF, third-party AI tools introduce governance risks (confidentiality, privacy, IP leakage, security exposure, and limited auditability). The Transparency principle addresses these risks by requiring clear, evidence-backed visibility of how the AI service operates and handles project information.
In practice this means requesting (and contracting for) items such as:
- where prompts and outputs are stored;
- whether project data is used for model training;
- security controls and retention/deletion terms;
- IP ownership of outputs;
- the audit logs the project can access.
Human-Centricity and Adaptability may still matter, but the described control is primarily about disclosure and audit-ready evidence.
It focuses on making third-party data handling, IP, security, and auditability visible and evidenced so decisions can be assured.
Topic: Foundations of AI Project Governance (AIPGF)
You are onboarding to a project where a generative AI assistant will be used. Review the excerpt from the AI Assistance Plan.
AI Assistance Plan (excerpt)
Use case 1: Draft Project Charter (v0.x) from sponsor notes
Use case 2: Propose initial project risks and mitigations
Use case 3: Create stakeholder map (influence/interest grid)
Use case 4: Draft weekly status update for PM review
HITL: PM/risk owner validate before sharing externally
Which interpretation best reflects how the AI tool is intended to assist project work?
Best answer: B
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The exhibit lists typical AI-assisted project tasks: drafting a charter, proposing risks, mapping stakeholders, and drafting communications. These are supportive activities that accelerate PM work while keeping accountability with people. The HITL line confirms humans validate outputs before external use.
In AI-assisted projects, a common pattern is using AI to accelerate knowledge work by creating first drafts and structured analyses, then applying Human-in-the-Loop review before anything is treated as an agreed project artifact. In the exhibit, the AI is explicitly planned to:
- draft the Project Charter from sponsor notes;
- propose initial risks and mitigations;
- create the stakeholder map;
- draft weekly status updates for PM review.
These are standard examples of AI supporting project delivery (document drafting, risk assessment support, stakeholder mapping, and communications), not autonomous decision-making or replacing engagement. The key takeaway is “AI drafts; humans validate and approve.”
The plan shows AI drafting the charter, risks, stakeholder mapping, and status communications with human validation.
Topic: Assurance, Metrics, and Continuous Improvement
A PMO is rolling out AI assistance (summarising meetings, drafting status reports, and generating RAID items) across multiple projects. The assurance lead proposes using the AI Project Governance Framework (AIPGF) plus the AI Project Governance Capability Maturity Model (AIPG-CMM) to tailor controls and plan continuous improvement.
The sponsor says, “We’re already mature enough on governance—just implement the framework.”
What should the assurance lead ask to verify first to apply the AIPG-CMM in a way that complements the AIPGF?
Best answer: C
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: AIPG-CMM is a capability maturity model used to assess how developed an organisation’s AI governance practices are and to define a realistic improvement path. To complement AIPGF, you first need a baseline maturity level (supported by evidence) and an agreed target maturity so governance activities, assurance, and metrics can be proportionate to current capability and risk appetite.
The AIPG-CMM is a capability maturity model for AI project governance: it helps an organisation assess its current governance capability (what practices exist and how consistently they are applied) and define a target maturity level to guide improvement. It complements the AIPGF by informing how rigorously to apply AIPGF controls and artifacts across the life cycle (Foundation, Activation, Evaluation), and by turning “use the framework” into a measurable improvement roadmap.
To use a maturity model effectively, the first input is the maturity baseline and the intended target, supported by evidence (e.g., repeatable processes, decision rights, assurance checks, metrics, and auditability). Without that, tailoring and continuous improvement become guesswork.
AIPG-CMM complements AIPGF by baselining and targeting governance capability so controls and improvements can be scaled appropriately.
Topic: AI in Projects and Organizations (Context)
A programme is using an enterprise-approved generative AI assistant to help produce project documentation. The AI Assistance Plan states:
- AI may assist documentation and decision support only with HITL review;
- AI-assisted content must be clearly labelled;
- drafting must use approved internal sources.
Which use of the AI should the project manager AVOID?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: Under AIPGF, AI may assist documentation and decision support only with Human-in-the-Loop controls and transparent labelling. The use to avoid is any practice that lets unverified AI output directly drive governance decisions. Submitting AI-generated benefits figures without evidence or review creates assurance, trust, and auditability failures.
The core distinction is between acceptable AI assistance (drafting, summarising, proposing options) and unacceptable AI substitution for accountable judgment or evidence. In AI-assisted projects, AI outputs can be non-deterministic and may hallucinate, so governance requires HITL review, clear labelling, and decision rights staying with accountable roles. In this scenario, drafting content from approved internal sources and then reviewing/marking it fits Human-Centricity and Transparency and supports auditability. By contrast, sending unreviewed, unverified AI-generated benefits estimates to a steering group turns AI into an implicit decision-maker and introduces unsupported claims into formal reporting, undermining assurance and stakeholder trust.
This bypasses HITL and transparency and risks hallucinated/unsupported figures influencing governance decisions.
Topic: Values, Behaviours, and Culture
In the APMG AI Project Governance Framework (AIPGF), which term best describes the expectation to tailor AI governance to the project’s context while still remaining aligned with organisational policy and relevant external standards?
Best answer: B
What this tests: Values, Behaviours, and Culture
Explanation: The AIPGF principle of Adaptability focuses on pragmatic tailoring: applying the framework in a way that fits the project’s size, risk, and maturity while still complying with mandated organisational controls and external standards. It supports “fit for purpose” governance rather than a one-size-fits-all approach.
Adaptability in AIPGF is the principle that governance for AI-assisted projects should be scaled and tailored to real project conditions (e.g., use-case criticality, data sensitivity, uncertainty, and organisational AI maturity) while staying within non-negotiable constraints such as organisational policy, assurance requirements, and applicable external standards. In practice, it drives proportional controls and evidence: more rigor where risk is higher, lighter touch where risk is low, but never bypassing required approvals, safeguards, or auditability. This embodies the framework’s pragmatic sensibility—use what is needed to govern effectively, not what is merely possible or fashionable.
Adaptability is tailoring governance proportionally to context without breaking required policies or standards.
Topic: Foundations of AI Project Governance (AIPGF)
A programme is introducing an AI tool to draft personalised eligibility decision letters to citizens. The letters will be sent externally and may include personal data.
Exhibit: AI Assistance Plan (excerpt)
Use case: Draft eligibility decision letters
Output use: Sent to citizens with minimal edits
Data: Names, addresses, case notes (PII)
Risk/impact: High (wrong decision may cause harm)
Proposed oversight: Human-on-the-loop
Control: Weekly sample audit of 5% of letters
Based on the exhibit, what is the best next governance action?
Best answer: A
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The exhibit states the output will be sent externally with minimal edits and has high impact if wrong, using personal data. That combination calls for human-in-the-loop oversight so a human explicitly reviews and approves each AI-assisted letter (and its underlying decision) before release, rather than relying on after-the-fact monitoring.
Human-in-the-loop (HITL) means a human must review and approve each AI-assisted output (or the decision it supports) before it is used, creating a hard control point. Human-on-the-loop (HOTL) is supervisory monitoring (sampling, dashboards, exception review) where outputs can proceed without per-item approval.
In the exhibit, the AI output is used externally, includes PII, will be sent with minimal edits, and the impact is rated high. In AIPGF terms, this risk/impact profile requires HITL for release/communication decisions so accountability is explicit and preventable harms are stopped before dissemination. HOTL sampling can still be added as an additional assurance layer, but it is not sufficient as the primary control here.
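As a rough illustration of the difference, assuming hypothetical `reviewer_approves` and `audit` callables, HITL blocks each item until a human decides, while HOTL only inspects a sample after the fact:

```python
import random

def release_letter(letter, reviewer_approves) -> bool:
    """HITL: a named human must approve this specific letter before it is sent."""
    return reviewer_approves(letter)  # preventive control: nothing goes out unapproved

def weekly_sample_audit(sent_letters, audit, rate=0.05):
    """HOTL: letters went out without per-item approval; audit a 5% sample afterwards."""
    sample = random.sample(sent_letters, max(1, int(len(sent_letters) * rate)))
    return [audit(letter) for letter in sample]  # detective control, after dissemination
```

For the exhibit's profile (external, PII, high impact), a release_letter-style per-item gate is the primary control; weekly_sample_audit can remain as an additional assurance layer.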
High-impact, externally used outputs with PII require per-item human review/decision, not only retrospective sampling.
Topic: AI in Projects and Organizations (Context)
During Activation of a public-sector procurement project, the team uses a cloud GenAI assistant to summarise bidder proposals and draft evaluation notes. The AI Assistance Plan lists the information as “Official” but leaves Records retention / prompt-output archiving as “TBD”, and no archive is configured. Two weeks later, a supplier requests the evaluation record under procurement rules.
What is the most likely near-term impact?
Best answer: B
What this tests: AI in Projects and Organizations (Context)
Explanation: Leaving records retention and prompt/output archiving undefined creates an immediate auditability gap. In regulated settings like public procurement, the team must be able to produce decision records and supporting evidence on request. Without retained AI-assisted evaluation artefacts, the project’s risk exposure to audit findings or supplier challenge rises quickly.
A common legal/compliance challenge with AI-assisted work is records retention: if AI-generated or AI-assisted content contributes to decisions, it becomes part of the project record. In public-sector procurement, transparency and traceability are especially important because decisions can be audited or challenged.
When prompts, outputs, and HITL approvals are not retained, the near-term consequence is an inability to evidence how conclusions were reached (who approved what, using which inputs). A proportionate governance response is to update the AI Assistance Plan to specify (a minimal sketch follows this list):
- which records are in scope (prompts, outputs, AI-assisted evaluation notes, HITL approvals);
- where they are archived and for how long;
- who can retrieve them when a supplier, auditor, or review requests the record.
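Here is one hypothetical way those retention decisions might be written down; the keys and values are illustrative, not an AIPGF-prescribed structure.

```python
# Illustrative retention section for the AI Assistance Plan (all values are examples).
RETENTION_POLICY = {
    "records_in_scope": ["prompts", "ai_outputs", "ai_assisted_notes", "hitl_approvals"],
    "archive_location": "procurement records system, evaluation file",
    "retention_period": "per the procurement records schedule",  # confirm with records/legal
    "access_on_request": ["assurance team", "procurement audit", "supplier-challenge review"],
}
```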
The key takeaway is that missing retention controls shows up quickly as an auditability and challenge-risk issue, not as a performance or training problem.
Without retained prompts/outputs and AI-assisted notes, the project lacks an auditable record to justify procurement decisions.
Topic: Principles for Responsible and Trustworthy AI
During Activation, a programme team plans to use a generative AI tool to produce weekly recommendations for schedule recovery actions. The sponsor will only approve use if the team can show that each AI-assisted recommendation is transparent—it includes the underlying rationale and supporting evidence (not just suggested actions) and is checked via HITL.
Which artifact/evidence best validates this control is designed and ready to operate?
Best answer: B
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Transparency in AI-assisted recommendations requires more than an output—it needs documented rationale, evidence/source traceability, and a defined human check before recommendations are adopted. An AI Assistance Plan is purpose-built to specify these mandatory fields and the HITL review steps, demonstrating operational readiness of the control rather than outcomes or roles alone.
Under the Transparency principle, AI-assisted recommendations should be explainable and defensible: what is being recommended, why, and what evidence supports it. In AIPGF, the strongest readiness evidence is an artifact that defines the control so it can be executed consistently (not just a record of past text).
An AI Assistance Plan can operationalise this by specifying, for the recommendation workflow:
- mandatory fields for each recommendation (underlying rationale plus supporting evidence/sources);
- the defined HITL check, and who performs it, before a recommendation is adopted;
- how each completed review is evidenced for assurance.
Artifacts that only show benefits, roles, or unstructured AI text do not demonstrate that the transparency control will reliably occur each time.
It directly evidences a defined, repeatable control that forces AI recommendations to include rationale and sources, with human review before use.
Topic: AI in Projects and Organizations (Context)
A PMO has completed two successful pilots using a generative-AI assistant to draft project status reports and risk summaries. The CIO now wants to roll this capability out to 40 projects and asks for evidence that the organization has progressed from “piloting” to “scaling” so governance depth can be increased proportionately (e.g., clearer decision rights, consistent HITL checks, and monitoring).
Which artifact/evidence best validates this readiness in the current context?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: To tailor governance depth, the PMO needs evidence of organizational AI adoption maturity, not just pilot documentation or usage/benefit signals. An AIPG-CMM assessment is designed to validate whether repeatable controls, roles, and assurance practices exist to support scaling. It provides a structured basis to justify increasing governance beyond pilot-level oversight.
AI adoption maturity is reflected in whether the organization can apply AI governance consistently beyond isolated pilots. Moving from “piloting” to “scaling” typically requires evidence of repeatable, organization-level capabilities such as defined decision rights and accountabilities, standardized HITL validation, consistent risk and data controls, and readiness to monitor performance and issues across many projects.
An AIPG-CMM (capability/maturity) assessment is the most direct validation because it evaluates these capabilities in a structured way and can be supported by objective evidence (e.g., documented processes, approvals, training/assurance practices, monitoring routines). This is more suitable for proving scale-readiness than artifacts that describe a single pilot or measure adoption/benefits without demonstrating control effectiveness.
A maturity assessment directly evidences whether governance capabilities are in place to move from piloting to scaling.
Topic: Framework Structure and Controls
You are in Foundation for a programme that will use a generative AI assistant to help draft supplier evaluation summaries that feed a £30m contract award recommendation.
Constraints:
- Bidder information is Commercial-in-Confidence.
- The procurement director expects independent assurance; previous awards have been challenged.
- Supporting evidence must be available at the Activation Go/No-Go gate.
What is the BEST next action?
Best answer: A
What this tests: Framework Structure and Controls
Explanation: Because the AI-assisted outputs influence a high-value, challenge-prone procurement decision, the risk/impact is high and stakeholder confidence is a stated constraint. In AIPGF, that combination justifies independent assurance, planned up front and available as evidence at the Activation Go/No-Go gate, alongside the HITL arrangements.
In AIPGF, assurance is scaled to the decision’s risk/impact and to stakeholder expectations for trust and auditability. Here the AI assistance supports a major contract award recommendation, using Commercial-in-Confidence information, and the procurement director explicitly expects independent assurance due to prior challenge history. The best next action in Foundation is therefore to plan and initiate independent review/assurance (and capture it in the AI Assistance Plan) so that the required evidence exists before the Activation Go/No-Go.
Practical outcomes to evidence at the gate include:
- an independent review/assurance activity planned, resourced, and captured in the AI Assistance Plan;
- documented HITL arrangements for the AI-assisted evaluation summaries;
- the assurance evidence itself, available for the Activation Go/No-Go decision.
Tool choice matters, but it does not remove the need for proportionate independent assurance when the decision and trust stakes are high.
High-impact, contested decisions plus explicit stakeholder expectations require independent assurance to be planned and evidenced before the Activation gate.
Topic: Assurance, Metrics, and Continuous Improvement
A programme is using a generative AI tool to draft status reports and risk narratives. The PMO adds a quarterly review to its AI Assistance Plan to sample AI-assisted outputs, check compliance with data-handling rules, log findings, assign owners to corrective actions, and confirm changes have been embedded in ways of working.
In AIPGF, this practice most directly maps to which life cycle stage intent?
Best answer: C
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: The described activity is an ongoing assurance mechanism: scheduling periodic reviews, sampling outputs, documenting findings, and ensuring corrective actions are implemented. That aligns with the Evaluation stage intent of monitoring AI usage in operation and driving continuous improvement based on evidence. The emphasis is not initial setup or rollout, but sustained review and follow-through.
Planning periodic audits/reviews and following up on findings is a core Evaluation-stage activity in AIPGF. Once AI is being used to support project delivery, governance should include a repeatable cadence to test whether AI assistance is still being used as intended, whether controls are working (e.g., data handling and quality checks), and whether issues are being corrected rather than merely observed.
A good Evaluation loop typically includes (see the sketch after this list):
- a scheduled cadence for sampling AI-assisted outputs;
- compliance checks against data-handling and quality rules;
- logged findings with named owners for corrective actions;
- confirmation that the resulting changes have been embedded in ways of working.
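A brief sketch of the follow-through part of that loop, assuming hypothetical field names: findings carry a named owner and are tracked until the change is embedded, not just noted.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One logged finding from a quarterly review of AI-assisted outputs."""
    output_id: str
    issue: str              # e.g., a data-handling rule violation
    owner: str              # accountable for the corrective action
    status: str = "open"    # open -> corrected -> embedded in ways of working

def outstanding_actions(findings: list[Finding]) -> list[Finding]:
    """Return findings whose corrective actions are not yet embedded."""
    return [f for f in findings if f.status != "embedded"]
```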
This goes beyond “being transparent” and focuses on evidence-based assurance and continuous improvement at scale.
It is a planned, periodic assurance loop that audits AI use and tracks follow-up actions for continuous improvement.
Topic: Roles, Responsibilities, and Accountabilities
A programme has three projects using generative AI to draft status reports and requirements summaries. A project team proposes using a public cloud AI assistant to speed up document drafting.
Exhibit: AI Assistance Plan (decision rights excerpt)
Decision: Approve AI tool for INTERNAL use (no sensitive data)
Authority: Project Manager
Decision: Approve shared prompts/templates across projects
Authority: Programme Manager
Decision: Approve EXTERNAL/public AI use or sensitive/client data
Authority: Portfolio AI Governance Board
Based on the exhibit, what is the best next action?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The exhibit defines decision rights by risk and scope. Because the proposal involves a public/external AI assistant and could involve sensitive or client data, the approval must be made at the portfolio governance level. This ensures consistent risk appetite, assurance, and accountability beyond a single project or programme.
AIPGF decision rights should place higher-risk, higher-impact, and cross-organisational choices at higher governance levels to match accountability and risk appetite. The exhibit explicitly distinguishes routine internal AI use (project level) from programme-level standardisation decisions (shared prompts/templates) and from decisions that materially increase exposure (external/public AI use or sensitive/client data). Since the proposed tool is public/external and the scenario involves drafting project deliverables that may contain client or sensitive information, the decision must be escalated to the portfolio AI governance body for approval and any required controls (e.g., data handling constraints, HITL checks, assurance evidence).
When decision rights are clear in an AI Assistance Plan, the correct action is to follow the assigned authority rather than shortcutting via local approvals.
The plan assigns external/public AI use or sensitive/client data decisions to the portfolio governance level.
Topic: Assurance, Metrics, and Continuous Improvement
An internal AIPGF Evaluation-stage assessment of an AI-assisted programme management office (PMO) found recurring issues across multiple workstreams.
Exhibit: Assessment findings (excerpt)
- AI tool usage: 3 teams used unapproved generative AI tools.
- HITL: No named accountable approver for AI-generated schedule/risk updates.
- Traceability: Prompts/outputs not stored; decisions cannot be audited.
- Stakeholders: Sponsors report low confidence in AI-assisted reports.
- Benefits: Forecasted cycle-time reduction not evidenced.
Based on these results, what is the most likely underlying cause of the governance breakdown?
Best answer: C
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: The findings show systemic failures across approvals, traceability, and tool selection—not a single team behavior or tool defect. In AIPGF terms, this points to a gap in governance design and activation: roles and decision rights for HITL sign-off were not defined, and minimum controls (approved tools, prompt/output logging, assurance evidence) were not set up. An AI Assistance Plan is the primary artifact that consolidates these requirements proportionately.
Assessment results that show multiple, repeating control failures (unapproved tools, no accountable HITL approver, no auditable records, low trust, and weak benefits evidence) typically indicate a governance activity gap rather than an isolated operational issue. In AIPGF, this most often traces back to inadequate Foundation/Activation work: the programme did not establish and embed an AI Assistance Plan (and supporting RACI/decision log expectations) that defines who can approve AI use, who signs off AI-assisted outputs, what must be recorded for transparency, and how benefits will be evidenced. When decision rights and required controls are unclear, teams improvise, assurance evidence is missing, and stakeholder confidence drops. Fixing the root cause requires defining and operationalising governance, not just addressing individual symptoms.
The pattern of unapproved use, absent HITL accountability, and no audit trail indicates governance was not designed and embedded via an AI Assistance Plan.
Topic: Roles, Responsibilities, and Accountabilities
A programme team uses a generative AI tool to draft the executive summary and key claims for a Board paper requesting approval to proceed to procurement (a go/no-go decision). The organisation requires a clear audit trail showing who approved each claim and what evidence was used.
Which action best defines HITL responsibility for reviewing and signing off this AI-assisted deliverable?
Best answer: D
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: For AI-assisted outputs used to support a high-impact go/no-go decision, AIPGF expects explicit Human-in-the-Loop accountability: a named human with decision rights must review, adjust as needed, and approve the final content. Because the organisation requires strict auditability, the approval and supporting evidence should be captured in governance records (e.g., decision log and the AI Assistance Plan’s defined sign-off points).
HITL accountability in AIPGF means AI can assist, but a designated human remains responsible for the delivered content and its consequences. When an AI-assisted artefact is used to influence a high-impact decision (such as a Board go/no-go) and the organisation requires auditability, governance must make the approval chain explicit.
The appropriate control is to:
- name an accountable human approver with decision rights over the paper's claims;
- have them review and adjust the AI-drafted content, checking the evidence behind each claim;
- record the approval, rationale, and supporting evidence in governance records (e.g., the decision log and the AI Assistance Plan's defined sign-off points).
This preserves transparency and accountability while still enabling AI assistance; “someone else” (tool vendor or general team ownership) cannot replace accountable human sign-off.
A high-impact, auditable decision requires a clearly accountable human sign-off with recorded rationale and evidence, not an AI tool or delegated “ownership” without approval traceability.
Topic: Framework Structure and Controls
You are the PM for a programme that wants to use a generative AI assistant to speed up delivery by drafting a business case, stakeholder communications, and weekly status reports. No AI Assistance Plan exists yet, and the sponsor has only said, “Use AI wherever it helps.”
To start creating the AI Assistance Plan, what should you ask/confirm FIRST?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: An AI Assistance Plan starts by defining the objectives and scope of AI assistance: what work AI will support, what outputs it may draft, and where humans must review/approve (HITL). Without that boundary, you cannot proportionately set approved tools, data handling rules, oversight, or monitoring expectations.
The first step in creating an AI Assistance Plan is to make the AI use explicit and bounded: the intended objectives and the scope of AI assistance. In this scenario, “use AI wherever it helps” is too vague to govern, so you must confirm which project activities and deliverables AI will support and where human review/approval is mandatory (HITL).
Once the scope is clear, you can then derive the rest of the plan proportionately:
- which AI tools are approved;
- the data handling rules;
- the required oversight, including HITL decision points;
- how AI-assisted outputs will be monitored.
Anything that jumps straight to tool selection, custom builds, or monitoring details is premature without an agreed AI-assistance scope.
Clarifying the objectives and scope of AI assistance (including HITL decision points) is the foundation for selecting tools, setting data rules, and defining monitoring.
Topic: Foundations of AI Project Governance (AIPGF)
You are in the Activation stage of a programme preparing a client-facing benefits case. The team wants to use a generative AI tool to draft the “commercial assumptions” section.
Constraints:
- The team is considering a public AI tool that is not enterprise-approved.
- The section draws on commercially sensitive pricing and contract terms.
- The output is client-facing and must pass an assurance gate.
What is the BEST next action?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The situation highlights two key AI output risks: confidentiality/IP leakage from using an unapproved public tool and hallucination in client-facing commercial claims. In Activation, the right governance response is to document and control AI use through an AI Assistance Plan and to enforce HITL review by an accountable SME, producing evidence for assurance at the gate.
AI-assisted drafting of commercial assumptions creates high-impact risks: (1) hallucinated or biased assumptions that could mislead the client, and (2) confidentiality/IP leakage if sensitive pricing or contract terms are entered into an unapproved tool. In the Activation stage, governance should operationalise controls, decision rights, and evidence for assurance.
The best next action is to update and apply the AI Assistance Plan to ensure:
- only approved enterprise tooling is used for the drafting;
- data handling rules keep sensitive pricing and contract terms out of unapproved tools;
- an accountable SME performs HITL review of the commercial assumptions, with evidence captured for the assurance gate.
This builds stakeholder trust while keeping delivery moving under controlled, accountable use.
It addresses confidentiality and hallucination risks by using approved tooling, documenting controls, and enforcing HITL accountability before the governance gate.
Topic: Framework Structure and Controls
In the APMG AI Project Governance Framework (AIPGF), which term best describes the governance artifact that documents the objectives and scope of AI use in a project, the approved AI tools, data handling rules, required oversight (including Human-in-the-Loop), and the approach to monitoring AI-assisted outputs?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: An AI Assistance Plan is the AIPGF governance artifact that sets the boundaries and controls for AI use in an AI-assisted project. It clarifies what AI will be used for, which tools are approved, what data may be used, and how humans will review and monitor AI-assisted outputs across the work.
The core concept is that AI-assisted work needs explicit, auditable controls beyond a normal project plan because AI outputs can be non-deterministic and may introduce confidentiality, privacy, bias, or hallucination risks. In AIPGF, the AI Assistance Plan is the primary artifact used to govern AI use on a project by documenting, in one place, the intended objectives and scope for AI assistance, the approved tools and usage constraints, the data handling rules (including sensitive and proprietary information), the required Human-in-the-Loop oversight and decision rights, and how AI-assisted outputs will be monitored and reviewed over time. It provides a clear basis for stakeholder assurance and for consistent, repeatable use of AI within agreed guardrails.
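As a way to visualise what "in one place" means, here is a hypothetical outline of the plan's sections; the keys and example values are illustrative, not an official AIPGF template.

```python
# Illustrative skeleton of an AI Assistance Plan (contents are examples only).
AI_ASSISTANCE_PLAN = {
    "objectives_and_scope": ["which activities AI supports", "which outputs it may draft"],
    "approved_tools": ["enterprise AI assistant only"],
    "data_handling": {"classification": "Confidential",
                      "prohibited_inputs": ["PII", "client identifiers"]},
    "oversight": {"hitl_approvers": ["PM", "Risk Owner"],
                  "decision_rights": "recorded in the decision log"},
    "monitoring": {"logging": "prompts and outputs", "cadence": "periodic sample review"},
}
```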
Key takeaway: it is about governing AI use and oversight, not merely planning the project or describing a model.
It is the artifact that defines how AI may be used, with approved tools, data rules, oversight, and monitoring expectations.
Topic: Roles, Responsibilities, and Accountabilities
A programme team uses a generative AI assistant to draft weekly highlight reports and risks. In the last month, the following occurred:
- an unapproved AI tool was used without sign-off;
- AI-drafted content was issued without HITL approval;
- prompts and outputs were not captured, leaving no audit evidence.
The PM says, “We already have a sponsor, PM, and PMO—so I assumed AI governance would be covered.”
What is the MOST likely underlying cause of this AI governance breakdown?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The symptoms point to missing accountability rather than a single operational lapse. In AIPGF, AI governance should be integrated into existing project roles by defining decision rights for AI tool approval, HITL sign-off, and audit evidence (e.g., prompt/output capture). When those responsibilities are not explicitly assigned, teams improvise, controls are skipped, and trust and benefits erode.
This is a roles-and-accountabilities failure: the programme has standard roles (sponsor, PM, PMO) but has not explicitly embedded AI governance responsibilities into them. In AIPGF terms, AI-assisted outputs still need clear decision rights and accountable approvers (HITL), plus agreed controls for transparency and assurance such as documenting prompts/outputs and enforcing approved-tool use.
A practical fix is to update existing governance artifacts (e.g., an AI Assistance Plan and RACI/decision log) so that:
- someone is accountable for approving AI tools and enforcing approved-tool use;
- named approvers hold HITL sign-off for AI-assisted outputs;
- prompt/output capture and other audit evidence have an explicit owner.
Focusing only on one symptom (like prompt storage) won’t resolve the broader accountability gap.
Without explicit accountability (e.g., PM/sponsor/PMO) for AI tool use, HITL sign-off, and recordkeeping, these controls predictably lapse.
Topic: Lifecycle Governance (Initiate to Operate)
During the Foundation stage of a transformation programme, the PMO starts using a generative AI tool to draft project deliverables (communications, RAID items, and a procurement approach). Within a month:
- team members paste confidential material into an unapproved public tool;
- the tool is blocked, and the expected benefits evaporate;
- stakeholders lose trust when the practice is discovered.
Which is the most likely underlying cause of this governance breakdown?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: The earliest causal failure is not establishing whether the programme’s data was available, fit-for-purpose, and legally/contractually permitted for the chosen AI usage. That gap leads directly to people pasting confidential material into unapproved tools and then losing stakeholder trust when it is discovered. In AIPGF, this should be addressed in Foundation via defined constraints and approvals for AI-assisted work.
In the AIPGF Foundation stage, AI-assisted work should only start after confirming the intended data sources are (1) available and of sufficient quality for the use case, and (2) permitted for that use under confidentiality, privacy, IP, and contract terms. If data rights and classification are not assessed and translated into clear rules (e.g., what can/cannot be entered into AI tools, and which tools are approved), teams often improvise with convenient public tools and sensitive inputs. That typically causes the downstream symptoms seen here: the tool gets blocked, benefits evaporate, stakeholders lose trust, and auditability/HITL approvals become hard to demonstrate. The core fix is to establish data constraints and permissions up front and bake them into the AI Assistance Plan and approval workflow.
Without confirming what data can be used and under what rights, teams default to unsafe tools and inputs, triggering the other failures.
Topic: Assurance, Metrics, and Continuous Improvement
A programme is using a generative AI assistant to draft stakeholder communications and meeting minutes. A maturity assessment in the Evaluation stage produced the draft improvement roadmap below.
Improvement roadmap (draft)
Finding: Low Transparency (no record of prompts, versions, or HITL reviews)
Action: Create AI Output Register + HITL review checklist
Trigger: Before expanding AI use to external-facing comms
Owner: TBD
Due date: TBD
Assurance checkpoint: Activation Gate in 4 weeks
Status: Proposed
Which next action best interprets what is missing, based on the exhibit?
Best answer: B
What this tests: Assurance, Metrics, and Continuous Improvement
Explanation: The exhibit shows a clear governance improvement action but leaves Owner and Due date as TBD. Under AIPGF continuous improvement, actions from assessments need explicit accountability and a timeline so progress can be monitored and verified. With an Activation Gate in 4 weeks, the action should be time-bound to that checkpoint.
In AIPGF assurance and continuous improvement, assessment findings should translate into an improvement roadmap where each action is actionable, measurable, and governable. That requires (at minimum) a named accountable owner (role or person with decision rights to implement the control) and a target date that aligns with relevant gates or assurance checkpoints.
Here, the programme plans to expand AI use to external-facing communications, while the finding highlights a Transparency control gap (no traceability of prompts/versions/HITL reviews). Because an Activation Gate is scheduled in 4 weeks, the improvement action should be assigned and scheduled to be completed (or demonstrably underway with evidence) before that gate so it can be assured before higher-exposure use.
Deferring or outsourcing accountability weakens governance and makes assurance untestable.
Improvement actions must be owned and time-bound so they can be assured and tracked before the next governance checkpoint.
Topic: Principles for Responsible and Trustworthy AI
A project team proposes using a generative AI tool to draft a stakeholder-facing “benefits realisation plan” and meeting summaries.
Exhibit: Proposed AI usage (excerpt)
- Users: PMO analysts
- Input: project status notes and action items
- Output: draft benefits plan + meeting minutes
- Controls proposed: “PM reviews before sending”
- Not addressed: disclosure to recipients, rationale/traceability of AI text, feedback loop to improve prompts/controls if errors recur
Which evaluation approach best applies Human-Centricity, Transparency, and Adaptability together to decide whether to approve this AI usage?
Best answer: D
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Approving AI-assisted deliverables needs a balanced check against all three AIPGF principles. Human-Centricity is addressed by defined HITL review focused on stakeholder impact, Transparency by disclosing AI assistance and maintaining traceability, and Adaptability by monitoring outcomes and improving controls when issues emerge. The best evaluation approach explicitly covers all three.
Using the three AIPGF principles together means you do not approve AI usage based on a single safeguard (like "PM reviews") or a single concern (like disclosure). In this scenario, the AI produces stakeholder-facing deliverables, so governance should evaluate the approach by ensuring:
- Human-Centricity: defined HITL review focused on stakeholder impact;
- Transparency: disclosure of AI assistance to recipients, plus rationale/traceability of AI text;
- Adaptability: a feedback loop to monitor outcomes and improve prompts and controls when errors recur.
An approval decision is strongest when these are combined into a coherent assurance approach rather than treated as separate, optional add-ons.
It combines user impact oversight (HITL), openness about AI use and traceability, and a plan to adapt based on performance and feedback.
Topic: Roles, Responsibilities, and Accountabilities
A small agency is using a generative AI tool to draft the requirements baseline for a public-facing benefits system. Because the team is small, Alex is both the Project Manager and the designated AI Governance Lead.
The sponsor states that, to “keep things fast,” Alex should also approve the AI-assisted requirements baseline for release into configuration control. Stakeholders have low trust in AI outputs and the project must be auditable.
What is the MOST appropriate AIPGF role action to allow one person to hold multiple roles while avoiding conflicts of interest?
Best answer: A
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: AIPGF allows one person to fulfil multiple roles, but conflicts of interest must be actively managed through clear decision rights, transparency, and independent challenge. Here the output is a controlled baseline deliverable with low stakeholder trust and auditability needs, so approval must sit with an independent accountable role and be recorded for traceability.
When one person holds multiple governance and delivery roles, AIPGF focuses on making accountability explicit and preventing “self-approval” on high-impact, low-trust, or highly auditable AI-assisted outputs. In this scenario, Alex would be overseeing AI use and also benefiting from rapid acceptance of an AI-assisted baseline, so approving the baseline creates a conflict of interest.
The proportionate control is to:
- assign approval of the baseline to an independent accountable role (not Alex);
- make the split of decision rights explicit and visible to stakeholders;
- record the approval and its evidence so the release is traceable.
Capturing prompts or relying on tool logs supports transparency, but it does not replace independent accountability for the approval decision.
Separating approval from creation/governance and recording it preserves auditability while still allowing Alex to hold multiple roles.
Topic: Principles for Responsible and Trustworthy AI
A programme office wants to use an AI assistant to triage and route change requests. The vendor demo includes an “auto-approve” feature for low-effort changes, but some changes can affect customer outcomes and regulatory commitments.
Before go-live, the assurance team asks for evidence that human-centricity overrides automation for high-impact approvals (i.e., a clear Human-in-the-Loop control with accountable decision rights).
Which artifact/evidence best validates this control is defined and ready to operate?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: To show human-centricity overriding automation, the evidence must document a concrete Human-in-the-Loop control for high-impact approvals, including who is accountable and what cannot be automated. The AI Assistance Plan is the most direct artifact for defining AI use boundaries, required human sign-offs, and how the control will be evidenced and audited at go-live.
Human-centricity in AIPGF means automation must not displace human judgment where decisions have a material impact on people or obligations, or create significant risk exposure. In this scenario, “auto-approve” is only acceptable when governance explicitly constrains it and ensures a Human-in-the-Loop approval for high-impact changes.
The strongest validation artifact is one that operationalizes the control, for example by documenting which change categories can never be auto-approved, who holds accountable decision rights for high-impact approvals, and what evidence of each human sign-off will be retained for audit.
An AI Assistance Plan is designed to set these boundaries and checkpoints for AI-assisted work, demonstrating readiness to operate the control rather than merely describing outcomes or risks.
An AI Assistance Plan can explicitly prohibit automated approval for high-impact changes and define the human approval checkpoints, decision rights, and audit evidence.
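As a non-authoritative illustration of how such plan boundaries could be made checkable, the sketch below encodes a hypothetical auto-approve boundary and HITL checkpoint in Python; every field, impact level, and role name is invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical excerpt of an AI Assistance Plan expressed as data so the
# Human-in-the-Loop boundary can be checked automatically. All names are
# illustrative, not an official AIPGF schema.
AI_ASSISTANCE_PLAN = {
    "auto_approve_allowed": {"low"},         # only low-impact changes may auto-approve
    "hitl_required": {"medium", "high"},     # impact levels needing a named approver
    "accountable_role": "Change Authority",  # decision rights for high-impact approvals
}

@dataclass
class ChangeRequest:
    ref: str
    impact: str                     # "low", "medium", or "high"
    approver: Optional[str] = None  # named human approver, if any

def route(cr: ChangeRequest) -> str:
    """Apply the plan: block auto-approval where HITL sign-off is required."""
    if cr.impact in AI_ASSISTANCE_PLAN["hitl_required"]:
        if cr.approver is None:
            return f"{cr.ref}: BLOCKED - human approval required before release"
        return f"{cr.ref}: approved by {cr.approver} ({AI_ASSISTANCE_PLAN['accountable_role']})"
    return f"{cr.ref}: auto-approved (low impact, within plan boundary)"

print(route(ChangeRequest("CR-101", "high")))
print(route(ChangeRequest("CR-102", "low")))
print(route(ChangeRequest("CR-103", "high", approver="M. Chen")))
```

The point of the sketch is that the plan states the boundary explicitly, so assurance can test it before go-live rather than infer it afterwards.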
Topic: Values, Behaviours, and Culture
A project manager uses an AI assistant to draft a benefits forecast and a delivery plan. The tool recommends dropping several validation activities to meet an aggressive date, but the manager considers organisational risk appetite and past audit findings, consults the QA lead, and revises the plan to keep the controls while re-sequencing lower-value tasks instead.
Which AIPGF Core Value is being demonstrated most directly?
Best answer: D
What this tests: Values, Behaviours, and Culture
Explanation: This behaviour reflects Sensibility: using AI-generated suggestions as input, then applying human judgment, context, and experience to reach a practical and responsible project decision. The manager weighs risk appetite and evidence from similar work, and adjusts the AI output to fit real constraints and assurance needs.
Sensibility in the AIPGF Core Values is about treating AI outputs as decision support, not decision authority. In AI-assisted project work, AI can propose plans, forecasts, or content that looks plausible but may be misaligned with the project’s context (risk tolerance, assurance expectations, stakeholder impact, and organisational history). Demonstrating Sensibility means a human evaluates the AI suggestion, brings domain expertise and situational awareness, and then modifies or rejects the output to deliver an outcome that is realistic and responsible.
A good practical pattern is to treat the AI suggestion as one input, test it against risk appetite and past audit findings, consult the relevant experts (here, the QA lead), and then revise the plan so the controls are preserved while lower-value tasks are re-sequenced.
This differs from simply making AI usage visible or tailoring governance structures.
It shows balanced use of AI outputs with human judgment and project context rather than accepting recommendations blindly.
Topic: Principles for Responsible and Trustworthy AI
During Activation, a programme allows an AI writing assistant to draft sections of the business case and RAID log updates. The AI Assistance Plan requires storing the final documents in the repository but does not require recording prompts, model/version used, sources provided to the tool, or the named human approver for each AI-assisted change.
Two weeks later, a major cost assumption in the approved business case is found to be wrong, and the steering group requests an immediate review of how the figure was generated and approved.
What is the most likely near-term impact of this transparency omission?
Best answer: D
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Transparency enables audits and post-incident investigations by making AI-assisted decisions reproducible and attributable. Here, missing records of prompts, sources, model/version, and human approvals means the steering group’s review lacks evidence to trace how the incorrect assumption entered an approved baseline. The immediate consequence is reduced auditability and weakened assurance.
Under the Transparency principle, AI-assisted outputs should be traceable: what was asked (prompts), what inputs/sources were used, which model/version produced the draft, what changed, and which human approved it (HITL). In the scenario, the programme only retained final documents, so when an incident occurs there is no reliable chain of evidence to recreate the decision path or validate that appropriate review happened. This directly undermines audit and review activities and slows or blocks post-incident investigation, increasing near-term risk exposure and eroding confidence in the governance process. The key takeaway is that storing outputs alone is not sufficient; transparency needs decision provenance and approval records.
Without an audit trail (prompts, sources, versions, approvals), the team cannot evidence how the AI-assisted content was produced or validated.
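One way to picture the missing evidence is as a structured provenance record kept alongside each AI-assisted change. The sketch below is a hypothetical schema, not an official AIPGF artifact; the field names are illustrative, but they mirror the evidence the steering group asked for.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical provenance record for one AI-assisted change; the fields mirror
# the missing evidence in the scenario, but the schema itself is invented.
@dataclass
class AIProvenanceRecord:
    deliverable: str     # e.g. the business case section that changed
    prompt: str          # what was asked of the tool
    sources: List[str]   # inputs/source documents supplied to the tool
    model_version: str   # which model/version produced the draft
    human_approver: str  # named HITL approver (accountability)
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIProvenanceRecord(
    deliverable="business-case-v3, cost assumptions",
    prompt="Draft the cost-assumption section from the attached estimates",
    sources=["estimates-2024Q2.xlsx"],
    model_version="internal-llm-1.4",
    human_approver="J. Rivera",
)

# A retained record like this lets reviewers trace how a figure entered an
# approved baseline and confirm that the required human review happened.
print(record)
```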
Topic: Lifecycle Governance (Initiate to Operate)
A programme is in Stage 2 (Activation) using AI to draft status reports and a draft stakeholder comms pack. In Stage 1 (Foundation), the governance baseline was agreed and recorded in the AI Assistance Plan: approved internal LLM only, prompts stored in a shared repository, and HITL sign-off required before any external release.
Midway through Activation, the team expanded scope to include drafting customer-facing FAQs and switched to a public cloud LLM using a new export of service-desk tickets as source material. Two sprints later, assurance finds: no HITL sign-off evidence, prompts are undocumented, a business owner reports “we don’t trust these outputs,” an unapproved tool is in use, and expected time savings have not materialised.
Which is the most likely underlying cause of this governance breakdown?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In AIPGF, material changes during Activation—such as switching tools, introducing new data sources, or expanding to external-facing deliverables—should trigger revisiting the Stage 1 governance baseline. Without re-approving updated decision rights, controls, and HITL checkpoints, predictable symptoms appear: unapproved tool use, missing evidence, and reduced trust and benefits.
The core concept is cross-stage governance: Activation work must stay within the Stage 1 (Foundation) decisions captured in artifacts like the AI Assistance Plan, decision log, and risk register. When the team changed all three governance-critical elements (scope to customer-facing outputs, tool to a public LLM, and data to service-desk tickets), they crossed the boundaries of the original approvals and controls.
That kind of change should trigger a targeted “return to Foundation” to update the AI Assistance Plan, re-approve the new tool and data sources, and reconfirm decision rights and HITL checkpoints before Activation work continues.
If this revisit doesn’t happen, the programme lacks enforceable controls and auditability, which then shows up as missing HITL sign-off, undocumented prompts, stakeholder distrust, and benefits shortfall.
A material change in scope, tool, and data should have triggered a return to the Foundation baseline (AI Assistance Plan/decision rights) to update controls and approvals.
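A hedged sketch of how such a trigger could work in practice: compare current usage against the recorded Foundation baseline and flag any governance-critical drift. The keys and values below are invented to mirror the scenario, not an AIPGF-specified mechanism.

```python
# Hypothetical drift check comparing Activation-stage usage against the
# Stage 1 (Foundation) baseline recorded in the AI Assistance Plan.
BASELINE = {
    "tool": "approved-internal-llm",
    "data_sources": {"status-reports"},
    "audiences": {"internal"},
}

def material_changes(current: dict) -> list:
    """List governance-critical elements that have drifted from the baseline."""
    drift = []
    if current["tool"] != BASELINE["tool"]:
        drift.append("unapproved tool in use")
    if not current["data_sources"] <= BASELINE["data_sources"]:
        drift.append("new data source introduced")
    if not current["audiences"] <= BASELINE["audiences"]:
        drift.append("new external-facing audience")
    return drift

# The scenario's mid-Activation state: all three elements changed at once.
current_use = {
    "tool": "public-cloud-llm",
    "data_sources": {"status-reports", "service-desk-tickets"},
    "audiences": {"internal", "customer-facing"},
}

drift = material_changes(current_use)
if drift:
    print("Return to Foundation and re-approve:", "; ".join(drift))
```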
Topic: AI in Projects and Organizations (Context)
A project team wants to use a public generative AI tool to summarise customer-support tickets into user stories during the Activation stage. The tickets contain names, contact details, and complaint narratives (personal data). The tool’s standard terms state prompts may be processed outside the organisation’s region.
The organisation’s governance requires formal approval before any new third-party processing of personal data.
Which stakeholder action is MOST appropriate to address the primary concern?
Best answer: D
What this tests: AI in Projects and Organizations (Context)
Explanation: Because personal data will be processed by an external AI service, the dominant decision context is privacy and lawful/acceptable data handling. AIPGF expects the PM to involve the stakeholder with decision rights for third-party processing of personal data, ensuring appropriate approvals and documented controls before activation.
In AI-assisted projects, the key stakeholder to engage depends on the dominant risk factor. Here, the decisive factor is third-party processing of personal data (including potential cross-region processing), which triggers formal privacy/compliance approval. Under AIPGF, the PM should route this through legal/compliance so decision rights are exercised and evidence is captured before the team uses the tool.
Practical role actions include routing the request to legal/compliance before any tickets are shared with the tool, obtaining and recording formal approval for the third-party processing, and documenting the agreed controls so the evidence exists before activation.
Technical security testing or user validation may still be useful, but they do not resolve the approval requirement for personal-data processing.
Legal/compliance is accountable for approving new third-party personal-data processing and required assurances before use.
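Purely as an illustration of the gate this implies, the sketch below blocks activation of a hypothetical tool until the compliance approval is evidenced; the flags and approval labels are invented for the example.

```python
# Hypothetical pre-activation gate: a new AI tool that sends personal data to
# a third party cannot be used until the compliance decision right has been
# exercised and evidenced. All names and flags are illustrative.
def may_activate(tool: dict, approvals: set) -> bool:
    """True only if the required approval exists for this tool's data handling."""
    needs_privacy_signoff = tool["processes_personal_data"] and tool["third_party"]
    return not needs_privacy_signoff or "legal_compliance" in approvals

public_summariser = {
    "name": "public-genai-ticket-summariser",
    "processes_personal_data": True,  # ticket names, contacts, narratives
    "third_party": True,              # prompts may be processed off-region
}

print(may_activate(public_summariser, approvals=set()))                 # False: blocked
print(may_activate(public_summariser, approvals={"legal_compliance"}))  # True: approval evidenced
```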
Topic: Principles for Responsible and Trustworthy AI
A project team is introducing an AI assistant to draft status reports and recommend schedule recovery actions. The project manager requires that the assistant's outputs remain advisory only, that team members are trained on the tool's limitations, and that a named human reviews and can override any AI-generated draft or recommendation before it is used.
Which AIPGF principle is most directly being applied?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: The described controls ensure humans remain in control of outcomes: the AI is advisory, people are trained to use it appropriately, and there is explicit human review and override. This is the essence of the Human-Centricity principle in AIPGF—designing and governing AI assistance to enhance human decision-making while keeping accountable roles clearly responsible.
Human-Centricity in AIPGF means AI assistance is governed to preserve human agency and responsibility: people understand how to use the AI, remain accountable for decisions, and can question, override, or stop AI outputs when they do not fit stakeholder needs. In the scenario, training on limitations reduces inappropriate reliance, and a defined HITL approach ensures AI-generated drafts and recommendations are reviewed and approved by accountable humans before use. This keeps the project’s decisions, communications, and impacts aligned with human judgement rather than automated optimisation.
A close-but-different concept is Transparency, which focuses more on explainability and disclosure than on decision ownership and human control.
It prioritises human agency, accountability, and HITL oversight so AI supports—rather than replaces—responsible project decision-making.
Topic: Framework Structure and Controls
A project team wants to introduce a generative AI assistant to draft and summarise contract variations. The tool has not yet been used on project data. The sponsor requires strict auditability because internal audit will review how AI-assisted content was produced and approved.
Which action best applies the AIPGF life cycle stages and their intent?
Best answer: A
What this tests: Framework Structure and Controls
Explanation: Because the AI assistant has not yet been used, AIPGF expects governance to be set up first. The Foundation stage’s intent is to define and agree how AI will be used, who is accountable for approvals, and what controls and evidence are needed. This upfront baseline enables trustworthy, auditable AI-assisted delivery in later stages.
AIPGF separates governance into three life cycle stages so controls are applied at the right time. In this scenario the decisive factor is that AI use is about to begin and strict auditability is required, so the team should be in Foundation.
HITL during Activation supports quality, but it should not substitute for the upfront governance baseline set in Foundation.
Foundation establishes the governance baseline and assurance expectations before activating AI use, which is critical when auditability is required.
Topic: Lifecycle Governance (Initiate to Operate)
During Foundation, a programme team wants an AI tool to help draft project deliverables (PID sections, supplier evaluation notes, and stakeholder updates). Drafts will include commercially sensitive information and some personal data from stakeholder registers.
Governance constraints set by the organisation: no commercially sensitive or personal data may be sent to uncontrolled third-party services; AI-assisted work must be evidenced through auditable records (prompt/output capture, retention, and access control); and a Human-in-the-Loop must approve deliverables before external release.
Which tool-selection approach should the project NOT use?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Tool selection in AIPGF Foundation must satisfy constraints and enable assurance, not just improve speed. Where deliverables contain sensitive or personal data, governance requires controlled services, audit trails, and clear HITL approvals before release. A consumer chatbot used informally is the clearest anti-pattern because it undermines confidentiality and evidencing of AI-assisted outputs.
In the AIPGF Foundation stage, selecting AI tools is a governance decision: the tool must fit the work while meeting constraints for confidentiality, auditability, and accountable approval. Because AI outputs can be non-deterministic, assurance depends on being able to evidence how outputs were produced (prompt/output capture, retention, access control) and to apply Human-in-the-Loop review before anything is issued externally.
A public consumer chatbot used without organisational controls is typically an uncontrolled third-party service: it can create data leakage and makes it hard to retain reliable records for assurance. By contrast, contracted enterprise services, self-hosted deployments, and workflows that include redaction plus an AI Assistance Plan and explicit human approvals are all compatible with the stated constraints.
This breaches the constraint against sending sensitive data to uncontrolled third parties and provides weak auditability for assurance.
Topic: AI in Projects and Organizations (Context)
A project team has 2,000 free-text employee survey comments to inform a change management plan. The comments may include personal identifiers. The sponsor wants a fast, consistent output: (1) key themes with representative (anonymised) quotes and (2) a draft narrative for the change plan. The organisation allows AI tools only if an AI Assistance Plan is in place and a Human-in-the-Loop (HITL) approves any AI-assisted content before release.
Which approach best balances effective use of the right AI discipline with proportionate governance controls?
Best answer: A
What this tests: AI in Projects and Organizations (Context)
Explanation: Because the required outputs include drafting a narrative and summarising large volumes of text, generative AI is the most suitable discipline. Using it on anonymised/redacted inputs and keeping a HITL approval step aligns to Human-Centricity and Transparency while delivering efficiency and consistency. Documenting the approach in an AI Assistance Plan makes the controls auditable and repeatable.
The core distinction is task fit: machine learning is typically used to learn patterns for classification or prediction, NLP focuses on analysing and structuring language, and generative AI creates new text (drafts, summaries, rewrites). Here, the sponsor explicitly needs a draft narrative and consistent summaries, which is a generative AI use case.
To keep this ethical and governable in project work, the team should apply the proportionate controls already required by the organisation: redact or anonymise personal identifiers before comments are input, document the tool, boundaries, and checkpoints in an AI Assistance Plan, and route the AI-drafted themes and narrative through HITL approval before release.
NLP-only clustering can help, but it does not meet the “draft narrative” need as efficiently.
Generative AI best fits drafting the narrative and summaries, and redaction plus HITL approval provides proportionate governance for privacy and accountability.
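As a rough sketch of the redaction step only (a real workflow would rely on a vetted PII-detection service rather than hand-rolled patterns), the Python below shows the shape of pre-input redaction; the patterns and placeholders are illustrative.

```python
import re

# Minimal redaction sketch, illustrative only: strip common personal
# identifiers from free-text comments before they reach a generative AI tool.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(comment: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        comment = pattern.sub(f"[{label.upper()} REDACTED]", comment)
    return comment

sample = "Contact me at jane.doe@example.com or +44 20 7946 0958 about the rollout."
print(redact(sample))
# -> Contact me at [EMAIL REDACTED] or [PHONE REDACTED] about the rollout.
```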
For AIPGF Foundation, a weak score often means roles, lifecycle gates, or control purposes are blurring together. Fix that pattern before retaking the same public set.
This page gives one complete public diagnostic. PM Mastery adds the larger AIPGF Foundation bank, module drills, mixed timed mocks, progress tracking, and explanations for governance roles, stage gates, assurance evidence, and responsible-AI controls.
Retake only after reviewing every miss and drilling the weakest modules. A fast retake mostly measures answer memory; a delayed retake after focused repair is a better check of governance reasoning.
Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Foundation guide on PMExams.com for concept review, then return here for PM Mastery practice.