Try 10 focused questions on Foundations of AI Project Governance for the AIPGF Foundation exam, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Foundation |
| Topic area | Foundations of AI Project Governance (AIPGF) |
| Blueprint weight | 12% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Foundations of AI Project Governance (AIPGF) for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 12% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Foundations of AI Project Governance (AIPGF)
A PMO has been piloting a generative AI assistant to draft weekly project status reports using internal data. A near-miss occurred when a user pasted client-sensitive content into a prompt, and the draft report also included unverified statements.
The steering committee will only scale the AI assistant if they can reduce operational risk, stay aligned to organisational policies, and maintain stakeholder confidence. What is the best next step in the AIPGF flow?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: Because the decision to scale depends on risk mitigation, policy alignment, and stakeholder trust, the programme should first formalise governance in the Foundation stage. That means agreeing the rationale for adopting AIPGF, defining scope and decision rights for AI-assisted outputs, and producing an approved AI Assistance Plan before wider Activation.
The primary drivers for adopting AIPGF are to reduce AI-related uncertainty and risk (e.g., confidentiality leaks and hallucinations), align delivery with organisational policies and assurance expectations, and strengthen stakeholder trust so AI-assisted outputs can be used with confidence. In this scenario, scaling is explicitly conditional on those drivers, so the next step is to complete Foundation work: define the intended AI assistance, responsibilities and approvals (including HITL), and document controls and evidence in an AI Assistance Plan ready to operationalise in Activation. Only after that foundation is agreed should the organisation expand use, monitor, and later evaluate benefits and lessons learned.
Key takeaway: do not scale or “prove value” further until governance prerequisites for safe and trusted use are in place.
Formalising the drivers and controls in Foundation provides the basis for risk mitigation, compliance alignment, and stakeholder trust before Activation.
Topic: Foundations of AI Project Governance (AIPGF)
A project is run using an initiate/plan/deliver/close life cycle with formal gate reviews at the end of initiate and plan. Midway through delivery, the team starts using a generative AI tool to draft requirements and test scripts.
Symptoms now reported: no documented prompts or AI output assumptions, no HITL sign-off on AI-assisted deliverables, some team members used an unapproved tool despite an existing corporate approved-tool list, stakeholders say outputs are “untrustworthy,” and expected time savings are not being realised or tracked.
What is the most likely underlying cause?
Best answer: B
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The pattern of issues across approvals, traceability, tool use, trust, and benefits indicates a systemic governance integration gap rather than a single execution mistake. In AIPGF, mapping Foundation, Activation, and Evaluation activities into the project’s initiate/plan/deliver/close stages ensures the right artefacts, decision rights, and HITL controls happen at defined points. When that mapping is missing, teams adopt AI ad hoc and bypass intended gate controls.
This is a governance integration failure: the project’s method has gate reviews, but AIPGF was not embedded into them, so AI use began in delivery without the expected governance “hooks.” A practical mapping is:
- Foundation activities (rationale, scope, decision rights, and an approved AI Assistance Plan) completed during initiate/plan and verified at those gate reviews
- Activation controls (approved tools only, documented prompts and output assumptions, HITL sign-off on AI-assisted deliverables) operating throughout deliver
- Evaluation activities (tracking expected time savings and benefits, capturing lessons learned) scheduled into close
Because these checkpoints weren’t mapped, symptoms appear across multiple control areas at once, not just in documentation or tooling.
Without mapping AIPGF to initiate/plan/deliver/close, the project never embedded required AI planning, approval, and evaluation controls into its gate reviews and work processes.
Topic: Foundations of AI Project Governance (AIPGF)
In the APMG AI Project Governance Framework (AIPGF), which statement best describes a typical exit criterion for moving from the Foundation stage to the Activation stage in an AI-assisted project?
Best answer: B
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: Progressing from Foundation to Activation requires evidence that AI use is ready to be applied under control. A typical exit criterion is an approved AI Assistance Plan that defines the intended AI use, the HITL approach, key risks/controls, and who is accountable for approving AI-assisted outputs. This ensures Activation begins with clear governance rather than ad hoc tool use.
AIPGF stage progression should be gated by practical entry/exit criteria that show governance readiness. The Foundation stage establishes how AI will be used and controlled; the Activation stage is where AI assistance is actually applied to produce project work products. Therefore, a typical Foundation exit criterion is approval of an AI Assistance Plan (or equivalent) that sets out the use cases, defines Human-in-the-Loop checkpoints, records decision rights/accountable approvers for AI-assisted outputs, and identifies key risks with agreed controls. This provides transparency and proportional assurance before wider use begins. Merely selecting a delivery method, deploying a tool, or claiming benefits is not sufficient evidence that AI-assisted work will be governed and auditable.
Activation should start only after the AI use is governed: planned, risk-assessed, and accountabilities agreed.
Topic: Foundations of AI Project Governance (AIPGF)
A small internal project (6 weeks, 4-person team) is producing a communications pack for a system change. The team wants to use an organisation-approved generative AI tool to draft staff emails and meeting minutes using only non-sensitive internal context. Outputs will be sent to ~500 employees, and the sponsor wants “lightweight” governance but insists on clear accountability.
Which action SHOULD the team AVOID when tailoring AIPGF activities and artifacts for this project?
Best answer: A
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: AIPGF tailoring means scaling governance to the context while preserving Human-in-the-Loop and clear decision rights. Even in a small, low-sensitivity project, communications to a large internal audience can create trust and reputational impacts. Lightweight artifacts and controls are appropriate, but pre-release human approval must remain explicit and accountable.
The core tailoring concept in AIPGF is proportional governance: keep activities and artifacts as light as the project allows, but do not dilute HITL or accountability for AI-assisted outputs. In this scenario, the project is small and uses non-sensitive data, so a concise AI Assistance Plan and a single risk entry can be sufficient evidence of controlled AI use. However, because the AI drafts content that will be distributed widely, there must be an identified human who is accountable for approving the final versions before release.
Practical, proportionate tailoring here includes:
- a concise AI Assistance Plan covering the approved tool, the permitted non-sensitive inputs, and the intended drafting uses
- a single risk-register entry for AI-related risks
- a named human accountable for approving the final emails and minutes before they are sent to staff
Automation may support drafting and formatting, but release decisions must remain human-owned.
It removes HITL and weakens accountability by releasing AI outputs before a named human approves them.
Topic: Foundations of AI Project Governance (AIPGF)
In the APMG AI Project Governance Framework (AIPGF), which definition best describes an AI Assistance Plan?
Best answer: A
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: An AI Assistance Plan is structured governance: it clarifies what AI can be used for, under what constraints, and who remains accountable for AI-assisted outputs. By setting decision rights, data boundaries, and assurance expectations, it keeps governance focused on controlled AI use and stakeholder trust rather than bureaucracy.
In AIPGF, an AI Assistance Plan is a practical governance artifact used to integrate AI into delivery in a controlled, scalable way. It makes AI usage transparent and accountable by documenting where AI will support work (e.g., drafting deliverables), what inputs are permitted (privacy/IP boundaries), what controls apply (such as HITL review and evidence retention), and who has approval authority for AI-assisted outputs. This turns “AI use” into governed project activity with clear outcomes: reduced unmanaged risk, higher trust, and auditable decisions. It is not meant to over-document; it is tailored proportionately to the project’s risk and complexity so governance remains outcomes-focused rather than bureaucratic.
It is an outcomes-focused control document that makes AI use explicit, accountable, and auditable rather than adding unnecessary process.
Topic: Foundations of AI Project Governance (AIPGF)
You are the PM for an internal transformation project. The team wants to use a generative AI tool to speed up early-stage deliverables: a draft project charter, an initial risk list, a stakeholder map, and a first set of status-update templates.
Your organisation’s AI governance requires: use of approved tools only, no client/personally identifiable data in prompts, Human-in-the-Loop (HITL) review before use, and traceability of significant AI-assisted decisions/outputs.
Which approach best balances ethical, efficient, and effective AI use with the required governance controls?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The best option uses AI for high-value assistance (drafting and structuring common project artifacts) while applying proportional governance controls: approved tools, protected data handling, transparency, and HITL review. It also ensures accountability by requiring review/approval before the AI-assisted outputs are baselined and used to guide the project.
In AI-assisted projects, common uses include drafting foundational documents (charter/business case sections), generating initial risk registers, mapping stakeholders, and producing communication templates. Because AI outputs can be non-deterministic and may introduce errors or bias, AIPGF expects controls that preserve trust and accountability.
A balanced approach in the Foundation stage is to:
- use only the organisation-approved tool and keep client and personally identifiable data out of prompts
- be transparent about which drafts are AI-assisted
- apply HITL review to every AI-assisted artifact before it is used
- record significant AI-assisted decisions and outputs so approvals remain traceable before baselining
This achieves efficiency without removing essential safeguards needed for reliable governance.
It enables rapid drafting while maintaining transparency, data protection, HITL assurance, and auditable approval/traceability for key project artifacts.
Topic: Foundations of AI Project Governance (AIPGF)
A programme is delivering a new internal HR portal. The team plans to use a generative AI tool to (1) draft weekly status reports and update the RAID log for the programme manager, and (2) draft employee onboarding microlearning text that will be published in the portal as part of the solution.
The sponsor asks for evidence that the team has correctly distinguished AI assistance in project management from AI used in project deliverables, and that appropriate governance can be applied before moving into Activation.
Which artifact best validates this readiness in context?
Best answer: B
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: The most direct validation is an artifact that records intended AI use-cases and separates AI used to support project management from AI that contributes to the solution’s deliverables. An AI Assistance Plan provides this classification and ties it to Human-in-the-Loop review and decision/approval points, demonstrating readiness to govern both kinds of AI use appropriately.
An AI-assisted project/programme uses AI tools to support project/programme management activities (e.g., drafting reports, updating logs, planning) while keeping accountability with humans. In contrast, AI used in project deliverables means AI-generated or AI-driven content/components are incorporated into what the project delivers to users (e.g., published training content, requirements embedded in the product, AI functionality).
The artifact that best validates the distinction is one that:
- records the intended AI use-cases explicitly
- classifies each use as project management assistance (status reports, RAID log updates) or solution content (the published onboarding microlearning text)
- ties each class to its HITL review, decision, and approval points
That enables proportional governance: lighter controls for PM assistance and stronger validation/assurance when AI influences deliverables.
It explicitly documents where AI supports management versus where AI outputs become deliverables, enabling proportional controls and approvals.
Topic: Foundations of AI Project Governance (AIPGF)
Which statement best defines an AI-assisted project/programme in the APMG AI Project Governance Framework (AIPGF)?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: In AIPGF, “AI-assisted” refers to using AI to support the work of delivering a project or programme—either by assisting project management tasks (planning, reporting, decision support) and/or by helping generate project outputs. The key distinction is that AI is providing assistance within delivery activities, and human accountability for approvals remains in place.
An AI-assisted project/programme is defined by how AI is used during delivery, not by whether the final product contains AI. Under AIPGF, AI assistance can apply to:
- project/programme management activities (planning, reporting, decision support)
- generating project outputs or artefacts (for example, draft deliverables)
This differs from a project whose purpose is to deliver an AI-enabled operational capability (an “AI product/system” outcome). AIPGF focuses governance on the risks introduced when AI contributes to PM decisions or project artefacts, while keeping human decision rights and approval responsibilities clear (HITL).
AIPGF uses “AI-assisted” for AI supporting PM activities and/or producing project artefacts/deliverables, without implying the project must deliver AI as the end product.
Topic: Foundations of AI Project Governance (AIPGF)
A PMO is piloting a generative AI assistant to draft weekly project status summaries from already-approved schedule and cost data. Outputs will be checked and edited by the project manager before publishing (HITL). No personal data or client-confidential information will be used, and the pilot will run for 6 weeks across three small internal projects. The organisation’s AI adoption maturity is low (first structured use).
Which governance approach SHOULD be AVOIDED when scaling AIPGF governance for this pilot?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: AIPGF expects governance depth to be proportionate to size, complexity, risk, and AI maturity. In a short, low-risk internal pilot with clear HITL review and non-sensitive inputs, governance should be lightweight but explicit. Requiring heavy, production-grade assurance before any pilot learning is an over-governance anti-pattern.
Scalable AI project governance tailors controls to the context rather than applying a one-size-fits-all regime. Here, the AI use is limited (drafting summaries), inputs are already-approved project data, and a human remains accountable for the final output (HITL). That supports a light Foundation/Activation setup (clear purpose, boundaries, decision rights, and basic risk controls) plus a planned Evaluation checkpoint to decide whether stronger assurance is needed.
Over-governing happens when high-cost, high-latency controls intended for high-risk or externally facing AI (e.g., independent audits and continuous monitoring) are imposed on a low-risk, time-boxed pilot, reducing learning and adoption without proportionate risk reduction. The key takeaway is to start with minimum sufficient governance and define escalation triggers as risk changes.
This imposes high-assurance controls disproportionate to a low-risk, short pilot with HITL review.
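The proportionality idea above (“minimum sufficient governance, with escalation triggers as risk changes”) can be sketched as a toy scoring rule. Everything here — the function name, the risk signals, the thresholds, and the tier labels — is an illustrative assumption for study purposes, not part of the AIPGF framework itself:

```python
# Illustrative sketch only: a toy rule for choosing governance depth
# from simple risk signals, showing "minimum sufficient governance"
# plus escalation as risk grows. Names and thresholds are assumptions.

def governance_level(sensitive_data: bool, external_audience: bool,
                     hitl_review: bool, duration_weeks: int) -> str:
    """Return a proportionate governance tier for an AI use case."""
    score = 0
    score += 2 if sensitive_data else 0      # confidentiality exposure
    score += 2 if external_audience else 0   # reputational exposure
    score += 0 if hitl_review else 2         # missing HITL raises risk
    score += 1 if duration_weeks > 12 else 0 # long-running use needs more assurance
    if score >= 4:
        return "high"    # e.g. independent audit, continuous monitoring
    if score >= 2:
        return "medium"  # e.g. AI Assistance Plan plus periodic checkpoints
    return "light"       # e.g. lightweight plan, HITL, evaluation checkpoint

# The scenario's 6-week internal pilot: non-sensitive inputs,
# internal audience, HITL in place -> light-touch governance.
print(governance_level(False, False, True, 6))
```

Applying the "high" tier's controls (audits, continuous monitoring) to an input that scores "light" is exactly the over-governance anti-pattern the question describes.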
Topic: Foundations of AI Project Governance (AIPGF)
A small internal project team wants to use a generative AI tool to speed up production of meeting minutes and a weekly RAID summary. In a Foundation workshop, they agreed the AI will only create drafts and a human will approve anything shared outside the team. The information security lead has added one constraint: “No personal data or client-identifiable details may be entered into the tool.”
The project manager now wants to start using the tool in the next sprint.
What is the best next step in the AIPGF flow?
Best answer: C
What this tests: Foundations of AI Project Governance (AIPGF)
Explanation: Before Activation, governance must be made operational in a way that fits the project’s size and risk. Here, the key tailoring is to use a lightweight AI Assistance Plan that translates the agreed constraints into clear decision rights, HITL review steps, and data-handling rules. Getting this approved establishes accountability for AI-assisted outputs before the team starts using the tool.
The core concept is proportionate governance with clear HITL and accountability before AI use moves from intent to execution. Even in a small project, the agreed constraint (“no personal data”) and the plan to share outputs beyond the team mean you need an explicit, approved way of working.
A lightweight AI Assistance Plan is the right Foundation artifact to bridge into Activation because it:
- translates the agreed constraints (“drafts only,” “no personal data or client-identifiable details”) into explicit data-handling rules
- defines the HITL review step and names the approver for anything shared outside the team
- establishes accountability and decision rights for AI-assisted outputs before the tool is used
Starting tool use first would leave controls to be applied in hindsight, and Evaluation/benefits activities are sequenced after controlled use is established.
This is the proportional Foundation-to-Activation gate that preserves accountability and operationalises HITL and data constraints before use.
Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.