AIPGF Foundation: Foundations of AI Project Governance (AIPGF)

Try 10 focused AIPGF Foundation questions on Foundations of AI Project Governance (AIPGF), with answers and explanations, then continue with PM Mastery.


Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: AIPGF Foundation
  • Topic area: Foundations of AI Project Governance (AIPGF)
  • Blueprint weight: 12%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Foundations of AI Project Governance (AIPGF) for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 12% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Foundations of AI Project Governance (AIPGF)

A PMO has been piloting a generative AI assistant to draft weekly project status reports using internal data. A near-miss occurred when a user pasted client-sensitive content into a prompt, and the draft report also included unverified statements.

The steering committee will only scale the AI assistant if they can reduce operational risk, stay aligned to organisational policies, and maintain stakeholder confidence. What is the best next step in the AIPGF flow?

  • A. Roll out the AI assistant to all project teams with quick-start guidance
  • B. Run an Evaluation stage review to quantify productivity benefits from the pilot
  • C. Start the Foundation stage to agree drivers and approve an AI Assistance Plan
  • D. Allow ad hoc use and add governance controls only if further incidents occur

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Because the decision to scale depends on risk mitigation, policy alignment, and stakeholder trust, the programme should first formalise governance in the Foundation stage. That means agreeing the rationale for adopting AIPGF, defining scope and decision rights for AI-assisted outputs, and producing an approved AI Assistance Plan before wider Activation.

The primary drivers for adopting AIPGF are to reduce AI-related uncertainty and risk (e.g., confidentiality leaks and hallucinations), align delivery with organisational policies and assurance expectations, and strengthen stakeholder trust so AI-assisted outputs can be used with confidence. In this scenario, scaling is explicitly conditional on those drivers, so the next step is to complete Foundation work: define the intended AI assistance, responsibilities and approvals (including HITL), and document controls and evidence in an AI Assistance Plan ready to operationalise in Activation. Only after that foundation is agreed should the organisation expand use, monitor, and later evaluate benefits and lessons learned.

Key takeaway: do not scale or “prove value” further until governance prerequisites for safe and trusted use are in place.

Formalising the drivers and controls in Foundation provides the basis for risk mitigation, compliance alignment, and stakeholder trust before Activation.


Question 2

Topic: Foundations of AI Project Governance (AIPGF)

A project is run using an initiate/plan/deliver/close life cycle with formal gate reviews at the end of initiate and plan. Midway through delivery, the team starts using a generative AI tool to draft requirements and test scripts.

Symptoms now reported: no documented prompts or AI output assumptions, no HITL sign-off on AI-assisted deliverables, some team members used an unapproved tool despite an existing corporate approved-tool list, stakeholders say outputs are “untrustworthy,” and expected time savings are not being realised or tracked.

What is the most likely underlying cause?

  • A. The AI tool selection was poor, so benefits could not be achieved
  • B. AIPGF stages were not mapped into the project life cycle, so Foundation/Activation/Evaluation governance checkpoints were missed
  • C. Stakeholders are resistant to change and therefore distrust any AI-assisted deliverables
  • D. The team failed to document prompts and assumptions for AI-assisted outputs

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The pattern of issues across approvals, traceability, tool use, trust, and benefits indicates a systemic governance integration gap rather than a single execution mistake. In AIPGF, mapping Foundation, Activation, and Evaluation activities into the project’s initiate/plan/deliver/close stages ensures the right artefacts, decision rights, and HITL controls happen at defined points. When that mapping is missing, teams adopt AI ad hoc and bypass intended gate controls.

This is a governance integration failure: the project’s method has gate reviews, but AIPGF was not embedded into them, so AI use began in delivery without the expected governance “hooks.” A practical mapping is:

  • Foundation → Initiate/Plan: define AI purpose, decision rights, HITL approval points, and create/approve the AI Assistance Plan
  • Activation → Deliver: controlled use of approved tools, prompt/output traceability, reviews and sign-offs for AI-assisted deliverables
  • Evaluation → Close (and post-close): assess outcomes, benefits realisation, incidents/lessons, and decide whether to continue/scale

Because these checkpoints weren’t mapped, symptoms appear across multiple control areas at once, not just in documentation or tooling.
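The stage-to-lifecycle mapping above can be sketched as a simple lookup. This is an illustrative study aid only: the stage names follow the text, but the data structure and checkpoint wording are assumptions, not official AIPGF content.

```python
# Sketch only: a hypothetical lookup of where AIPGF governance
# checkpoints belong in an initiate/plan/deliver/close life cycle.
# Checkpoint wording is illustrative, not official framework text.

AIPGF_TO_LIFECYCLE = {
    "Foundation": {
        "phases": ["Initiate", "Plan"],
        "checkpoints": [
            "AI purpose and decision rights agreed",
            "HITL approval points defined",
            "AI Assistance Plan approved",
        ],
    },
    "Activation": {
        "phases": ["Deliver"],
        "checkpoints": [
            "Approved tools only",
            "Prompt/output traceability",
            "HITL sign-off on AI-assisted deliverables",
        ],
    },
    "Evaluation": {
        "phases": ["Close"],
        "checkpoints": [
            "Benefits and outcomes review",
            "Incidents and lessons captured",
            "Continue/scale decision",
        ],
    },
}


def checkpoints_for(phase: str) -> list[str]:
    """Return the AIPGF checkpoints expected in a given life-cycle phase."""
    return [
        checkpoint
        for stage in AIPGF_TO_LIFECYCLE.values()
        if phase in stage["phases"]
        for checkpoint in stage["checkpoints"]
    ]
```

With a mapping like this, a gate review at the end of Plan would surface the Foundation checkpoints, making a missing AI Assistance Plan visible before delivery starts.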

Without mapping AIPGF to initiate/plan/deliver/close, the project never embedded required AI planning, approval, and evaluation controls into its gate reviews and work processes.


Question 3

Topic: Foundations of AI Project Governance (AIPGF)

In the APMG AI Project Governance Framework (AIPGF), which statement best describes a typical exit criterion for moving from the Foundation stage to the Activation stage in an AI-assisted project?

  • A. The AI tool has been configured and deployed to the whole project team
  • B. AI Assistance Plan approved, with HITL, risks, and decision rights defined
  • C. AI-generated deliverables have been accepted and benefits are already realised
  • D. Project charter approved and the delivery method (Agile/waterfall) selected

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Progressing from Foundation to Activation requires evidence that AI use is ready to be applied under control. A typical exit criterion is an approved AI Assistance Plan that defines the intended AI use, the HITL approach, key risks/controls, and who is accountable for approving AI-assisted outputs. This ensures Activation begins with clear governance rather than ad hoc tool use.

AIPGF stage progression should be gated by practical entry/exit criteria that show governance readiness. The Foundation stage establishes how AI will be used and controlled; the Activation stage is where AI assistance is actually applied to produce project work products. Therefore, a typical Foundation exit criterion is approval of an AI Assistance Plan (or equivalent) that sets out the use cases, defines Human-in-the-Loop checkpoints, records decision rights/accountable approvers for AI-assisted outputs, and identifies key risks with agreed controls. This provides transparency and proportional assurance before wider use begins. Merely selecting a delivery method, deploying a tool, or claiming benefits is not sufficient evidence that AI-assisted work will be governed and auditable.

Activation should start only after the AI use is governed: planned, risk-assessed, and accountabilities agreed.


Question 4

Topic: Foundations of AI Project Governance (AIPGF)

A small internal project (6 weeks, 4-person team) is producing a communications pack for a system change. The team wants to use an organisation-approved generative AI tool to draft staff emails and meeting minutes using only non-sensitive internal context. Outputs will be sent to ~500 employees, and the sponsor wants “lightweight” governance but insists on clear accountability.

Which action SHOULD the team AVOID when tailoring AIPGF activities and artifacts for this project?

  • A. Auto-distribute AI-generated communications without pre-release human approval
  • B. Assign an accountable human approver for each AI-assisted deliverable
  • C. Add one AI-specific risk entry with proportionate controls and ownership
  • D. Create a short AI Assistance Plan defining allowed data, uses, and reviews

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: AIPGF tailoring means scaling governance to the context while preserving Human-in-the-Loop and clear decision rights. Even in a small, low-sensitivity project, communications to a large internal audience can create trust and reputational impacts. Lightweight artifacts and controls are appropriate, but pre-release human approval must remain explicit and accountable.

The core tailoring concept in AIPGF is proportional governance: keep activities and artifacts as light as the project allows, but do not dilute HITL or accountability for AI-assisted outputs. In this scenario, the project is small and uses non-sensitive data, so a concise AI Assistance Plan and a single risk entry can be sufficient evidence of controlled AI use. However, because the AI drafts content that will be distributed widely, there must be an identified human who is accountable for approving the final versions before release.

Practical, proportionate tailoring here includes:

  • Documenting permitted AI use and data boundaries (AI Assistance Plan)
  • Logging key decisions/assumptions for auditability (lightweight)
  • Recording one consolidated AI risk with controls, owner, and review approach
  • Requiring human sign-off before distribution to maintain HITL

Automation may support drafting and formatting, but release decisions must remain human-owned.
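The release rule above can be sketched as a minimal gate check. This is a hedged illustration, assuming hypothetical record fields (`status`, `approved_by`) that are not part of AIPGF itself.

```python
# Minimal sketch of a pre-release HITL gate, using assumed field names:
# an AI-drafted communication may be distributed only after a named,
# accountable human approver has signed it off.

def may_release(item: dict) -> bool:
    """True only when a named human has approved the item for release."""
    return item.get("status") == "approved" and bool(item.get("approved_by"))
```

Under this rule, a draft with no `approved_by` value is always blocked, which is exactly why option A (auto-distribution with no pre-release approval) breaks HITL.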

It removes HITL and weakens accountability by releasing AI outputs before a named human approves them.


Question 5

Topic: Foundations of AI Project Governance (AIPGF)

In the APMG AI Project Governance Framework (AIPGF), which definition best describes an AI Assistance Plan?

  • A. A governance artifact that defines and controls how AI may be used in the project, including data boundaries, decision rights/approvals (e.g., HITL), and assurance needs
  • B. A complete replacement for the project management plan, covering scope, schedule, cost, and resources end to end
  • C. A mandatory regulator-facing checklist whose main purpose is to prove legal compliance for any AI use
  • D. A detailed technical specification for building, training, and tuning the AI model used by the project

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: An AI Assistance Plan is structured governance: it clarifies what AI can be used for, under what constraints, and who remains accountable for AI-assisted outputs. By setting decision rights, data boundaries, and assurance expectations, it keeps governance focused on controlled AI use and stakeholder trust rather than bureaucracy.

In AIPGF, an AI Assistance Plan is a practical governance artifact used to integrate AI into delivery in a controlled, scalable way. It makes AI usage transparent and accountable by documenting where AI will support work (e.g., drafting deliverables), what inputs are permitted (privacy/IP boundaries), what controls apply (such as HITL review and evidence retention), and who has approval authority for AI-assisted outputs. This turns “AI use” into governed project activity with clear outcomes: reduced unmanaged risk, higher trust, and auditable decisions. It is not meant to over-document; it is tailored proportionately to the project’s risk and complexity so governance remains outcomes-focused rather than bureaucratic.

It is an outcomes-focused control document that makes AI use explicit, accountable, and auditable rather than adding unnecessary process.


Question 6

Topic: Foundations of AI Project Governance (AIPGF)

You are the PM for an internal transformation project. The team wants to use a generative AI tool to speed up early-stage deliverables: a draft project charter, an initial risk list, a stakeholder map, and a first set of status-update templates.

Your organisation’s AI governance requires: use of approved tools only, no client/personally identifiable data in prompts, Human-in-the-Loop (HITL) review before use, and traceability of significant AI-assisted decisions/outputs.

Which approach best balances ethical, efficient, and effective AI use with the required governance controls?

  • A. Let AI generate the charter, risks, and stakeholder map end-to-end, and only review them during the next stage-gate to save time now.
  • B. Use a public AI tool with full project details to get the most accurate charter and stakeholder map, then store the final versions in the project repository.
  • C. Use an approved AI tool with sanitized inputs, document intended uses in an AI Assistance Plan, label outputs as AI-assisted, and have the sponsor/team review and approve before baselining.
  • D. Avoid AI for all project management deliverables until the project is in delivery, so governance risks are eliminated in the Foundation stage.

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The best option uses AI for high-value assistance (drafting and structuring common project artifacts) while applying proportional governance controls: approved tools, protected data handling, transparency, and HITL review. It also ensures accountability by requiring review/approval before the AI-assisted outputs are baselined and used to guide the project.

In AI-assisted projects, common uses include drafting foundational documents (charter/business case sections), generating initial risk registers, mapping stakeholders, and producing communication templates. Because AI outputs can be non-deterministic and may introduce errors or bias, AIPGF expects controls that preserve trust and accountability.

A balanced approach in the Foundation stage is to:

  • Use only approved AI tools and sanitize/anonymize prompt inputs.
  • Record the intended use, boundaries, and review process in an AI Assistance Plan.
  • Apply HITL review and appropriate approvals (e.g., sponsor approval for the charter) before baselining.
  • Maintain transparency by marking AI-assisted content and keeping traceability for significant outputs/decisions.

This achieves efficiency without removing essential safeguards needed for reliable governance.

It enables rapid drafting while maintaining transparency, data protection, HITL assurance, and auditable approval/traceability for key project artifacts.


Question 7

Topic: Foundations of AI Project Governance (AIPGF)

A programme is delivering a new internal HR portal. The team plans to use a generative AI tool to (1) draft weekly status reports and update the RAID log for the programme manager, and (2) draft employee onboarding microlearning text that will be published in the portal as part of the solution.

The sponsor asks for evidence that the team has correctly distinguished AI assistance in project management from AI used in project deliverables, and that appropriate governance can be applied before moving into Activation.

Which artifact best validates this readiness in context?

  • A. Benefits tracker showing time saved from AI-generated status reporting
  • B. AI Assistance Plan with an inventory classifying AI uses as “PM support” vs “deliverable content,” with HITL approval points
  • C. RACI chart for programme reporting and stakeholder communications
  • D. AIPG-CMM assessment summary showing overall AI governance maturity level

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The most direct validation is an artifact that records intended AI use-cases and separates AI used to support project management from AI that contributes to the solution’s deliverables. An AI Assistance Plan provides this classification and ties it to Human-in-the-Loop review and decision/approval points, demonstrating readiness to govern both kinds of AI use appropriately.

An AI-assisted project/programme uses AI tools to support project/programme management activities (e.g., drafting reports, updating logs, planning) while keeping accountability with humans. In contrast, AI used in project deliverables means AI-generated or AI-driven content/components are incorporated into what the project delivers to users (e.g., published training content, requirements embedded in the product, AI functionality).

The artifact that best validates the distinction is one that:

  • lists each AI use-case and where it is applied
  • classifies whether the AI output is for PM work or becomes part of a deliverable
  • defines HITL checks, approvers, and what evidence is retained

That enables proportional governance: lighter controls for PM assistance and stronger validation/assurance when AI influences deliverables.
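The inventory described above can be sketched as simple records. The field names and entries are illustrative assumptions for this scenario, not an official AIPGF schema.

```python
# Illustrative AI-use inventory for the HR portal scenario, classifying
# each use as PM support versus deliverable content, with its HITL
# approver. Structure and wording are assumptions, not framework text.

AI_USE_INVENTORY = [
    {"use": "Draft weekly status reports", "category": "PM support",
     "hitl_approver": "Programme manager"},
    {"use": "Update RAID log entries", "category": "PM support",
     "hitl_approver": "Programme manager"},
    {"use": "Draft onboarding microlearning text", "category": "deliverable content",
     "hitl_approver": "HR content owner"},
]


def deliverable_uses(inventory: list[dict]) -> list[str]:
    """Uses whose AI output ships in the solution and therefore needs
    stronger validation than PM-support assistance."""
    return [item["use"] for item in inventory
            if item["category"] == "deliverable content"]
```

Filtering the inventory this way shows at a glance which uses need deliverable-grade assurance; everything else can take lighter PM-support controls.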

It explicitly documents where AI supports management versus where AI outputs become deliverables, enabling proportional controls and approvals.


Question 8

Topic: Foundations of AI Project Governance (AIPGF)

Which statement best defines an AI-assisted project/programme in the APMG AI Project Governance Framework (AIPGF)?

  • A. A project/programme where AI is used only to generate project deliverables and is not used in project management activities
  • B. A project/programme where AI is limited to administrative scheduling and cannot influence any deliverables
  • C. A project/programme where AI tools assist project management and/or help create project outputs, with humans retaining accountability for decisions and approvals
  • D. A project/programme whose end product is an AI-enabled system delivered into operations

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: In AIPGF, “AI-assisted” refers to using AI to support the work of delivering a project or programme—either by assisting project management tasks (planning, reporting, decision support) and/or by helping generate project outputs. The key distinction is that AI is providing assistance within delivery activities, and human accountability for approvals remains in place.

An AI-assisted project/programme is defined by how AI is used during delivery, not by whether the final product contains AI. Under AIPGF, AI assistance can apply to:

  • Project management work (e.g., drafting plans/status reports, analysing risks, supporting estimates/decisions)
  • Creation of project outputs/deliverables (e.g., drafting requirements, test cases, comms, designs)

This differs from a project whose purpose is to deliver an AI-enabled operational capability (an “AI product/system” outcome). AIPGF focuses governance on the risks introduced when AI contributes to PM decisions or project artefacts, while keeping human decision rights and approval responsibilities clear (HITL).

AIPGF uses “AI-assisted” for AI supporting PM activities and/or producing project artefacts/deliverables, without implying the project must deliver AI as the end product.


Question 9

Topic: Foundations of AI Project Governance (AIPGF)

A PMO is piloting a generative AI assistant to draft weekly project status summaries from already-approved schedule and cost data. Outputs will be checked and edited by the project manager before publishing (HITL). No personal data or client-confidential information will be used, and the pilot will run for 6 weeks across three small internal projects. The organisation’s AI adoption maturity is low (first structured use).

Which governance approach SHOULD the PMO AVOID when scaling AIPGF governance to this pilot?

  • A. Time-boxing an Evaluation-stage checkpoint to review issues, benefits, and whether to scale up controls
  • B. Adding a simple risk register entry with triggers to escalate governance if scope or data sensitivity increases
  • C. Mandating a full independent model audit and continuous production monitoring before any pilot use
  • D. Creating a lightweight AI Assistance Plan defining data sources, HITL checks, and approvers

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: AIPGF expects governance depth to be proportionate to size, complexity, risk, and AI maturity. In a short, low-risk internal pilot with clear HITL review and non-sensitive inputs, governance should be lightweight but explicit. Requiring heavy, production-grade assurance before any pilot learning is an over-governance anti-pattern.

Scalable AI project governance tailors controls to the context rather than applying a one-size-fits-all regime. Here, the AI use is limited (drafting summaries), inputs are already-approved project data, and a human remains accountable for the final output (HITL). That supports a light Foundation/Activation setup (clear purpose, boundaries, decision rights, and basic risk controls) plus a planned Evaluation checkpoint to decide whether stronger assurance is needed.

Over-governing happens when high-cost, high-latency controls intended for high-risk or externally facing AI (e.g., independent audits and continuous monitoring) are imposed on a low-risk, time-boxed pilot, reducing learning and adoption without proportionate risk reduction. The key takeaway is to start with minimum sufficient governance and define escalation triggers as risk changes.

This imposes high-assurance controls disproportionate to a low-risk, short pilot with HITL review.


Question 10

Topic: Foundations of AI Project Governance (AIPGF)

A small internal project team wants to use a generative AI tool to speed up production of meeting minutes and a weekly RAID summary. In a Foundation workshop, they agreed the AI will only create drafts and a human will approve anything shared outside the team. The information security lead has added one constraint: “No personal data or client-identifiable details may be entered into the tool.”

The project manager now wants to start using the tool in the next sprint.

What is the best next step in the AIPGF flow?

  • A. Request sponsor sign-off on claimed time savings and benefits realisation targets
  • B. Start using the tool with anonymised inputs and capture issues after the sprint review
  • C. Finalise and obtain approval of a lightweight AI Assistance Plan defining HITL checks, allowed data, and accountable approvers
  • D. Run an Evaluation and lessons-learned session after the first set of minutes is produced

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Before Activation, governance must be made operational in a way that fits the project’s size and risk. Here, the key tailoring is to use a lightweight AI Assistance Plan that translates the agreed constraints into clear decision rights, HITL review steps, and data-handling rules. Getting this approved establishes accountability for AI-assisted outputs before the team starts using the tool.

The core concept is proportionate governance with clear HITL and accountability before AI use moves from intent to execution. Even in a small project, the agreed constraint (“no personal data”) and the plan to share outputs beyond the team mean you need an explicit, approved way of working.

A lightweight AI Assistance Plan is the right Foundation artifact to bridge into Activation because it:

  • sets who is accountable for each AI-assisted deliverable
  • defines HITL review/approval checkpoints before distribution
  • codifies what data can/cannot be entered and how prompts/outputs are handled
  • specifies minimal logging (e.g., decision log entries for significant use)

Starting work first pushes controls into hindsight, and Evaluation/benefits activities are sequenced after controlled use is established.

This is the proportional Foundation-to-Activation gate that preserves accountability and operationalises HITL and data constraints before use.

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026