AIPGF Practitioner: Foundations of AI Project Governance (AIPGF)

Try 10 focused AIPGF Practitioner questions on Foundations of AI Project Governance (AIPGF), with answers and explanations, then continue with PM Mastery.

Topic snapshot

  • Exam route: AIPGF Practitioner
  • Topic area: Foundations of AI Project Governance (AIPGF)
  • Blueprint weight: 12%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Foundations of AI Project Governance (AIPGF) for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind the misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 12% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Foundations of AI Project Governance (AIPGF)

A city agency is piloting a vendor GenAI tool that drafts benefit-eligibility case summaries for reviewers. A previous automation incident has left community stakeholders skeptical, and internal audit has stated that any AI-assisted decision must be traceable to a named accountable person with an evidence trail showing what the AI produced and what the human approved.

What is the BEST AIPGF governance action to improve stakeholder trust through transparency and accountability?

  • A. Run additional accuracy testing before expanding the pilot
  • B. Rely on the vendor’s assurances and SOC reports for audit needs
  • C. Baseline an AI Assistance Plan with RACI and audit-trail evidence
  • D. Increase automation to remove subjective human variation

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Stakeholder trust is low and auditability is strict, so governance must make AI use visible and assign clear accountability for AI-assisted outputs and decisions. An AI Assistance Plan, backed by defined roles and required decision/audit logs, creates the transparency and traceability stakeholders and auditors need. This directly operationalizes AIPGF’s transparency and accountability expectations.

When stakeholder trust is fragile, the governance gap is usually not “more AI performance,” but missing transparency and unclear accountabilities. In AIPGF terms, the most effective control is to formalize how AI will be used and how decisions remain owned by humans.

Baselining an AI Assistance Plan addresses this by:

  • Defining where AI is used (and where it is prohibited)
  • Assigning decision rights and accountability (who signs off and who escalates)
  • Requiring evidence for auditability (e.g., decision logs capturing AI output, human edits/approval, and rationale)

This creates an inspectable trail that supports assurance and aligns delivery practices to governance standards, which is what auditors and skeptical stakeholders are asking for.

It makes AI use explicit and assigns decision accountability while requiring auditable records that can be shared and inspected.
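
For readers who find a concrete artifact easier to picture, the sketch below restates the elements above as structured content. AIPGF does not prescribe any file format for an AI Assistance Plan, so the structure and key names here are assumptions made purely for illustration.

```python
# Purely illustrative sketch of an AI Assistance Plan baseline for the scenario.
# AIPGF does not define this structure; the keys are invented for this example.
ai_assistance_plan = {
    "permitted_uses": [
        "Draft benefit-eligibility case summaries for a named reviewer to approve",
    ],
    "prohibited_uses": [
        "Issuing an eligibility decision without human review and sign-off",
    ],
    "raci": {
        "responsible": "Eligibility reviewer (edits and approves each summary)",
        "accountable": "Benefits manager (named accountable owner)",
        "consulted": "Internal audit; community liaison group",
        "informed": "Agency leadership",
    },
    "audit_evidence": [
        "AI output as produced",
        "Human edits and final approval",
        "Approver name, timestamp, and rationale for the decision",
    ],
}

# A reviewer or auditor can then check any case record against these rules.
print(ai_assistance_plan["audit_evidence"])
```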


Question 2

Topic: Foundations of AI Project Governance (AIPGF)

A product team wants to start using a GenAI assistant to draft responses for a customer support center next sprint. The sponsor asks you to “fast-track” the project from the AIPGF Foundation stage into Activation, but the request includes only a high-level description (“it will speed up replies”) and no governance artifacts.

To determine the correct entry/exit criteria for the stage transition (and how the governance should be scaled to the context), what should you verify first?

  • A. The intended use and AI risk tier (decision impact, users, and harm severity)
  • B. The expected productivity uplift and how it will be reported to the sponsor
  • C. The planned sprint cadence and which agile ceremonies will include AI updates
  • D. The preferred model type and prompt patterns the team plans to use

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Stage progression in AIPGF is gated by evidence that is proportional to the AI’s risk and intended use. Before you can judge whether Foundation exit criteria are met (and what controls are required in Activation), you must first classify the use case and its risk tier based on decision impact, affected users, and potential harms.

In AIPGF, moving between stages (such as Foundation to Activation) depends on meeting stage-specific entry/exit criteria, and those criteria must be scalable to the context. The first clarifying step is to confirm what the AI will do in practice and how risky it is (e.g., whether outputs influence customer decisions, operational actions, or regulated commitments; who is affected; and the plausible harm severity). That risk-tier view drives which Foundation artifacts and approvals are mandatory before Activation, such as an AI Assistance Plan scope, human decision rights/HITL expectations, acceptance criteria for AI-assisted outputs, data rights/privacy checks, and required logging/assurance evidence. Without the intended use and risk tier, any “go/no-go” decision or evidence list would be arbitrary.

The key takeaway is that stage-gate criteria are set by intended use and risk, not by delivery cadence or technical preferences.

Risk tier and intended decision impact determine which Foundation exit evidence and Activation entry criteria must be met before proceeding.
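
To make the idea concrete, here is a minimal sketch of how an assessed risk tier could drive the Foundation exit evidence required before Activation. The tier names and evidence lists are assumptions for illustration, not values defined by AIPGF.

```python
# Illustrative only: tier names and evidence lists are assumptions, not
# AIPGF-defined values. The point is that the assessed risk tier determines
# which Foundation exit evidence is mandatory before Activation.
FOUNDATION_EXIT_EVIDENCE = {
    "low": [
        "Light-touch AI Assistance Plan (scope and tool boundaries)",
        "Named accountable owner",
    ],
    "medium": [
        "AI Assistance Plan with HITL review points",
        "Data rights and privacy check",
        "Acceptance criteria for AI-assisted outputs",
    ],
    "high": [
        "Approved AI Assistance Plan with decision rights and escalation path",
        "Data rights, privacy, and logging/assurance evidence",
        "Gate approval recorded in the decision log",
    ],
}


def required_evidence(risk_tier: str) -> list[str]:
    """Return the Foundation exit evidence implied by the assessed risk tier."""
    return FOUNDATION_EXIT_EVIDENCE[risk_tier]


# A GenAI assistant drafting customer replies influences external communications,
# so it would likely sit above the lowest tier and need more evidence.
print(required_evidence("medium"))
```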


Question 3

Topic: Foundations of AI Project Governance (AIPGF)

A retail bank is about to activate a GenAI assistant that drafts customer “adverse action” letters. The project is rated high-risk because letters are externally issued and errors could cause customer harm.

Internal Audit will only approve go-live if the team can demonstrate that the Human-in-the-Loop (HITL) rule is enforced: a qualified human must review and approve the final letter before it is sent, and there is a clear escalation path for uncertain cases.

Which artifact/evidence best validates readiness and effectiveness of this mandatory HITL control?

  • A. AIPG-CMM assessment showing improved maturity since last quarter
  • B. Benefits tracker showing reduced drafting time and fewer rework cycles
  • C. Risk register entry listing “letter errors” with proposed mitigations
  • D. Approved AI Assistance Plan with documented HITL checkpoints and approvers

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The strongest validation is evidence that the process control is designed and ready to operate before activation. An AI Assistance Plan is the governance artifact that specifies mandatory HITL review/approval points, accountable approvers, and the operational criteria (workflow, escalation, records) needed to prove the control will prevent unsupervised external release.

In AIPGF, HITL is mandatory when AI output could materially affect people, decisions, or external communications, especially in high-risk contexts. Readiness evidence should therefore show an enforceable control: where human review is required, who is authorized to approve, what happens when confidence/clarity is low, and how the approval is recorded for audit.

An approved AI Assistance Plan is purpose-built for this: it defines permitted use, mandatory review/approval gates (e.g., “no external letter without sign-off”), roles/decision rights, escalation rules, and the records to retain (e.g., approval logs). A risk register or maturity score can support governance, but they do not by themselves prove that the mandatory HITL control is implemented and will operate at go-live.

It explicitly defines where human approval is mandatory, who has decision rights, and the go/no-go evidence that the control is in place.
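
If it helps to visualize "readiness and effectiveness", the sketch below shows how the HITL rule in an approved plan might be enforced in the letter workflow. The class and function names are invented for this example; AIPGF specifies the control, not an implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative enforcement of the mandatory HITL rule: no external letter
# is released without a recorded human approval. Names are invented here.


@dataclass
class AdverseActionLetter:
    letter_id: str
    draft_text: str                    # GenAI-drafted content
    approved_by: Optional[str] = None  # qualified reviewer who signed off
    escalated: bool = False            # routed to a senior reviewer if uncertain


def release_letter(letter: AdverseActionLetter) -> None:
    """Send the letter only if the mandatory human approval is on record."""
    if letter.approved_by is None:
        raise PermissionError(
            f"{letter.letter_id}: no recorded human approval; HITL gate blocks release."
        )
    # ...send the letter and retain the approval record as audit evidence...
    print(f"{letter.letter_id} released; approved by {letter.approved_by}.")


letter = AdverseActionLetter("LTR-0042", "Draft text...", approved_by="A. Chen (Credit Officer)")
release_letter(letter)
```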


Question 4

Topic: Foundations of AI Project Governance (AIPGF)

A project is required to demonstrate an auditable trail showing who made key AI-related decisions (including approvals at governance gates), what was decided, when, and the supporting rationale. In AIPGF terms, which project artifact most directly satisfies this governance requirement?

  • A. Model card
  • B. Decision log
  • C. Risk log
  • D. Benefits tracker

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The requirement is an audit-ready record of decisions, approvals, timing, and rationale. In AIPGF, the decision log is the artifact intended to capture and evidence governance decisions (including gate outcomes) with clear accountability. This directly supports transparency and audit trail expectations.

A high-level governance requirement for assurance and auditability needs a concrete artifact that captures decision evidence. In AIPGF, a decision log is used to record significant decisions (including go/no-go gate outcomes), who had the decision rights, who approved, when the decision was made, and why (including the evidence consulted). This creates a traceable audit trail across the project lifecycle and supports standards alignment by making governance actions demonstrable rather than implicit. A risk log and benefits tracker are important governance tools, but they do not, by themselves, provide a complete record of approvals and rationales for key decisions. A model card documents model characteristics and intended use, not organizational decision-making and approvals.

A decision log records key decisions, decision rights/approvals, timestamps, and rationale to support auditability.
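
As a minimal sketch (the field names are invented here; AIPGF describes what a decision log must capture, not a schema), a single entry might record:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative decision log entry: the fields mirror the explanation above
# (who decided, what was decided, when, and the supporting rationale).


@dataclass
class DecisionLogEntry:
    decision_id: str
    decision: str       # what was decided, e.g. a gate outcome
    decided_by: str     # role holding the decision rights
    approved_by: str    # who approved the decision
    decided_on: date    # when the decision was made
    rationale: str      # reasoning and the evidence consulted


entry = DecisionLogEntry(
    decision_id="D-014",
    decision="Go: proceed from Foundation to Activation",
    decided_by="Project sponsor",
    approved_by="AI governance board",
    decided_on=date(2026, 3, 2),
    rationale="Risk tier confirmed; AI Assistance Plan and privacy check approved.",
)
print(entry)
```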


Question 5

Topic: Foundations of AI Project Governance (AIPGF)

A programme is launching a customer-service portal rebuild. The sponsor says, “We’ll use GenAI to speed up delivery,” and the shortlisted vendor also mentions an “AI-powered virtual agent” as an optional feature. You have been asked to confirm whether this is an AI-assisted programme under AIPGF and what falls in scope.

What is the FIRST clarifying question you should ask?

  • A. What is the programme’s risk tier and which governance gate owners have already approved it?
  • B. What acceptance criteria will be used to sign off AI-generated requirements and plans?
  • C. What data rights do we have to fine-tune a model using historic customer conversations?
  • D. Will any AI be embedded in the delivered portal (or influence operational decisions), or is AI only being used to assist the team’s project management work products?

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The first step is to establish what “using GenAI” means in this context: AI supporting project management activities versus AI included in the delivered product/service. That distinction drives whether the programme is AI-assisted in governance terms and what controls and evidence are needed for delivery versus internal productivity use.

To define an AI-assisted project/programme under AIPGF, you must first clarify where AI is being used and what it affects. AI assistance in project management covers using AI to help create or analyze management artifacts (e.g., drafts of requirements, plans, status reports) while humans retain decision accountability and the deliverable itself is non-AI. AI used in project deliverables means an AI capability is part of what is delivered or materially influences operational/user outcomes (e.g., an in-portal virtual agent), which typically increases governance scope and assurance needs.

In this scenario, “use GenAI to speed up delivery” and an “AI-powered virtual agent” could mean very different things. Confirming whether AI is embedded in the portal (or influences real-world decisions) versus only assisting the team is the necessary first clarification; other questions are downstream once that boundary is known.

This determines whether AI is part of the deliverable versus only assisting project management, which is the key distinction for defining an AI-assisted programme’s scope.


Question 6

Topic: Foundations of AI Project Governance (AIPGF)

In the APMG AI Project Governance Framework (AIPGF), which term/principle most directly means that the depth of governance (controls, evidence, and approvals) should be tailored to the project’s size, complexity, risk tier, and AI adoption maturity to avoid over-governing or under-governing?

  • A. Evaluation stage
  • B. Human-Centricity
  • C. Transparency
  • D. Adaptability

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: Adaptability in AIPGF focuses on tailoring governance to context. It supports scaling gates, artifacts, and assurance activities so they are proportionate to the project’s risk, complexity, and organizational AI maturity, helping teams avoid both excessive bureaucracy and insufficient control.

The core concept is proportionate, risk-aware tailoring of governance. In AIPGF, this is captured by the Adaptability principle: governance is not “one size fits all,” so the amount of control and evidence should scale with factors such as project size, solution complexity, risk tier/criticality, and AI adoption maturity.

Applied in practice, Adaptability typically means:

  • Lighter touch for low-risk, well-understood use cases (fewer gates, simpler evidence)
  • Deeper assurance for high-impact or novel AI (stronger approvals, more documentation, tighter monitoring)
  • Adjusting governance as learning emerges across the life cycle

This is different from being open about decisions (Transparency) or prioritizing human outcomes (Human-Centricity).

It requires proportionate, context-aware tailoring of governance so oversight matches risk, complexity, and maturity.


Question 7

Topic: Foundations of AI Project Governance (AIPGF)

A retail bank is piloting GenAI to summarize customer support transcripts and suggest credit-limit changes for relationship managers. The pilot is rated high risk because it handles customer PII and influences decision-critical outcomes.

At the Activation go/no-go gate, the sponsor asks for evidence that the team has identified tasks where AI assistance is inappropriate (e.g., handling confidential data in unapproved tools or making/automating credit decisions) and has put enforceable controls in place.

Which artifact best validates this readiness and control effectiveness?

  • A. Benefits Tracker showing cycle-time reduction and improved customer satisfaction
  • B. Approved AI Assistance Plan documenting prohibited uses, data handling rules, and HITL approvals
  • C. AIPG-CMM assessment results showing overall AI governance maturity level
  • D. RACI matrix listing roles for model owner, compliance, and operations

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The strongest validation is evidence that explicitly defines and enforces boundaries for AI use in a high-risk, confidential, decision-critical context. An AI Assistance Plan captures permitted vs. prohibited tasks, data classification constraints, approved tools, and required Human-in-the-Loop reviews and sign-offs. That directly demonstrates the team has determined where AI assistance is inappropriate and has operationalized the controls before go-live.

When confidentiality, safety, or decision-criticality is high, governance must make AI-use boundaries explicit and auditable before activation. The most defensible evidence is an AI Assistance Plan that ties the project’s tasks and data types to clear rules such as “no customer PII in unapproved AI tools” and “AI may assist drafting/summarizing, but humans make and record the final credit decision,” with named approvals and HITL checkpoints.

A fit-for-purpose AI Assistance Plan typically includes:

  • Data classification and handling constraints for each input/output
  • Permitted and prohibited AI-assisted activities (especially decision-critical tasks)
  • Tooling boundaries (approved environments/models) and escalation paths
  • Human accountability and review requirements for AI-assisted outputs

High-level maturity scores or role charts can support governance, but they do not prove the project has implemented these task- and data-specific restrictions.

It directly evidences where AI must not be used due to confidentiality/decision-criticality and shows the agreed controls and human accountabilities.


Question 8

Topic: Foundations of AI Project Governance (AIPGF)

In AIPGF terminology, what is the term for ensuring a named person reviews AI-assisted outputs and remains accountable for the final decision before the output is used operationally?

  • A. Transparency
  • B. Human-in-the-Loop (HITL)
  • C. AI Assistance Plan
  • D. Full automation

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: This describes Human-in-the-Loop (HITL), where a human is deliberately kept in the decision flow to review AI-assisted content and take accountability for what is ultimately approved and used. It directly addresses common governance failure modes such as unreviewed outputs and overreliance on AI recommendations.

Human-in-the-Loop (HITL) is an AIPGF governance control pattern that keeps a responsible human actively involved in reviewing, challenging, and approving AI-assisted outputs before they are used in deliverables or operational decisions. In practice, HITL clarifies decision rights and accountability (who signs off), reduces the risk of unreviewed or low-quality AI content being propagated, and mitigates automation bias (treating AI outputs as inherently correct). HITL is especially important where AI outputs could affect customers, staff, financial outcomes, or compliance, and it should be proportionate to the project’s risk tier and context. The key takeaway is that HITL is about mandatory human review and retained accountability, not merely disclosing AI use or documenting a plan.

HITL establishes required human review and retained accountability to prevent unreviewed or over-relied-on AI outputs.


Question 9

Topic: Foundations of AI Project Governance (AIPGF)

Your organization is already implementing an ISO/IEC 42001 AI management system (AIMS) at enterprise level. A new GenAI-assisted claims triage project is entering Activation next week and has been rated high-risk.

Constraints:

  • HITL is mandatory: only a human can approve claim denials
  • An internal audit in 3 months requires end-to-end auditability of AI use and decisions
  • Go-live is in 6 weeks
  • The vendor will not provide model internals; only prompts, outputs, and configuration can be logged

What is the BEST next governance action to ensure AIPGF complements (not replaces) ISO/IEC 42001 in this project environment?

  • A. Tailor AIPGF gates/artifacts and map evidence to ISO 42001 controls
  • B. Replace ISO 42001 procedures with AIPGF checklists to save time
  • C. Proceed using ISO 42001 certification as the only assurance
  • D. Delay Activation until the enterprise AIMS is rewritten for GenAI

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: ISO/IEC 42001 provides the enterprise AI management system, while AIPGF provides project-level lifecycle governance to implement and evidence those requirements in delivery. The best next step is to tailor AIPGF controls, roles, and artifacts for this high-risk, HITL scenario and explicitly map the resulting evidence to the organization’s ISO/IEC 42001 control set for auditability.

AIPGF complements ISO/IEC 42001 by translating enterprise AIMS requirements into stage-appropriate project governance: clear accountabilities, lifecycle gates (Foundation/Activation/Evaluation), and practical artifacts (plans, logs, assurance evidence) that demonstrate how the project conforms to the AIMS.

In this scenario, the project is high-risk, time-boxed, and must be auditable even without access to model internals. The next governance action is therefore to:

  • Define/tailor AIPGF Activation gate criteria for HITL operation and vendor/tool limits
  • Produce project artifacts (e.g., AI Assistance Plan, decision logs, risk log, monitoring plan)
  • Map those artifacts and approvals to the relevant ISO/IEC 42001 controls so audit evidence is traceable

Treating ISO/IEC 42001 as “already handled” leaves a project evidence gap, while replacing the AIMS breaks standards alignment.

This operationalizes ISO/IEC 42001 requirements through project-level AIPGF lifecycle gates, roles, and evidence without substituting the enterprise AIMS.
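
One practical way to hold this together is an evidence map from project artifacts to the AIMS controls they support, maintained alongside the decision log. The sketch below is illustrative only; the control descriptions are paraphrased placeholders, not ISO/IEC 42001 clause text, and a real mapping would reference the organization's own AIMS control identifiers.

```python
# Illustrative evidence map: project-level AIPGF artifacts on the left, the
# enterprise AIMS controls they evidence on the right. Control descriptions
# are placeholders, not ISO/IEC 42001 clause text.
EVIDENCE_MAP = {
    "AI Assistance Plan (HITL rules, vendor/tool limits)":
        "AIMS control on human oversight of AI-assisted decisions",
    "Decision log (Activation gate outcome, named approvers)":
        "AIMS control on accountability and decision records",
    "Prompt/output/configuration logs from the vendor tool":
        "AIMS control on AI system logging and traceability",
    "Risk log and monitoring plan":
        "AIMS controls on AI risk assessment and ongoing monitoring",
}

# Auditors can then trace each required control to concrete project evidence.
for artifact, control in EVIDENCE_MAP.items():
    print(f"{artifact} -> {control}")
```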


Question 10

Topic: Foundations of AI Project Governance (AIPGF)

A project team creates a document that lists which project activities will use AI tools (for example, drafting the business case/charter and generating risk candidates), the required Human-in-the-Loop reviews and sign-offs, data/prompt constraints, and how AI-assisted outputs will be recorded for audit.

In AIPGF, what is this artifact called?

  • A. Decision log
  • B. Risk register
  • C. Model card
  • D. AI Assistance Plan

Best answer: D

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: This describes an AIPGF artifact used to govern how AI supports day-to-day project work. It makes AI use explicit (what tasks, what constraints), defines HITL oversight, and specifies what evidence will be retained to support transparency and assurance.

The core concept is making AI assistance in project delivery explicit and governable. An AI Assistance Plan documents how AI tools will be used across project activities (such as drafting documents, analyzing risks, mapping stakeholders, or generating communications), what human checks are required before AI outputs are relied upon, and what constraints apply (for example, prohibited data types or required prompt/output retention). This supports the AIPGF principles of Human-Centricity and Transparency by clarifying accountability and ensuring AI-assisted work is reviewable and auditable.

Key contents typically include:

  • Intended AI-assisted tasks and boundaries
  • HITL review points and approvers
  • Data/prompt handling rules and evidence to retain

A nearby but different artifact is a decision log, which records decisions made, not the planned governance of AI assistance.

It defines where AI will be used in project work, the human oversight required, and the controls/evidence needed for transparent, auditable use.

Continue with full practice

Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Free review resource

Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026