Free AIPGF Foundation Full-Length Practice Exam: 40 Questions

Try 40 free AIPGF Foundation questions across the exam domains, with answers and explanations, then continue in PM Mastery.

This free full-length AIPGF Foundation practice exam includes 40 original PM Mastery questions across the exam domains.

The questions are original PM Mastery practice questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.

A note on question count: this page uses the full-length practice count maintained in the Mastery exam catalog. Exam sponsors may publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

For concept review before or after this set, use the AIPGF Foundation guide on PMExams.com.

How to run this diagnostic

Set a 40-minute timer and answer the 40 questions in one pass. Keep notes on whether each miss came from governance structure, lifecycle controls, accountabilities, responsible-AI principles, culture, or assurance evidence.

Suggested timing checkpoints:

| Question range | Target elapsed time |
| --- | --- |
| 1-10 | 10 minutes |
| 11-20 | 20 minutes |
| 21-30 | 30 minutes |
| 31-40 | 40 minutes |

Exam snapshot

| Item | Detail |
| --- | --- |
| Issuer | APMG International |
| Exam route | AIPGF Foundation |
| Official exam name | APMG AI Project Governance Framework (AIPGF) Foundation |
| Full-length set on this page | 40 questions |
| Exam time | 40 minutes |
| Topic areas represented | 8 |

Full-length exam mix

| Topic | Approximate official weight | Questions used |
| --- | --- | --- |
| Foundations of AI Project Governance (AIPGF) | 12% | 5 |
| AI in Projects and Organizations (Context) | 12% | 5 |
| Framework Structure and Controls | 13% | 5 |
| Roles, Responsibilities, and Accountabilities | 13% | 5 |
| Principles for Responsible and Trustworthy AI | 13% | 5 |
| Values, Behaviours, and Culture | 12% | 5 |
| Lifecycle Governance (Initiate to Operate) | 13% | 5 |
| Assurance, Metrics, and Continuous Improvement | 12% | 5 |

Practice questions

Questions 1-25

Question 1

Topic: Lifecycle Governance (Initiate to Operate)

A project has just closed (AIPGF Stage 3: Evaluation). The team used a generative AI assistant to draft status reports and meeting minutes under an AI Assistance Plan with HITL review and a small prompt library.

At closure, the project manager archives the deliverables but does not document or share lessons learned about AI tool selection, actual usage patterns, and how effective the governance controls were. The organisation plans to reuse the same AI tool on several projects next month.

What is the most likely near-term impact of this omission?

  • A. The vendor will terminate the licence for breach of intellectual property
  • B. All realised benefits from the closed project will be immediately reversed
  • C. The AI assistant will degrade in accuracy because it was not retrained
  • D. Portfolio assurance cannot evidence control effectiveness, delaying reuse approvals

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In the Evaluation stage, documenting and sharing lessons learned is a key mechanism for demonstrating governance effectiveness and enabling safe scaling. If AI tool choices, usage, and control outcomes are not captured, the organisation lacks audit-ready evidence and repeatable guidance. Near term, this typically triggers additional assurance work and slows reuse, reducing trust and timely value realisation.

The core concept is Evaluation-stage learning for governance assurance: closing an AI-assisted project should leave a usable record of what was selected, how it was actually used, and whether controls (such as HITL review and data-handling rules) were effective. If these lessons are not documented and shared, stakeholders cannot confidently reuse the approach, because transparency and auditability are weakened.

In practice, the near-term effects are usually:

  • slower approvals to reuse the tool (extra reviews/re-testing)
  • reduced stakeholder trust in the governance process
  • repeated setup and avoidable risk exposure on the next projects

The closest tempting alternative is blaming technical model “degradation,” but the omission described is primarily a governance evidence gap, not a model lifecycle change.

Without captured lessons learned, there is little auditable evidence that controls worked, so assurance and governance are likely to require re-validation before reuse.


Question 2

Topic: Framework Structure and Controls

A delivery team is in the Activation stage and plans to use a generative AI tool to draft parts of a tender specification. The client’s assurance team requires strict auditability: they must be able to trace each AI-assisted clause back to what was input, what the AI produced, and who approved the final wording.

Which option is the most appropriate deliverable to produce in this stage to meet that requirement?

  • A. A benefits realisation review to confirm whether AI saved time
  • B. A stated aim to improve schedule performance through AI assistance
  • C. A workshop to train staff on effective prompt-writing techniques
  • D. An AI decision log capturing prompts, outputs, sources, and approvals

Best answer: D

What this tests: Framework Structure and Controls

Explanation: In Activation, governance is put into operation through concrete controls and artifacts. When strict auditability is the dominant requirement, the key deliverable is recorded evidence that links AI inputs and outputs to human decisions and approvals. A decision log provides the traceability and assurance trail auditors typically need.

The Activation stage focuses on implementing and operating AI governance so AI assistance can be used safely and credibly. A useful way to distinguish items within a stage is:

  • Aim: the outcome you want (e.g., efficiency, trust)
  • Activity: work you do (e.g., training, workshops)
  • Deliverable: an auditable artifact or control output (e.g., logs, plans, registers)

With strict auditability as the deciding factor, the most appropriate Activation-stage deliverable is an AI decision log (or equivalent record) that captures prompts/inputs, AI outputs, referenced sources where applicable, and HITL approvals for what is released. The key takeaway is that audits require evidence, not just intentions or meetings.

A decision log is a tangible Activation-stage artifact that provides end-to-end traceability and evidence for audit.
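As an illustrative sketch only (the field names, the JSONL file format, and the `log_ai_decision` helper are assumptions for this example, not artifacts mandated by the AIPGF), one way a team could capture the prompt-to-approval trail described above is:

```python
import datetime
import json

def log_ai_decision(log_path, prompt, ai_output, sources, approver, approved):
    """Append one audit-ready entry linking an AI input/output pair to a
    named human approval. Field names are illustrative, not AIPGF-mandated."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,          # what was input to the AI tool
        "ai_output": ai_output,    # what the AI produced
        "sources": sources,        # references cited, where applicable
        "approver": approver,      # accountable human (HITL)
        "approved": approved,      # final release decision
    }
    # One JSON object per line gives a simple, append-only audit trail
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

The point of the sketch is the linkage, not the tooling: whatever system is used, each released AI-assisted clause should resolve to an entry tying input, output, sources, and approver together.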


Question 3

Topic: Values, Behaviours, and Culture

A portfolio office is rolling out an AI assistant to draft weekly status reports and risk summaries for multiple projects. Some project managers are resisting because they feel AIPGF governance “slows delivery.”

The programme manager wants to maintain momentum while preserving controls (e.g., transparency of AI use, HITL review, and auditable decisions). Which action should they NOT take?

  • A. Update the AI Assistance Plan and decision log as adoption expands
  • B. Run a time-boxed pilot with HITL review and capture lessons
  • C. Let teams use the tool now and backfill approvals later
  • D. Pause all rollout until every project has completed training

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: Maintaining momentum in AI-assisted delivery means removing friction without removing governance. The anti-pattern is treating approvals and accountability as after-the-fact paperwork, because it breaks transparency and weakens HITL assurance. Momentum should come from proportional controls (pilot, coaching, clear guidance), not bypassing them.

In AIPGF-aligned change management, resistance is addressed by making adoption easy and safe: provide clarity on intended AI use, keep Human-in-the-Loop review for AI-assisted deliverables, and ensure decisions are traceable. “Speed” that comes from skipping or delaying approvals is false momentum, because it creates rework, undermines stakeholder confidence, and reduces auditability when AI output quality (e.g., hallucinations) is challenged.

Practical actions that preserve controls while sustaining pace include:

  • Start with a pilot to prove value and tune controls proportionately.
  • Train and coach users so behaviours match Human-Centricity and Transparency.
  • Keep the AI Assistance Plan and decision log current as scope expands.

The key takeaway is to streamline governance steps, not postpone or bypass them.

Retroactively approving AI-assisted outputs weakens control, auditability, and trust even if it feels faster.


Question 4

Topic: Lifecycle Governance (Initiate to Operate)

A programme team plans to use a generative AI assistant to draft status reports and first-pass requirements. The sponsor asks what AIPGF Stage 2 (Activation) is intended to achieve and how it fits with delivery/execution.

Exhibit: AI Assistance Plan (excerpt)

Stage gate: Move from Foundation to Activation
- Use cases approved; data classification agreed
- HITL roles assigned (Author, Reviewer, Accountable approver)
- Prompt/templates placed under version control
- Logging enabled for prompts/outputs; audit sample plan set
- Team onboarding scheduled; start controlled use in Sprint 1

Which interpretation best matches the intent of Stage 2 (Activation)?

  • A. Operationalise agreed controls and begin controlled use in delivery
  • B. Define scope, principles, and risk appetite before selecting tools
  • C. Freeze AI ways of working to avoid changes during execution
  • D. Validate benefits realised and decide whether to scale or retire

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Stage 2 (Activation) is where the team implements what was agreed in Foundation so AI-assisted work can start safely during delivery/execution. This includes enabling access, training, setting HITL approval routes, and putting monitoring/logging in place to support control and assurance as work products are created.

Activation is the “make it real” stage: it embeds the AI governance decisions from Foundation into the delivery/execution phase so teams can produce project outputs using AI under controlled conditions. The exhibit shows typical Activation outcomes—assigning HITL decision rights, placing prompts/templates under change control, enabling logging for auditability, and onboarding the team—so that AI use starts in sprints/work packages with oversight.

In delivery terms, Activation maps to executing and controlling work: the team begins using AI for approved use cases while applying the defined controls, evidence capture, and review/approval workflow. Evaluation comes later to assess performance, benefits, and whether to scale, adjust, or stop.

Activation turns Foundation decisions into day-to-day delivery controls, roles, and workflows so AI use can start under governance.


Question 5

Topic: Foundations of AI Project Governance (AIPGF)

A project team uses a generative AI setup to speed up weekly reporting. A chat-based LLM is connected to the project workspace and automatically:

  • drafts the status report and RAID updates
  • re-sequences tasks in the plan based on new risks
  • emails a “ready to send” update to external stakeholders

Recently, changes appeared in the schedule with no documented rationale, prompts cannot be produced for audit, the PMO says the tool was never approved, and stakeholders are challenging the credibility of the updates.

What is the most likely underlying cause of this AI governance breakdown?

  • A. Too many approvers were required for HITL sign-off
  • B. The team failed to save prompts and outputs in a repository
  • C. No agreed AI tool classification and AI Assistance Plan for an AI agent
  • D. Stakeholders lost trust after spotting hallucinations in a report

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The symptoms point to governance not being tailored to the AI tool category and level of autonomy. An integrated capability that updates plans and triggers communications behaves like an AI agent, which requires explicit decision rights, HITL checkpoints, and audit evidence (e.g., prompt/decision logs) within an AI Assistance Plan. Without that upfront governance, unapproved use, undocumented rationale, and stakeholder distrust are predictable outcomes.

In AIPGF, AI governance should start by identifying the AI tool category and the way it is used in the project, because different categories need different controls. A stand-alone LLM used for drafting is typically a copilot-style assist, while an integrated capability that changes project artefacts and initiates communications behaves like an AI agent (higher autonomy).

When an agent-like use is not formally captured in an AI Assistance Plan (Foundation/Activation), teams often miss essential governance controls:

  • defined HITL decision points and accountable approvers for changes
  • approved-tool and integration authorization (tool register/assurance)
  • prompt, output, and decision logging for auditability and traceability

The root cause is therefore misclassification/under-scoping of the AI use, not the downstream symptoms (trust issues or missing files).

The team treated an autonomous, integrated AI agent as a simple copilot/LLM use, so HITL sign-off, approved-tool controls, and prompt/decision logging were never defined.


Question 6

Topic: Assurance, Metrics, and Continuous Improvement

A public-sector PMO has 12 parallel projects using generative AI to draft business cases, requirements, and stakeholder comms. An internal assurance review finds:

  • Several projects used unapproved AI tools
  • Prompts/outputs are not logged, so decisions are not auditable
  • HITL sign-off for AI-assisted deliverables is inconsistent
  • Stakeholders report declining trust after inconsistent messaging
  • Benefits tracking shows smaller-than-expected time savings

The PMO wants to improve AI governance maturity across the whole portfolio. What is the most likely underlying cause of these symptoms?

  • A. Project teams failed to obtain HITL approval on key deliverables
  • B. No portfolio-level mechanism to share and standardise AI governance good practices
  • C. The selected AI tools are inherently unsuitable for project work
  • D. Stakeholders distrust AI because some outputs contained errors

Best answer: B

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: The pattern of inconsistent controls across many projects points to a systemic maturity gap, not a single project mistake. Continuous improvement at scale requires capturing lessons and assurance findings, then sharing standard practices (e.g., minimum controls, templates, decision rights) so every project applies a consistent baseline.

At portfolio scale, repeated AI governance failures (unapproved tools, missing prompt/output records, inconsistent HITL sign-off, and uneven benefits) usually indicate a weak continuous-improvement capability rather than isolated non-compliance. Good practice in AIPGF terms is to operationalise learning across projects: use assurance and evaluation findings to update shared guidance, and then embed it into activation for new work.

Practical mechanisms include:

  • A portfolio AI governance playbook (minimum controls and decision rights)
  • Standard artefacts (AI Assistance Plan, decision log, prompt/output logging guidance)
  • A community of practice and lessons-learned cadence
  • A feedback loop from assurance/metrics into updated standards

The key takeaway is that raising baseline maturity depends on systematic sharing and standardisation, not only correcting individual symptoms.

Without a shared playbook and feedback loop, projects apply inconsistent controls, tools, and assurance, producing the observed breakdowns at scale.


Question 7

Topic: Foundations of AI Project Governance (AIPGF)

You are in the Foundation stage of a customer-service transformation programme. The programme manager wants to use generative AI to speed up planning.

Constraints:

  • Only approved tools may be used.
  • Drafting internal PM artifacts may use AI, but any external-facing content must be human-approved (HITL).
  • Customer data is classified Confidential; no pasting into public tools.
  • AI use must be documented and agreed before the Activation governance gate.

Which is the BEST next action?

  • A. Use a public chatbot to draft customer emails with names removed
  • B. Start generating customer emails under HITL and document it later
  • C. Update AI Assistance Plan; separate PM vs deliverable AI; request gate approval
  • D. Treat it as a normal programme because AI is only drafting documents

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: An AI-assisted project/programme exists when AI is used to support project management and/or to create project deliverables. In Foundation, the right move is to explicitly record intended AI uses, distinguishing internal PM assistance from AI used in deliverables, and obtain the required governance agreement before the Activation gate while honoring HITL and data constraints.

In AIPGF, a project/programme is AI-assisted when AI is used to help manage the work (e.g., drafting a charter, plans, status reports) and/or to produce deliverables (e.g., customer communications, requirements content, design outputs). The distinction matters because AI used in deliverables typically needs stronger governance: clear decision rights, HITL approval, and evidence that confidentiality and stakeholder trust requirements are met.

Because this is the Foundation stage and the scenario states AI use must be agreed before the Activation gate, the best next action is to capture the intended AI usage in an AI Assistance Plan, explicitly separating:

  • AI for PM management artifacts
  • AI for deliverable content (external-facing)

Then route it through the Activation governance gate for approval and accountability.

It defines the work as AI-assisted, distinguishes PM assistance from deliverable use, and creates the required approval/evidence before Activation.


Question 8

Topic: Values, Behaviours, and Culture

A programme team is adopting generative AI to draft stakeholder updates and first-pass requirements. The AIPGF repository contains several templates, but the programme is small, time-boxed, and handles some commercially sensitive information.

The sponsor says: “We must follow the AIPGF, but I don’t want bureaucracy. Use the spirit of the framework.”

Which approach BEST reflects embracing the spirit of the AIPGF while maintaining appropriate governance?

  • A. Skip templates entirely and rely on team expertise, since speed is the priority
  • B. Tailor a lightweight AI Assistance Plan, define HITL approvals and data boundaries, and keep a simple decision/risk log
  • C. Let the AI tool publish routine updates directly to stakeholders to improve efficiency
  • D. Mandate full completion of every AIPGF template to prove compliance, regardless of project risk

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: Embracing the spirit of the AIPGF means achieving its governance outcomes (proportionate assurance, clear accountability, transparency, and human oversight) without treating templates as mandatory paperwork. A tailored, lightweight set of artifacts and controls can be sufficient when it is driven by risk and the use case. In this scenario, sensitive information and external communications still require defined boundaries and HITL approval.

The spirit of the AIPGF is outcome-focused: use professional judgement to tailor governance so AI is used ethically, efficiently, and effectively while still being controlled and auditable. Templates are aids, not the goal. In a small, time-boxed programme, you can keep governance lightweight, but you should still preserve core outcomes such as:

  • Clear decision rights and accountability for AI-assisted outputs (HITL)
  • Transparent rules on what data can be shared with AI tools
  • Proportionate assurance evidence (e.g., brief risk/decision records)

This balances Human-Centricity and Transparency with pragmatic delivery, rather than maximizing documentation or speed at the expense of safeguards.

It applies AIPGF intent by scaling controls to risk while preserving accountability, transparency, and auditability.


Question 9

Topic: Values, Behaviours, and Culture

You are in the Activation stage, producing a steering-committee progress report. A team member used a generative AI tool to draft the “Key Risks and Mitigations” section based on their notes.

Exhibit: AI Assistance Plan (excerpt)

Use case: Draft report narrative (AI-assisted)
Data classification: Confidential (no client identifiers in prompts)
Tool: Approved enterprise AI only
HITL: PM to sense-check; Risk Owner to validate content
Governance gate: Steering committee pack requires evidence of validation

The draft includes confident-sounding risk statements and two mitigations that seem “too generic.” Stakeholder trust is fragile after a recent reporting error.

What is the BEST next action?

  • A. Paste the confidential risk register into AI for accuracy
  • B. Submit the draft with an AI-use disclaimer in the footer
  • C. Validate each risk with owners and log evidence before submission
  • D. Re-prompt the AI to produce more specific mitigations

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: Sensibility behaviours require treating AI-assisted text as a draft that must be sense-checked and validated by accountable humans before it becomes an official deliverable. Here, the AI Assistance Plan mandates HITL validation by risk owners and evidence for the governance gate, and stakeholder trust is already fragile. The best next step is to verify the content against authoritative sources and record the validation trail before submission.

In AIPGF, sensibility behaviours mean avoiding overconfidence in AI outputs, actively sense-checking for plausibility, and validating against authoritative sources with clear accountability. In the Activation stage, AI can accelerate drafting, but the deliverable still must meet governance expectations and maintain stakeholder trust.

Given the AI Assistance Plan constraints, the PM should:

  • Perform an initial plausibility check (e.g., specificity, alignment to project context).
  • Ask each accountable risk owner to confirm or correct risks/mitigations.
  • Record validation evidence (e.g., decision log/approvals) to pass the governance gate.

Improving prompts can help wording, but it does not replace human validation and accountable approval for AI-assisted content.

This applies HITL sensibility by checking AI output against accountable sources and capturing validation evidence required by the governance gate.


Question 10

Topic: Roles, Responsibilities, and Accountabilities

A project team plans to procure a third-party generative AI tool to draft project deliverables. Before any project information is uploaded, the procurement pack requires the supplier to provide evidence of: where prompts/outputs are stored, whether data is used for model training, security controls, retention/deletion, IP ownership of outputs, and what audit logs the project can access.

Which AIPGF principle is this practice primarily applying?

  • A. Adaptability
  • B. Human-Centricity
  • C. Transparency
  • D. Evaluation stage intent

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: This procurement approach is about ensuring the supplier’s AI use is explainable and inspectable through clear disclosures and evidence (data flows, retention, training use, IP terms, and audit logs). That is the essence of the AIPGF Transparency principle, enabling informed approval and ongoing assurance when using third parties.

In AIPGF, third-party AI tools introduce governance risks (confidentiality, privacy, IP leakage, security exposure, and limited auditability). The Transparency principle addresses these risks by requiring clear, evidence-backed visibility of how the AI service operates and handles project information.

In practice this means requesting (and contracting for) items such as:

  • Data processing locations, retention/deletion, and training-use policy
  • Security controls and incident reporting
  • Ownership/licensing of AI outputs and supplier indemnities
  • Accessible logs and rights to audit/assurance evidence

Human-Centricity and Adaptability may still matter, but the described control is primarily about disclosure and audit-ready evidence.

It focuses on making third-party data handling, IP, security, and auditability visible and evidenced so decisions can be assured.


Question 11

Topic: Foundations of AI Project Governance (AIPGF)

You are onboarding to a project where a generative AI assistant will be used. Review the excerpt from the AI Assistance Plan.

AI Assistance Plan (excerpt)
Use case 1: Draft Project Charter (v0.x) from sponsor notes
Use case 2: Propose initial project risks and mitigations
Use case 3: Create stakeholder map (influence/interest grid)
Use case 4: Draft weekly status update for PM review
HITL: PM/risk owner validate before sharing externally

Which interpretation best reflects how the AI tool is intended to assist project work?

  • A. Autonomously approve and publish project deliverables
  • B. Create first drafts of key project documents and analyses
  • C. Build and deploy a machine-learning model for the business
  • D. Replace stakeholder engagement with automated messaging

Best answer: B

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The exhibit lists typical AI-assisted project tasks: drafting a charter, proposing risks, mapping stakeholders, and drafting communications. These are supportive activities that accelerate PM work while keeping accountability with people. The HITL line confirms humans validate outputs before external use.

In AI-assisted projects, a common pattern is using AI to accelerate knowledge work by creating first drafts and structured analyses, then applying Human-in-the-Loop review before anything is treated as an agreed project artifact. In the exhibit, the AI is explicitly planned to:

  • Draft the Project Charter from sponsor notes
  • Propose initial risks and mitigations
  • Produce a stakeholder influence/interest map
  • Draft a weekly status update for the PM to review

These are standard examples of AI supporting project delivery (document drafting, risk assessment support, stakeholder mapping, and communications), not autonomous decision-making or replacing engagement. The key takeaway is “AI drafts; humans validate and approve.”

The plan shows AI drafting the charter, risks, stakeholder mapping, and status communications with human validation.


Question 12

Topic: Assurance, Metrics, and Continuous Improvement

A PMO is rolling out AI assistance (summarising meetings, drafting status reports, and generating RAID items) across multiple projects. The assurance lead proposes using the AI Project Governance Framework (AIPGF) plus the AI Project Governance Capability Maturity Model (AIPG-CMM) to tailor controls and plan continuous improvement.

The sponsor says, “We’re already mature enough on governance—just implement the framework.”

What should the assurance lead ask to verify first to apply the AIPG-CMM in a way that complements the AIPGF?

  • A. The detailed sprint backlog for the next quarter
  • B. Which model architecture the AI tools are built on
  • C. Current and target AIPG-CMM maturity, with evidence
  • D. The AI vendor’s roadmap for the next 12 months

Best answer: C

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: AIPG-CMM is a capability maturity model used to assess how developed an organisation’s AI governance practices are and to define a realistic improvement path. To complement AIPGF, you first need a baseline maturity level (supported by evidence) and an agreed target maturity so governance activities, assurance, and metrics can be proportionate to current capability and risk appetite.

The AIPG-CMM is a capability maturity model for AI project governance: it helps an organisation assess its current governance capability (what practices exist and how consistently they are applied) and define a target maturity level to guide improvement. It complements the AIPGF by informing how rigorously to apply AIPGF controls and artifacts across the life cycle (Foundation, Activation, Evaluation), and by turning “use the framework” into a measurable improvement roadmap.

To use a maturity model effectively, the first input is the maturity baseline and the intended target, supported by evidence (e.g., repeatable processes, decision rights, assurance checks, metrics, and auditability). Without that, tailoring and continuous improvement become guesswork.

AIPG-CMM complements AIPGF by baselining and targeting governance capability so controls and improvements can be scaled appropriately.
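For illustration only (the capability dimensions and 1-5 levels below are hypothetical examples, not official AIPG-CMM content), the baseline-versus-target comparison described above amounts to a simple gap calculation that tells you where to direct improvement effort first:

```python
# Hypothetical capability dimensions scored on an assumed 1-5 maturity scale
current = {"decision_rights": 2, "prompt_logging": 1, "assurance_reviews": 2}
target = {"decision_rights": 3, "prompt_logging": 4, "assurance_reviews": 3}

# Gap per dimension = target level minus evidenced baseline level
gaps = {dim: target[dim] - current[dim] for dim in current}

# Largest capability gap first: this ordering drives the improvement roadmap
priority = sorted(gaps, key=gaps.get, reverse=True)
```

The sketch shows why evidence matters: without a defensible baseline score per dimension, the gaps (and therefore the roadmap) are guesswork.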


Question 13

Topic: AI in Projects and Organizations (Context)

A programme is using an enterprise-approved generative AI assistant to help produce project documentation. The AI Assistance Plan states:

  • Only work on content classified “Internal” (no personal data or client confidential data)
  • All AI outputs must be reviewed and clearly marked as AI-assisted
  • The accountable role (e.g., sponsor/board) makes decisions; AI is decision support only (HITL)

Which use of the AI should the project manager AVOID?

  • A. Submitting an AI-generated benefits update with unverified estimates and no human review
  • B. Summarising internal workshop notes for circulation after a human review and clear AI-assisted labelling
  • C. Drafting a first-pass status report from existing internal metrics, then reviewing and marking it AI-assisted
  • D. Using AI to propose decision options and risks for a change request, then having the change authority decide

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: Under AIPGF, AI may assist documentation and decision support only with Human-in-the-Loop controls and transparent labelling. The use to avoid is any practice that lets unverified AI output directly drive governance decisions. Submitting AI-generated benefits figures without evidence or review creates assurance, trust, and auditability failures.

The core distinction is between acceptable AI assistance (drafting, summarising, proposing options) and unacceptable AI substitution for accountable judgment or evidence. In AI-assisted projects, AI outputs can be non-deterministic and may hallucinate, so governance requires HITL review, clear labelling, and decision rights staying with accountable roles. In this scenario, drafting content from approved internal sources and then reviewing/marking it fits Human-Centricity and Transparency and supports auditability. By contrast, sending unreviewed, unverified AI-generated benefits estimates to a steering group turns AI into an implicit decision-maker and introduces unsupported claims into formal reporting, undermining assurance and stakeholder trust.

This bypasses HITL and transparency and risks hallucinated/unsupported figures influencing governance decisions.


Question 14

Topic: Values, Behaviours, and Culture

In the APMG AI Project Governance Framework (AIPGF), which term best describes the expectation to tailor AI governance to the project’s context while still remaining aligned with organisational policy and relevant external standards?

  • A. AIPG-CMM maturity level
  • B. Adaptability (principle)
  • C. Human-Centricity (principle)
  • D. Transparency (principle)

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: The AIPGF principle of Adaptability focuses on pragmatic tailoring: applying the framework in a way that fits the project’s size, risk, and maturity while still complying with mandated organisational controls and external standards. It supports “fit for purpose” governance rather than a one-size-fits-all approach.

Adaptability in AIPGF is the principle that governance for AI-assisted projects should be scaled and tailored to real project conditions (e.g., use-case criticality, data sensitivity, uncertainty, and organisational AI maturity) while staying within non-negotiable constraints such as organisational policy, assurance requirements, and applicable external standards. In practice, it drives proportional controls and evidence: more rigor where risk is higher, lighter touch where risk is low, but never bypassing required approvals, safeguards, or auditability. This embodies the framework’s pragmatic sensibility—use what is needed to govern effectively, not what is merely possible or fashionable.

Adaptability is tailoring governance proportionally to context without breaking required policies or standards.


Question 15

Topic: Foundations of AI Project Governance (AIPGF)

A programme is introducing an AI tool to draft personalised eligibility decision letters to citizens. The letters will be sent externally and may include personal data.

Exhibit: AI Assistance Plan (excerpt)

Use case: Draft eligibility decision letters
Output use: Sent to citizens with minimal edits
Data: Names, addresses, case notes (PII)
Risk/impact: High (wrong decision may cause harm)
Proposed oversight: Human-on-the-loop
Control: Weekly sample audit of 5% of letters

Based on the exhibit, what is the best next governance action?

  • A. Change to human-in-the-loop approval before any letter is sent
  • B. Keep human-on-the-loop but increase the audit sample size
  • C. Keep human-on-the-loop because the tool only drafts and humans can edit
  • D. Defer oversight decisions until the Evaluation stage proves accuracy

Best answer: A

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The exhibit states the output will be sent externally with minimal edits and has high impact if wrong, using personal data. That combination calls for human-in-the-loop oversight so a human explicitly reviews and approves each AI-assisted letter (and its underlying decision) before release, rather than relying on after-the-fact monitoring.

Human-in-the-loop (HITL) means a human must review and approve each AI-assisted output (or the decision it supports) before it is used, creating a hard control point. Human-on-the-loop (HOTL) is supervisory monitoring (sampling, dashboards, exception review) where outputs can proceed without per-item approval.

In the exhibit, the AI output is used externally, includes PII, will be sent with minimal edits, and the impact is rated high. In AIPGF terms, this risk/impact profile requires HITL for release/communication decisions so accountability is explicit and preventable harms are stopped before dissemination. HOTL sampling can still be added as an additional assurance layer, but it is not sufficient as the primary control here.

High-impact, externally used outputs with PII require per-item human review/decision, not only retrospective sampling.


Question 16

Topic: AI in Projects and Organizations (Context)

During Activation of a public-sector procurement project, the team uses a cloud GenAI assistant to summarise bidder proposals and draft evaluation notes. The AI Assistance Plan lists the information as “Official” but leaves Records retention / prompt-output archiving as “TBD”, and no archive is configured. Two weeks later, a supplier requests the evaluation record under procurement rules.

What is the most likely near-term impact?

  • A. Schedule slippage caused by extensive AI tool retraining
  • B. Limited ability to evidence the scoring rationale to audit/challenge
  • C. Improved stakeholder confidence due to faster turnaround times
  • D. Gradual reduction in summary quality as the model drifts

Best answer: B

What this tests: AI in Projects and Organizations (Context)

Explanation: Leaving records retention and prompt/output archiving undefined creates an immediate auditability gap. In regulated settings like public procurement, the team must be able to produce decision records and supporting evidence on request. Without retained AI-assisted evaluation artefacts, the project’s risk exposure to audit findings or supplier challenge rises quickly.

A common legal/compliance challenge with AI-assisted work is records retention: if AI-generated or AI-assisted content contributes to decisions, it becomes part of the project record. In public-sector procurement, transparency and traceability are especially important because decisions can be audited or challenged.

When prompts, outputs, and HITL approvals are not retained, the near-term consequence is an inability to evidence how conclusions were reached (who approved what, using which inputs). A proportionate governance response is to update the AI Assistance Plan to specify:

  • what AI interactions are retained (prompts, outputs, versions)
  • where they are stored and for how long
  • who can access them and how approvals are logged

The key takeaway is that missing retention controls shows up quickly as an auditability and challenge-risk issue, not as a performance or training problem.

Without retained prompts/outputs and AI-assisted notes, the project lacks an auditable record to justify procurement decisions.
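The three retention bullets above can be sketched as a minimal record structure. This is an illustrative sketch only; the field names are assumptions, not AIPGF-mandated terms, and the actual retention fields would come from the organisation's AI Assistance Plan.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteractionRecord:
    """One retained AI-assisted evaluation artefact.

    Field names are illustrative; a real plan would define the
    exact retention schema, storage location, and access rules.
    """
    prompt: str            # what was asked of the tool
    output: str            # what the tool produced (versioned)
    tool_version: str      # which approved tool/model was used
    author: str            # who ran the interaction
    hitl_approver: str     # who approved the AI-assisted content
    retention_until: str   # per the plan's retention period
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example drawn from the procurement scenario above
record = AIInteractionRecord(
    prompt="Summarise bidder proposal against evaluation criteria 1-4",
    output="Bidder meets criteria 1, 2 and 4; criterion 3 is partial.",
    tool_version="corp-genai-sandbox-2.1",
    author="evaluator.one",
    hitl_approver="lead.evaluator",
    retention_until="2032-03-31",
)
```

With records like this retained in the project repository, the team can answer a supplier's request by showing who approved what, using which inputs, on which tool version.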


Question 17

Topic: Principles for Responsible and Trustworthy AI

During Activation, a programme team plans to use a generative AI tool to produce weekly recommendations for schedule recovery actions. The sponsor will only approve use if the team can show that each AI-assisted recommendation is transparent—it includes the underlying rationale and supporting evidence (not just suggested actions) and is checked via HITL.

Which artifact/evidence best validates this control is designed and ready to operate?

  • A. Benefits tracker showing time saved using AI each week
  • B. AI Assistance Plan requiring rationale, evidence, and HITL sign-off
  • C. RACI naming an AI tool owner and an approver
  • D. Decision log with pasted AI outputs for past meetings

Best answer: B

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Transparency in AI-assisted recommendations requires more than an output—it needs documented rationale, evidence/source traceability, and a defined human check before recommendations are adopted. An AI Assistance Plan is purpose-built to specify these mandatory fields and the HITL review steps, demonstrating operational readiness of the control rather than outcomes or roles alone.

Under the Transparency principle, AI-assisted recommendations should be explainable and defensible: what is being recommended, why, and what evidence supports it. In AIPGF, the strongest readiness evidence is an artifact that defines the control so it can be executed consistently (not just a record of past text).

An AI Assistance Plan can operationalise this by specifying, for the recommendation workflow:

  • required content (recommendation, rationale, evidence/sources, assumptions/limits)
  • traceability expectations (links to project data or referenced documents)
  • HITL review and approval before recommendations are used

Artifacts that only show benefits, roles, or unstructured AI text do not demonstrate that the transparency control will reliably occur each time.

It directly evidences a defined, repeatable control that forces AI recommendations to include rationale and sources, with human review before use.
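The mandatory-fields idea above can be expressed as a simple pre-review check. This is a minimal sketch under assumed field names (the plan, not AIPGF, would define the actual required content):

```python
# Mandatory transparency fields, as an AI Assistance Plan might define them
# (names here are illustrative assumptions)
REQUIRED_FIELDS = {"recommendation", "rationale",
                   "evidence_sources", "assumptions_limits"}

def ready_for_hitl_review(artifact: dict) -> bool:
    """True only if every mandatory field is present and non-empty,
    so an approver never signs off a bare suggested action."""
    return all(artifact.get(f) for f in REQUIRED_FIELDS)

draft = {
    "recommendation": "Re-sequence work package 3 ahead of 2",
    "rationale": "WP3 has no external dependency and frees the critical path",
    "evidence_sources": ["schedule-v12", "risk-0041"],
}
ready_for_hitl_review(draft)  # False: assumptions_limits is missing
```

The point of the sketch is that the control is defined and repeatable: an incomplete recommendation is rejected before it ever reaches the accountable approver.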


Question 18

Topic: AI in Projects and Organizations (Context)

A PMO has completed two successful pilots using a generative AI assistant to draft project status reports and risk summaries. The CIO now wants to roll this capability out to 40 projects and asks for evidence that the organization has progressed from “piloting” to “scaling” so governance depth can be increased proportionately (e.g., clearer decision rights, consistent HITL checks, and monitoring).

Which artifact/evidence best validates this readiness in the current context?

  • A. A completed AIPG-CMM maturity assessment with objective evidence
  • B. A Benefits Tracker forecasting time savings from portfolio-wide rollout
  • C. A dashboard showing number of active users and prompts per week
  • D. An AI Assistance Plan from one of the pilot projects

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: To tailor governance depth, the PMO needs evidence of organizational AI adoption maturity, not just pilot documentation or usage/benefit signals. An AIPG-CMM assessment is designed to validate whether repeatable controls, roles, and assurance practices exist to support scaling. It provides a structured basis to justify increasing governance beyond pilot-level oversight.

AI adoption maturity is reflected in whether the organization can apply AI governance consistently beyond isolated pilots. Moving from “piloting” to “scaling” typically requires evidence of repeatable, organization-level capabilities such as defined decision rights and accountabilities, standardized HITL validation, consistent risk and data controls, and readiness to monitor performance and issues across many projects.

An AIPG-CMM (capability/maturity) assessment is the most direct validation because it evaluates these capabilities in a structured way and can be supported by objective evidence (e.g., documented processes, approvals, training/assurance practices, monitoring routines). This is more suitable for proving scale-readiness than artifacts that describe a single pilot or measure adoption/benefits without demonstrating control effectiveness.

A maturity assessment directly evidences whether governance capabilities are in place to move from piloting to scaling.


Question 19

Topic: Framework Structure and Controls

You are in Foundation for a programme that will use a generative AI assistant to help draft supplier evaluation summaries that feed a £30m contract award recommendation.

Constraints:

  • Data classification: Commercial-in-Confidence; data must stay in the corporate AI sandbox
  • Approved tools: only the corporate AI sandbox is approved (no public LLMs)
  • HITL: humans must verify any AI-assisted content before use
  • Governance gate: the Activation Go/No-Go requires evidence proportionate to risk; the procurement director has requested independent assurance to protect stakeholder trust after a previous supplier challenge

What is the BEST next action?

  • A. Commission independent assurance and record it in the AI Assistance Plan
  • B. Proceed with internal peer review and capture approvals in the decision log
  • C. Switch to an approved tool now and skip independent assurance
  • D. Defer assurance until after award, during Evaluation stage lessons learned

Best answer: A

What this tests: Framework Structure and Controls

Explanation: Because the AI-assisted outputs influence a high-value, challenge-prone procurement decision, the risk/impact is high and stakeholder confidence is a stated constraint. In AIPGF, that combination justifies independent assurance, planned up front and available as evidence at the Activation Go/No-Go gate, alongside the HITL arrangements.

In AIPGF, assurance is scaled to the decision’s risk/impact and to stakeholder expectations for trust and auditability. Here the AI assistance supports a major contract award recommendation, using Commercial-in-Confidence information, and the procurement director explicitly expects independent assurance due to prior challenge history. The best next action in Foundation is therefore to plan and initiate independent review/assurance (and capture it in the AI Assistance Plan) so that the required evidence exists before the Activation Go/No-Go.

Practical outcomes to evidence at the gate include:

  • the agreed assurance scope (what is independently checked)
  • clear HITL verification and accountable approvers for the recommendation
  • traceable records (assumptions, prompts/inputs, decisions)

Tool choice matters, but it does not remove the need for proportionate independent assurance when the decision and trust stakes are high.

High-impact, contested decisions plus explicit stakeholder expectations require independent assurance to be planned and evidenced before the Activation gate.


Question 20

Topic: Assurance, Metrics, and Continuous Improvement

A programme is using a generative AI tool to draft status reports and risk narratives. The PMO adds a quarterly review to its AI Assistance Plan to sample AI-assisted outputs, check compliance with data-handling rules, log findings, assign owners to corrective actions, and confirm changes have been embedded in ways of working.

In AIPGF, this practice most directly maps to which life cycle stage intent?

  • A. Foundation stage intent
  • B. Transparency principle
  • C. Evaluation stage intent
  • D. Activation stage intent

Best answer: C

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: The described activity is an ongoing assurance mechanism: scheduling periodic reviews, sampling outputs, documenting findings, and ensuring corrective actions are implemented. That aligns with the Evaluation stage intent of monitoring AI usage in operation and driving continuous improvement based on evidence. The emphasis is not initial setup or rollout, but sustained review and follow-through.

Planning periodic audits/reviews and following up on findings is a core Evaluation-stage activity in AIPGF. Once AI is being used to support project delivery, governance should include a repeatable cadence to test whether AI assistance is still being used as intended, whether controls are working (e.g., data handling and quality checks), and whether issues are being corrected rather than merely observed.

A good Evaluation loop typically includes:

  • A defined review cadence and sampling approach
  • Documented findings with accountable owners
  • Corrective/preventive actions tracked to completion
  • Updates to the AI Assistance Plan, risk register, and ways of working

This goes beyond “being transparent” and focuses on evidence-based assurance and continuous improvement at scale.

It is a planned, periodic assurance loop that audits AI use and tracks follow-up actions for continuous improvement.


Question 21

Topic: Roles, Responsibilities, and Accountabilities

A programme has three projects using generative AI to draft status reports and requirements summaries. A project team proposes using a public cloud AI assistant to speed up document drafting.

Exhibit: AI Assistance Plan (decision rights excerpt)

Decision: Approve AI tool for INTERNAL use (no sensitive data)
Authority: Project Manager
Decision: Approve shared prompts/templates across projects
Authority: Programme Manager
Decision: Approve EXTERNAL/public AI use or sensitive/client data
Authority: Portfolio AI Governance Board

Based on the exhibit, what is the best next action?

  • A. Have the Project Manager approve the tool for immediate use
  • B. Have the Programme Manager approve, since multiple projects use AI
  • C. Escalate the decision to the Portfolio AI Governance Board
  • D. Allow use now and review the decision at the Evaluation gate

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The exhibit defines decision rights by risk and scope. Because the proposal involves a public/external AI assistant and could involve sensitive or client data, the approval must be made at the portfolio governance level. This ensures consistent risk appetite, assurance, and accountability beyond a single project or programme.

AIPGF decision rights should place higher-risk, higher-impact, and cross-organisational choices at higher governance levels to match accountability and risk appetite. The exhibit explicitly distinguishes routine internal AI use (project level) from programme-level standardisation decisions (shared prompts/templates) and from decisions that materially increase exposure (external/public AI use or sensitive/client data). Since the proposed tool is public/external and the scenario involves drafting project deliverables that may contain client or sensitive information, the decision must be escalated to the portfolio AI governance body for approval and any required controls (e.g., data handling constraints, HITL checks, assurance evidence).

When decision rights are clear in an AI Assistance Plan, the correct action is to follow the assigned authority rather than shortcutting via local approvals.

The plan assigns external/public AI use or sensitive/client data decisions to the portfolio governance level.
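The exhibit's decision-rights excerpt is effectively a routing table, and can be sketched as one (an illustrative sketch only; the authority names come from the exhibit, the function and parameters are hypothetical):

```python
def approval_authority(external_or_public: bool,
                       sensitive_or_client_data: bool,
                       cross_project_standardisation: bool) -> str:
    """Route an AI-use decision to the authority named in the
    AI Assistance Plan excerpt. The highest-risk rule wins, so
    external/public use or sensitive data always escalates."""
    if external_or_public or sensitive_or_client_data:
        return "Portfolio AI Governance Board"
    if cross_project_standardisation:
        return "Programme Manager"
    return "Project Manager"

# The scenario: a public cloud AI assistant drafting deliverables
# that may contain client data -> portfolio-level escalation
print(approval_authority(external_or_public=True,
                         sensitive_or_client_data=True,
                         cross_project_standardisation=False))
```

Checking the highest-risk conditions first mirrors the governance intent: a decision can never be approved locally just because it also happens to match a lower-risk rule.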


Question 22

Topic: Assurance, Metrics, and Continuous Improvement

An internal AIPGF Evaluation-stage assessment of an AI-assisted programme management office (PMO) found recurring issues across multiple workstreams.

Exhibit: Assessment findings (excerpt)

- AI tool usage: 3 teams used unapproved generative AI tools.
- HITL: No named accountable approver for AI-generated schedule/risk updates.
- Traceability: Prompts/outputs not stored; decisions cannot be audited.
- Stakeholders: Sponsors report low confidence in AI-assisted reports.
- Benefits: Forecasted cycle-time reduction not evidenced.

Based on these results, what is the most likely underlying cause of the governance breakdown?

  • A. Teams are using undocumented prompts and not storing outputs
  • B. Benefits tracking was started too late to evidence improvements
  • C. Missing AI Assistance Plan defining roles, decision rights, and required controls
  • D. The generative AI tool is unreliable, causing stakeholder distrust

Best answer: C

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: The findings show systemic failures across approvals, traceability, and tool selection—not a single team behavior or tool defect. In AIPGF terms, this points to a gap in governance design and activation: roles and decision rights for HITL sign-off were not defined, and minimum controls (approved tools, prompt/output logging, assurance evidence) were not set up. An AI Assistance Plan is the primary artifact that consolidates these requirements proportionately.

Assessment results that show multiple, repeating control failures (unapproved tools, no accountable HITL approver, no auditable records, low trust, and weak benefits evidence) typically indicate a governance activity gap rather than an isolated operational issue. In AIPGF, this most often traces back to inadequate Foundation/Activation work: the programme did not establish and embed an AI Assistance Plan (and supporting RACI/decision log expectations) that defines who can approve AI use, who signs off AI-assisted outputs, what must be recorded for transparency, and how benefits will be evidenced. When decision rights and required controls are unclear, teams improvise, assurance evidence is missing, and stakeholder confidence drops. Fixing the root cause requires defining and operationalising governance, not just addressing individual symptoms.

The pattern of unapproved use, absent HITL accountability, and no audit trail indicates governance was not designed and embedded via an AI Assistance Plan.


Question 23

Topic: Roles, Responsibilities, and Accountabilities

A programme team uses a generative AI tool to draft the executive summary and key claims for a Board paper requesting approval to proceed to procurement (a go/no-go decision). The organisation requires a clear audit trail showing who approved each claim and what evidence was used.

Which action best defines HITL responsibility for reviewing and signing off this AI-assisted deliverable?

  • A. Add a disclaimer that the paper was AI-assisted and proceed without a named approver to avoid delaying the decision
  • B. Ask the AI tool vendor to certify the accuracy of the Board paper before submission
  • C. Let the project manager approve the AI-generated executive summary because it is a project deliverable
  • D. Name an accountable human approver (e.g., Sponsor) to review, amend, and sign off the final Board paper, recording the approval and evidence in the decision log

Best answer: D

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: For AI-assisted outputs used to support a high-impact go/no-go decision, AIPGF expects explicit Human-in-the-Loop accountability: a named human with decision rights must review, adjust as needed, and approve the final content. Because the organisation requires strict auditability, the approval and supporting evidence should be captured in governance records (e.g., decision log and the AI Assistance Plan’s defined sign-off points).

HITL accountability in AIPGF means AI can assist, but a designated human remains responsible for the delivered content and its consequences. When an AI-assisted artefact is used to influence a high-impact decision (such as a Board go/no-go) and the organisation requires auditability, governance must make the approval chain explicit.

The appropriate control is to:

  • define the approver and decision rights in the AI Assistance Plan (who reviews, who signs off)
  • ensure a competent human reviews and edits the AI-assisted content against evidence
  • record the approval, rationale, and supporting evidence in an auditable record (e.g., decision log)

This preserves transparency and accountability while still enabling AI assistance; “someone else” (tool vendor or general team ownership) cannot replace accountable human sign-off.

A high-impact, auditable decision requires a clearly accountable human sign-off with recorded rationale and evidence, not an AI tool or delegated “ownership” without approval traceability.


Question 24

Topic: Framework Structure and Controls

You are the PM for a programme that wants to use a generative AI assistant to speed up delivery by drafting a business case, stakeholder communications, and weekly status reports. No AI Assistance Plan exists yet, and the sponsor has only said, “Use AI wherever it helps.”

To start creating the AI Assistance Plan, what should you ask/confirm FIRST?

  • A. Whether IT can deploy a custom model to improve accuracy
  • B. Which monitoring KPIs will be used during post-implementation evaluation
  • C. Which vendor offers the lowest-cost AI tool for a quick pilot
  • D. Which tasks and outputs will be AI-assisted, and required HITL approvals

Best answer: D

What this tests: Framework Structure and Controls

Explanation: An AI Assistance Plan starts by defining the objectives and scope of AI assistance: what work AI will support, what outputs it may draft, and where humans must review/approve (HITL). Without that boundary, you cannot proportionately set approved tools, data handling rules, oversight, or monitoring expectations.

The first step in creating an AI Assistance Plan is to make the AI use explicit and bounded: the intended objectives and the scope of AI assistance. In this scenario, “use AI wherever it helps” is too vague to govern, so you must confirm which project activities and deliverables AI will support and where human review/approval is mandatory (HITL).

Once the scope is clear, you can then derive the rest of the plan proportionately:

  • identify approved tools that fit the use cases
  • define data rules based on what information will be used
  • assign oversight/decision rights for accepting AI-assisted outputs
  • set monitoring appropriate to the AI-supported deliverables

Anything that jumps straight to tool selection, custom builds, or monitoring details is premature without an agreed AI-assistance scope.

Clarifying the objectives and scope of AI assistance (including HITL decision points) is the foundation for selecting tools, setting data rules, and defining monitoring.


Question 25

Topic: Foundations of AI Project Governance (AIPGF)

You are in the Activation stage of a programme preparing a client-facing benefits case. The team wants to use a generative AI tool to draft the “commercial assumptions” section.

Constraints:

  • HITL: All client-facing AI-assisted content must be reviewed and approved by a named accountable SME.
  • Approved tools: Only the organisation’s approved enterprise AI environment may be used.
  • Data classification: The section includes Confidential client pricing and contract terms.
  • Governance gate: An assurance review is scheduled tomorrow, and the sponsor has raised concerns about “made-up facts.”

What is the BEST next action?

  • A. Draft it in a free public AI tool using anonymised text
  • B. Use the enterprise AI, then submit without SME review
  • C. Update the AI Assistance Plan and enforce SME approval
  • D. Remove all commercial assumptions from the document for now

Best answer: C

What this tests: Foundations of AI Project Governance (AIPGF)

Explanation: The situation highlights two key AI output risks: confidentiality/IP leakage from using an unapproved public tool and hallucination in client-facing commercial claims. In Activation, the right governance response is to document and control AI use through an AI Assistance Plan and to enforce HITL review by an accountable SME, producing evidence for assurance at the gate.

AI-assisted drafting of commercial assumptions creates high-impact risks: (1) hallucinated or biased assumptions that could mislead the client, and (2) confidentiality/IP leakage if sensitive pricing or contract terms are entered into an unapproved tool. In the Activation stage, governance should operationalise controls, decision rights, and evidence for assurance.

The best next action is to update and apply the AI Assistance Plan to ensure:

  • only the approved enterprise AI environment is used (data protection/control),
  • Confidential data is minimised and handled per classification,
  • a named SME performs HITL review and approval for accuracy and accountability,
  • the approach is auditable for the governance gate (what was used, how validated).

This builds stakeholder trust while keeping delivery moving under controlled, accountable use.

It addresses confidentiality and hallucination risks by using approved tooling, documenting controls, and enforcing HITL accountability before the governance gate.

Questions 26-40

Question 26

Topic: Framework Structure and Controls

In the APMG AI Project Governance Framework (AIPGF), which term best describes the governance artifact that documents the objectives and scope of AI use in a project, the approved AI tools, data handling rules, required oversight (including Human-in-the-Loop), and the approach to monitoring AI-assisted outputs?

  • A. AI risk register
  • B. Project Management Plan
  • C. Model card
  • D. AI Assistance Plan

Best answer: D

What this tests: Framework Structure and Controls

Explanation: An AI Assistance Plan is the AIPGF governance artifact that sets the boundaries and controls for AI use in an AI-assisted project. It clarifies what AI will be used for, which tools are approved, what data may be used, and how humans will review and monitor AI-assisted outputs across the work.

The core concept is that AI-assisted work needs explicit, auditable controls beyond a normal project plan because AI outputs can be non-deterministic and may introduce confidentiality, privacy, bias, or hallucination risks. In AIPGF, the AI Assistance Plan is the primary artifact used to govern AI use on a project, documenting in one place:

  • the intended objectives and scope for AI assistance
  • the approved tools and usage constraints
  • the data handling rules (including sensitive and proprietary information)
  • the required Human-in-the-Loop oversight and decision rights
  • how AI-assisted outputs will be monitored and reviewed over time

It provides a clear basis for stakeholder assurance and for consistent, repeatable use of AI within agreed guardrails.

Key takeaway: it is about governing AI use and oversight, not merely planning the project or describing a model.

It is the artifact that defines how AI may be used, with approved tools, data rules, oversight, and monitoring expectations.


Question 27

Topic: Roles, Responsibilities, and Accountabilities

A programme team uses a generative AI assistant to draft weekly highlight reports and risks. In the last month, the following occurred:

  • The sponsor challenged a report but no one could show a HITL approval record.
  • Prompts and outputs were not captured in the project repository.
  • A team member used an unapproved AI tool “because it was faster.”
  • Stakeholders now distrust the reports, and expected time-saving benefits are not being realised.

The PM says, “We already have a sponsor, PM, and PMO—so I assumed AI governance would be covered.”

What is the MOST likely underlying cause of this AI governance breakdown?

  • A. Stakeholders were not trained on how AI supports the programme
  • B. AI decision rights weren’t assigned to existing project roles
  • C. The AI tool’s output quality was too poor for reporting
  • D. The team failed to store prompts in a central repository

Best answer: B

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The symptoms point to missing accountability rather than a single operational lapse. In AIPGF, AI governance should be integrated into existing project roles by defining decision rights for AI tool approval, HITL sign-off, and audit evidence (e.g., prompt/output capture). When those responsibilities are not explicitly assigned, teams improvise, controls are skipped, and trust and benefits erode.

This is a roles-and-accountabilities failure: the programme has standard roles (sponsor, PM, PMO) but has not explicitly embedded AI governance responsibilities into them. In AIPGF terms, AI-assisted outputs still need clear decision rights and accountable approvers (HITL), plus agreed controls for transparency and assurance such as documenting prompts/outputs and enforcing approved-tool use.

A practical fix is to update existing governance artifacts (e.g., an AI Assistance Plan and RACI/decision log) so that:

  • the sponsor (or delegated authority) owns acceptance of AI-assisted deliverables
  • the PM owns process compliance and evidence capture
  • the PMO provides standards/assurance and monitors adherence

Focusing only on one symptom (like prompt storage) won’t resolve the broader accountability gap.

Without explicit accountability (e.g., PM/sponsor/PMO) for AI tool use, HITL sign-off, and recordkeeping, these controls predictably lapse.


Question 28

Topic: Lifecycle Governance (Initiate to Operate)

During the Foundation stage of a transformation programme, the PMO starts using a generative AI tool to draft project deliverables (communications, RAID items, and a procurement approach). Within a month:

  • Team members have used an unapproved, public AI tool and pasted extracts from supplier proposals and a draft contract.
  • Prompts and AI outputs are not recorded, so decisions cannot be evidenced.
  • No Human-in-the-Loop (HITL) sign-off is documented before content is shared.
  • Legal and key stakeholders now distrust the outputs, and the expected time-saving benefits have stalled after the tool is blocked.

Which is the most likely underlying cause of this governance breakdown?

  • A. No upfront assessment of data rights/classification for intended AI use
  • B. Prompts and outputs were not documented in a decision log
  • C. Stakeholders were not kept informed about AI usage on the programme
  • D. HITL review roles and approval checkpoints were not assigned

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: The earliest failure in the causal chain is that no one established whether the programme’s data was available, fit for purpose, and legally/contractually permitted for the chosen AI usage. That gap leads directly to people pasting confidential material into unapproved tools, and then to lost stakeholder trust when it is discovered. In AIPGF, this should be addressed in Foundation via defined constraints and approvals for AI-assisted work.

In the AIPGF Foundation stage, AI-assisted work should only start after confirming the intended data sources are (1) available and of sufficient quality for the use case, and (2) permitted for that use under confidentiality, privacy, IP, and contract terms. If data rights and classification are not assessed and translated into clear rules (e.g., what can/cannot be entered into AI tools, and which tools are approved), teams often improvise with convenient public tools and sensitive inputs. That typically causes the downstream symptoms seen here: the tool gets blocked, benefits evaporate, stakeholders lose trust, and auditability/HITL approvals become hard to demonstrate. The core fix is to establish data constraints and permissions up front and bake them into the AI Assistance Plan and approval workflow.

Without confirming what data can be used and under what rights, teams default to unsafe tools and inputs, triggering the other failures.


Question 29

Topic: Assurance, Metrics, and Continuous Improvement

A programme is using a generative AI assistant to draft stakeholder communications and meeting minutes. A maturity assessment in the Evaluation stage produced the draft improvement roadmap below.

Improvement roadmap (draft)
Finding: Low Transparency (no record of prompts, versions, or HITL reviews)
Action: Create AI Output Register + HITL review checklist
Trigger: Before expanding AI use to external-facing comms
Owner: TBD
Due date: TBD
Assurance checkpoint: Activation Gate in 4 weeks
Status: Proposed

Which next action best interprets what is missing, based on the exhibit?

  • A. Ask the AI tool vendor to set the owner and timeline for the action
  • B. Assign an accountable owner and a due date before the Activation Gate
  • C. Treat the action as optional guidance because it is a process improvement
  • D. Defer assigning ownership until after the AI expansion is completed

Best answer: B

What this tests: Assurance, Metrics, and Continuous Improvement

Explanation: The exhibit shows a clear governance improvement action but leaves Owner and Due date as TBD. Under AIPGF continuous improvement, actions from assessments need explicit accountability and a timeline so progress can be monitored and verified. With an Activation Gate in 4 weeks, the action should be time-bound to that checkpoint.

In AIPGF assurance and continuous improvement, assessment findings should translate into an improvement roadmap where each action is actionable, measurable, and governable. That requires (at minimum) a named accountable owner (role or person with decision rights to implement the control) and a target date that aligns with relevant gates or assurance checkpoints.

Here, the programme plans to expand AI use to external-facing communications, while the finding highlights a Transparency control gap (no traceability of prompts/versions/HITL reviews). Because an Activation Gate is scheduled in 4 weeks, the improvement action should be assigned and scheduled to be completed (or demonstrably underway with evidence) before that gate so it can be assured before higher-exposure use.

Deferring or outsourcing accountability weakens governance and makes assurance untestable.

Improvement actions must be owned and time-bound so they can be assured and tracked before the next governance checkpoint.
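To make the gap concrete, the kind of pre-gate check a PMO might run over a roadmap entry can be sketched in a few lines. This is illustrative only: the `RoadmapAction` schema, field names, and `gate_gaps` helper are assumptions for the example, not AIPGF artifacts.

```python
from dataclasses import dataclass

@dataclass
class RoadmapAction:
    """One improvement action from a maturity assessment (illustrative schema)."""
    finding: str
    action: str
    owner: str      # accountable role or named person; "TBD" means unassigned
    due_date: str   # target date aligned to a gate; "TBD" means unscheduled
    checkpoint: str

def gate_gaps(item: RoadmapAction) -> list:
    """Return the governance gaps that would block assurance at the checkpoint."""
    gaps = []
    if item.owner.strip().upper() == "TBD":
        gaps.append("no accountable owner")
    if item.due_date.strip().upper() == "TBD":
        gaps.append("no due date aligned to the gate")
    return gaps

draft = RoadmapAction(
    finding="Low Transparency (no record of prompts, versions, or HITL reviews)",
    action="Create AI Output Register + HITL review checklist",
    owner="TBD",
    due_date="TBD",
    checkpoint="Activation Gate in 4 weeks",
)
print(gate_gaps(draft))  # -> ['no accountable owner', 'no due date aligned to the gate']
```

Running this against the exhibit flags both missing fields, which is exactly what answer B asks the programme to fix before the Activation Gate.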


Question 30

Topic: Principles for Responsible and Trustworthy AI

A project team proposes using a generative AI tool to draft a stakeholder-facing “benefits realisation plan” and meeting summaries.

Exhibit: Proposed AI usage (excerpt)

- Users: PMO analysts
- Input: project status notes and action items
- Output: draft benefits plan + meeting minutes
- Controls proposed: “PM reviews before sending”
- Not addressed: disclosure to recipients, rationale/traceability of AI text,
  feedback loop to improve prompts/controls if errors recur

Which evaluation approach best applies Human-Centricity, Transparency, and Adaptability together to decide whether to approve this AI usage?

  • A. Focus on periodic prompt tuning to reduce recurring errors
  • B. Rely on the PM’s expert judgement to accept or reject drafts
  • C. Require clear disclosure that the outputs were AI-generated
  • D. Confirm HITL quality checks, disclose AI assistance, and set monitoring triggers to refine prompts/controls

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Approving AI-assisted deliverables needs a balanced check against all three AIPGF principles. Human-Centricity is addressed by defined HITL review focused on stakeholder impact, Transparency by disclosing AI assistance and maintaining traceability, and Adaptability by monitoring outcomes and improving controls when issues emerge. The best evaluation approach explicitly covers all three.

Using the three AIPGF principles together means you do not approve AI usage based on a single safeguard (like “PM reviews”) or a single concern (like disclosure). In this scenario, the AI produces stakeholder-facing deliverables, so governance should evaluate the approach by ensuring:

  • Human-Centricity: named humans remain accountable, with HITL checks proportionate to potential stakeholder impact
  • Transparency: recipients and internal assurance can understand that AI assisted and how key statements were derived/verified
  • Adaptability: the team can learn from defects (e.g., hallucinations, misstatements) via monitoring, thresholds, and iterative control improvements

An approval decision is strongest when these are combined into a coherent assurance approach rather than treated as separate, optional add-ons.

It combines user impact oversight (HITL), openness about AI use and traceability, and a plan to adapt based on performance and feedback.


Question 31

Topic: Roles, Responsibilities, and Accountabilities

A small agency is using a generative AI tool to draft the requirements baseline for a public-facing benefits system. Because the team is small, Alex is both the Project Manager and the designated AI Governance Lead.

The sponsor states that, to “keep things fast,” Alex should also approve the AI-assisted requirements baseline for release into configuration control. Stakeholders have low trust in AI outputs and the project must be auditable.

What is the MOST appropriate AIPGF role action to allow one person to hold multiple roles while avoiding conflicts of interest?

  • A. Assign an independent accountable approver and log the decision
  • B. Rely on the AI tool’s built-in audit log as assurance
  • C. Allow Alex to approve if AI prompts are stored
  • D. Remove Alex from all AI governance responsibilities

Best answer: A

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: AIPGF allows one person to fulfil multiple roles, but conflicts of interest must be actively managed through clear decision rights, transparency, and independent challenge. Here the output is a controlled baseline deliverable with low stakeholder trust and auditability needs, so approval must sit with an independent accountable role and be recorded for traceability.

When one person holds multiple governance and delivery roles, AIPGF focuses on making accountability explicit and preventing “self-approval” on high-impact, low-trust, or highly auditable AI-assisted outputs. In this scenario, Alex would be overseeing AI use and also benefiting from rapid acceptance of an AI-assisted baseline, so approving the baseline creates a conflict of interest.

The proportionate control is to:

  • Define decision rights (who is Accountable to approve the baseline)
  • Keep Alex in delivery/governance roles but introduce independent approval (e.g., sponsor, senior user, QA/assurance)
  • Record the role overlap and approval route in an auditable artifact (e.g., AI Assistance Plan and decision log)

Capturing prompts or relying on tool logs supports transparency, but it does not replace independent accountability for the approval decision.

Separating approval from creation/governance and recording it preserves auditability while still allowing Alex to hold multiple roles.


Question 32

Topic: Principles for Responsible and Trustworthy AI

A programme office wants to use an AI assistant to triage and route change requests. The vendor demo includes an “auto-approve” feature for low-effort changes, but some changes can affect customer outcomes and regulatory commitments.

Before go-live, the assurance team asks for evidence that human-centricity overrides automation for high-impact approvals (i.e., a clear Human-in-the-Loop control with accountable decision rights).

Which artifact/evidence best validates this control is defined and ready to operate?

  • A. AI Assistance Plan specifying HITL approval gates and accountable approver
  • B. Risk register entry noting the risk of inappropriate automated approvals
  • C. AIPG-CMM assessment score showing improved governance maturity
  • D. Benefits Tracker showing faster change turnaround after automation

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: To show human-centricity overriding automation, the evidence must document a concrete Human-in-the-Loop control for high-impact approvals, including who is accountable and what cannot be automated. The AI Assistance Plan is the most direct artifact for defining AI use boundaries, required human sign-offs, and how the control will be evidenced and audited at go-live.

Human-centricity in AIPGF means automation must not displace human judgment where decisions have material impact on people, obligations, or significant risk exposure. In this scenario, “auto-approve” is only acceptable when governance explicitly constrains it and ensures a Human-in-the-Loop approval for high-impact changes.

The strongest validation artifact is one that operationalises the control, for example by documenting:

  • which change types are classified as high-impact
  • that AI may recommend/route but cannot approve high-impact items
  • the accountable approver (decision rights) and required evidence/audit trail

An AI Assistance Plan is designed to set these boundaries and checkpoints for AI-assisted work, demonstrating readiness to operate the control rather than merely describing outcomes or risks.

An AI Assistance Plan can explicitly prohibit automated approval for high-impact changes and define the human approval checkpoints, decision rights, and audit evidence.
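A minimal sketch of that boundary, assuming a hypothetical `route_change` helper and made-up change-type labels (neither comes from AIPGF or any vendor API), might look like:

```python
# Illustrative HITL gate: the AI may recommend a route, but high-impact
# change types can never follow the auto-approve path.
HIGH_IMPACT_TYPES = {"regulatory", "customer-outcome"}  # assumed classification

def route_change(change_type: str, ai_recommendation: str) -> str:
    """Apply the human-centricity override defined in the AI Assistance Plan."""
    if change_type in HIGH_IMPACT_TYPES:
        return "queue-for-human-approval"  # accountable approver decides
    return ai_recommendation               # e.g. "auto-approve" for low-effort items

print(route_change("regulatory", "auto-approve"))  # -> queue-for-human-approval
print(route_change("typo-fix", "auto-approve"))    # -> auto-approve
```

The point of the sketch is that the override is deterministic and auditable: for high-impact items the AI recommendation is never the deciding input, which is the evidence the assurance team is asking for.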


Question 33

Topic: Values, Behaviours, and Culture

A project manager uses an AI assistant to draft a benefits forecast and a delivery plan. The tool recommends dropping several validation activities to meet an aggressive date, but the manager considers organisational risk appetite and past audit findings, consults the QA lead, and revises the plan to keep the controls while re-sequencing lower-value tasks instead.

Which AIPGF Core Value is being demonstrated most directly?

  • A. Accountability
  • B. Adaptability
  • C. Transparency
  • D. Sensibility

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: This behaviour reflects Sensibility: using AI-generated suggestions as input, then applying human judgment, context, and experience to reach a practical and responsible project decision. The manager weighs risk appetite and evidence from similar work, and adjusts the AI output to fit real constraints and assurance needs.

Sensibility in the AIPGF Core Values is about treating AI outputs as decision support, not decision authority. In AI-assisted project work, AI can propose plans, forecasts, or content that looks plausible but may be misaligned with the project’s context (risk tolerance, assurance expectations, stakeholder impact, and organisational history). Demonstrating Sensibility means a human evaluates the AI suggestion, brings domain expertise and situational awareness, and then modifies or rejects the output to deliver an outcome that is realistic and responsible.

A good practical pattern is:

  • Use the AI output as a starting point
  • Test it against project context (risk appetite, controls, dependencies)
  • Seek appropriate human review
  • Finalise the deliverable using human judgment

This differs from simply making AI usage visible or tailoring governance structures.

It shows balanced use of AI outputs with human judgment and project context rather than accepting recommendations blindly.


Question 34

Topic: Principles for Responsible and Trustworthy AI

During Activation, a programme allows an AI writing assistant to draft sections of the business case and RAID log updates. The AI Assistance Plan requires storing the final documents in the repository but does not require recording prompts, model/version used, sources provided to the tool, or the named human approver for each AI-assisted change.

Two weeks later, a major cost assumption in the approved business case is found to be wrong, and the steering group requests an immediate review of how the figure was generated and approved.

What is the most likely near-term impact of this transparency omission?

  • A. Benefits realisation will be reduced at Evaluation
  • B. Stakeholders will reject all future automation initiatives
  • C. The programme must pause delivery to retrain the AI model
  • D. The review cannot reliably reconstruct decisions and approvals

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Transparency enables audits and post-incident investigations by making AI-assisted decisions reproducible and attributable. Here, missing records of prompts, sources, model/version, and human approvals means the steering group’s review lacks evidence to trace how the incorrect assumption entered an approved baseline. The immediate consequence is reduced auditability and weakened assurance.

Under the Transparency principle, AI-assisted outputs should be traceable: what was asked (prompts), what inputs/sources were used, which model/version produced the draft, what changed, and which human approved it (HITL). In the scenario, the programme only retained final documents, so when an incident occurs there is no reliable chain of evidence to recreate the decision path or validate that appropriate review happened. This directly undermines audit and review activities and slows or blocks post-incident investigation, increasing near-term risk exposure and eroding confidence in the governance process. The key takeaway is that storing outputs alone is not sufficient; transparency needs decision provenance and approval records.

Without an audit trail (prompts, sources, versions, approvals), the team cannot evidence how the AI-assisted content was produced or validated.


Question 35

Topic: Lifecycle Governance (Initiate to Operate)

A programme is in Stage 2 (Activation) using AI to draft status reports and a draft stakeholder comms pack. In Stage 1 (Foundation), the governance baseline was agreed and recorded in the AI Assistance Plan: approved internal LLM only, prompts stored in a shared repository, and HITL sign-off required before any external release.

Midway through Activation, the team expanded scope to include drafting customer-facing FAQs and switched to a public cloud LLM using a new export of service-desk tickets as source material. Two sprints later, assurance finds: no HITL sign-off evidence, prompts are undocumented, a business owner reports “we don’t trust these outputs,” an unapproved tool is in use, and expected time savings have not materialised.

Which is the most likely underlying cause of this governance breakdown?

  • A. The team failed to document prompts during day-to-day usage
  • B. Key stakeholders are generally sceptical of AI-generated deliverables
  • C. Non-deterministic AI makes benefit targets unreliable regardless of governance
  • D. Stage 1 decisions were not revisited and re-approved after tool/data/scope changes

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In AIPGF, material changes during Activation (such as switching tools, introducing new data sources, or expanding to external-facing deliverables) should trigger revisiting the Stage 1 governance baseline. Without re-approving updated decision rights, controls, and HITL checkpoints, predictable symptoms appear: unapproved tool use, missing evidence, and reduced trust and benefits.

The core concept is cross-stage governance: Activation work must stay within the Stage 1 (Foundation) decisions captured in artifacts like the AI Assistance Plan, decision log, and risk register. When the team changed all three governance-critical elements (scope to customer-facing outputs, tool to a public LLM, and data to service-desk tickets), they crossed the boundaries of the original approvals and controls.

That kind of change should trigger a targeted “return to Foundation” to:

  • Update the AI Assistance Plan (tooling, data handling, prompt logging, HITL)
  • Reconfirm decision rights/approvers and re-authorise the tool and data use
  • Refresh risks and assurance evidence needed for external release

If this revisit doesn’t happen, the programme lacks enforceable controls and auditability, which then shows up as missing HITL sign-off, undocumented prompts, stakeholder distrust, and benefits shortfall.

A material change in scope, tool, and data should have triggered a return to the Foundation baseline (AI Assistance Plan/decision rights) to update controls and approvals.


Question 36

Topic: AI in Projects and Organizations (Context)

A project team wants to use a public generative AI tool to summarise customer-support tickets into user stories during the Activation stage. The tickets contain names, contact details, and complaint narratives (personal data). The tool’s standard terms state prompts may be processed outside the organisation’s region.

The organisation’s governance requires formal approval before any new third-party processing of personal data.

Which stakeholder action is MOST appropriate to address the primary concern?

  • A. Ask security to penetration-test the AI tool
  • B. Ask end users to validate the generated user stories
  • C. Ask the sponsor to accept the privacy risk
  • D. Engage legal/compliance to approve external PII processing

Best answer: D

What this tests: AI in Projects and Organizations (Context)

Explanation: Because personal data will be processed by an external AI service, the dominant decision context is privacy and lawful/acceptable data handling. AIPGF expects the PM to involve the stakeholder with decision rights for third-party processing of personal data, ensuring appropriate approvals and documented controls before activation.

In AI-assisted projects, the key stakeholder to engage depends on the dominant risk factor. Here, the decisive factor is third-party processing of personal data (including potential cross-region processing), which triggers formal privacy/compliance approval. Under AIPGF, the PM should route this through legal/compliance so decision rights are exercised and evidence is captured before the team uses the tool.

Practical role actions include:

  • Confirm the data category and intended use (summarisation into deliverables)
  • Obtain required approval and conditions (permitted data, safeguards, retention)
  • Record the decision and update the AI Assistance Plan to reflect constraints

Technical security testing or user validation may still be useful, but they do not resolve the approval requirement for personal-data processing.

Legal/compliance is accountable for approving new third-party personal-data processing and required assurances before use.


Question 37

Topic: Principles for Responsible and Trustworthy AI

A project team is introducing an AI assistant to draft status reports and recommend schedule recovery actions. The project manager requires that:

  • the team is trained on the tool’s limitations and appropriate use
  • AI suggestions are treated as advisory, with named humans accountable for final decisions
  • users can challenge or override AI outputs (HITL) to protect stakeholder needs and avoid over-reliance

Which AIPGF principle is most directly being applied?

  • A. Transparency
  • B. Adaptability
  • C. Human-Centricity
  • D. Core value: Efficiency

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The described controls ensure humans remain in control of outcomes: the AI is advisory, people are trained to use it appropriately, and there is explicit human review and override. This is the essence of the Human-Centricity principle in AIPGF: designing and governing AI assistance to enhance human decision-making while keeping accountable roles clearly responsible.

Human-Centricity in AIPGF means AI assistance is governed to preserve human agency and responsibility: people understand how to use the AI, remain accountable for decisions, and can question, override, or stop AI outputs when they do not fit stakeholder needs. In the scenario, training on limitations reduces inappropriate reliance, and a defined HITL approach ensures AI-generated drafts and recommendations are reviewed and approved by accountable humans before use. This keeps the project’s decisions, communications, and impacts aligned with human judgement rather than automated optimisation.

A close-but-different concept is Transparency, which focuses more on explainability and disclosure than on decision ownership and human control.

It prioritises human agency, accountability, and HITL oversight so AI supports, rather than replaces, responsible project decision-making.


Question 38

Topic: Framework Structure and Controls

A project team wants to introduce a generative AI assistant to draft and summarise contract variations. The tool has not yet been used on project data. The sponsor requires strict auditability because internal audit will review how AI-assisted content was produced and approved.

Which action best applies the AIPGF life cycle stages and their intent?

  • A. Complete the Foundation stage by defining AI use, roles/decision rights, risks/controls, and required evidence (e.g., AI Assistance Plan) before any AI-assisted work starts
  • B. Move to the Activation stage by enabling the tool and relying on human-in-the-loop reviews to create the audit trail as deliverables are produced
  • C. Treat this as an Evaluation activity by running an end-of-project lessons-learned workshop focused on AI use and then updating controls
  • D. Wait for the Evaluation stage to assess whether the AI outputs were trustworthy, then decide whether to formalise the governance approach

Best answer: A

What this tests: Framework Structure and Controls

Explanation: Because the AI assistant has not yet been used, AIPGF expects governance to be set up first. The Foundation stage’s intent is to define and agree how AI will be used, who is accountable for approvals, and what controls and evidence are needed. This upfront baseline enables trustworthy, auditable AI-assisted delivery in later stages.

AIPGF separates governance into three life cycle stages so controls are applied at the right time. In this scenario the decisive factor is that AI use is about to begin and strict auditability is required, so the team should be in Foundation.

  • Foundation: agree the purpose and scope of AI assistance, decision rights/accountability, risks and controls, and the evidence needed (e.g., an AI Assistance Plan and decision logging expectations).
  • Activation: operate the agreed controls while producing AI-assisted outputs (e.g., HITL checks, traceability capture, and approvals).
  • Evaluation: assess outcomes and assurance evidence (benefits, incidents, performance/quality) and decide to continue, change, or retire the AI use.

HITL during Activation supports quality, but it should not substitute for the upfront governance baseline set in Foundation.

Foundation establishes the governance baseline and assurance expectations before activating AI use, which is critical when auditability is required.


Question 39

Topic: Lifecycle Governance (Initiate to Operate)

During Foundation, a programme team wants an AI tool to help draft project deliverables (PID sections, supplier evaluation notes, and stakeholder updates). Drafts will include commercially sensitive information and some personal data from stakeholder registers.

Governance constraints set by the organisation:

  • No confidential/personal data may be sent to uncontrolled third-party services
  • Prompts/outputs must be retained for auditability and assurance
  • All AI-generated content must be reviewed and approved by a human before release (HITL)

Which tool-selection approach should the project NOT use?

  • A. Use an enterprise AI service under contract with logging, no training on prompts, and defined retention
  • B. Use a tool that supports redaction plus a documented AI Assistance Plan and human approval workflow
  • C. Use a self-hosted/on-prem model with access controls and centralised prompt/output logging
  • D. Adopt a public, consumer chatbot and let staff paste content, relying on manual copy-editing later

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Tool selection in AIPGF Foundation must satisfy constraints and enable assurance, not just improve speed. Where deliverables contain sensitive or personal data, governance requires controlled services, audit trails, and clear HITL approvals before release. A consumer chatbot used informally is the clearest anti-pattern because it undermines confidentiality and evidencing of AI-assisted outputs.

In the AIPGF Foundation stage, selecting AI tools is a governance decision: the tool must fit the work while meeting constraints for confidentiality, auditability, and accountable approval. Because AI outputs can be non-deterministic, assurance depends on being able to evidence how outputs were produced (prompt/output capture, retention, access control) and to apply Human-in-the-Loop review before anything is issued externally.

A public consumer chatbot used without organisational controls is typically an uncontrolled third-party service: it creates data-leakage risk and makes it hard to retain reliable records for assurance. By contrast, contracted enterprise services, self-hosted deployments, and workflows that include redaction plus an AI Assistance Plan and explicit human approvals are all compatible with the stated constraints.

This breaches the constraint against sending sensitive data to uncontrolled third parties and provides weak auditability for assurance.


Question 40

Topic: AI in Projects and Organizations (Context)

A project team has 2,000 free-text employee survey comments to inform a change management plan. The comments may include personal identifiers. The sponsor wants a fast, consistent output: (1) key themes with representative (anonymised) quotes and (2) a draft narrative for the change plan. The organisation allows AI tools only if an AI Assistance Plan is in place and a Human-in-the-Loop (HITL) approves any AI-assisted content before release.

Which approach best balances effective use of the right AI discipline with proportionate governance controls?

  • A. Use generative AI to draft summaries, with redaction and HITL sign-off
  • B. Train a supervised ML model to predict themes and generate text
  • C. Use keyword rules only and avoid AI to reduce risk
  • D. Use NLP clustering for themes; write the narrative manually

Best answer: A

What this tests: AI in Projects and Organizations (Context)

Explanation: Because the required outputs include drafting a narrative and summarising large volumes of text, generative AI is the most suitable discipline. Using it on anonymised/redacted inputs and keeping a HITL approval step aligns with Human-Centricity and Transparency while delivering efficiency and consistency. Documenting the approach in an AI Assistance Plan makes the controls auditable and repeatable.

The core distinction is task fit: machine learning is typically used to learn patterns for classification or prediction, NLP focuses on analysing and structuring language, and generative AI creates new text (drafts, summaries, rewrites). Here, the sponsor explicitly needs a draft narrative and consistent summaries, which is a generative AI use case.

To keep this ethical and governable in project work, the team should apply proportionate controls already required by the organisation:

  • Redact/anonymise comments before processing to reduce privacy exposure.
  • Use an AI Assistance Plan to define approved data, prompts, and intended use.
  • Keep a HITL reviewer accountable for approving the final narrative and quotes.
  • Record key decisions/assumptions to support transparency and assurance.
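As a sketch of the first control, a pre-processing redaction step might look like the following. The regex patterns and placeholder labels here are hypothetical; a real programme would rely on an approved PII-detection service rather than hand-rolled rules.

```python
import re

# Hypothetical patterns for common identifiers found in free-text comments.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
}

def redact(comment: str) -> str:
    """Replace likely personal identifiers with placeholders before the text
    is sent to the generative AI tool."""
    for label, pattern in PATTERNS.items():
        comment = pattern.sub(f"[{label}]", comment)
    return comment

print(redact("Contact jane.doe@example.com or 020 7946 0958 about onboarding"))
# -> Contact [EMAIL] or [PHONE] about onboarding
```

Redaction of this kind reduces privacy exposure before processing; the HITL reviewer then remains accountable for verifying that the anonymised quotes in the final narrative cannot re-identify individuals.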

NLP-only clustering can help, but it does not meet the “draft narrative” need as efficiently.

Generative AI best fits drafting the narrative and summaries, and redaction plus HITL approval provides proportionate governance for privacy and accountability.

How to interpret your result

  • 85% or higher: you likely understand the foundation governance structure well enough for final review.
  • 70-84%: identify the governance area behind each miss before doing another full set.
  • Below 70%: return to focused AIPGF module pages and rebuild the framework vocabulary first.

For AIPGF Foundation, a weak score often means the role, lifecycle gate, or control purpose is blurred. Fix the pattern before retaking the same public set.

What PM Mastery adds after this diagnostic

This page gives one complete public diagnostic. PM Mastery adds the larger AIPGF Foundation bank, module drills, mixed timed mocks, progress tracking, and explanations for governance roles, stage gates, assurance evidence, and responsible-AI controls.

Retake protocol

Retake only after reviewing every miss and drilling the weakest modules. A fast retake mostly measures answer memory; a delayed retake after focused repair is a better check of governance reasoning.

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Free review resource

Read the AIPGF Foundation guide on PMExams.com for concept review, then return here for PM Mastery practice.

Revised on Thursday, May 14, 2026