AIPGF Foundation: Framework Structure and Controls

Try 10 focused AIPGF Foundation questions on Framework Structure and Controls, with answers and explanations, then continue with PM Mastery.

Topic snapshot

  • Exam route: AIPGF Foundation
  • Topic area: Framework Structure and Controls
  • Blueprint weight: 13%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Framework Structure and Controls for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Framework Structure and Controls

You are in the Activation stage of a programme using an approved generative AI assistant to draft weekly RAID and steering updates. Today, a draft produced by the AI includes a paragraph that appears to quote confidential supplier pricing from a procurement attachment.

Constraints:

  • AI outputs must be HITL-approved by the workstream lead before any distribution.
  • No Confidential data should be entered into AI prompts; suspected leakage must be treated as an AI incident.
  • The programme has a governance gate next week: “Evidence of incident controls in use and stakeholder notification when required.”

What is the BEST next action?

  • A. Wait until the governance gate to raise it, to avoid undermining stakeholder trust if it is a false alarm.
  • B. Trigger the AI incident process: contain distribution, preserve evidence, report via the defined route, and prepare stakeholder notifications with a rollback to manual drafting.
  • C. Delete the paragraph, regenerate the draft, and circulate it since HITL approval will occur before sending.
  • D. Ask the AI tool vendor to investigate first, then decide whether the programme needs to report it as an incident.

Best answer: B

What this tests: Framework Structure and Controls

Explanation: Because the output suggests confidential procurement information has surfaced, this should be handled as an AI incident rather than a routine edit. The best next action is to contain the potential impact (stop distribution), preserve what happened for auditability, report through the agreed incident route, and implement rollback while planning appropriate stakeholder notification.

Issue/incident controls for AI use are designed to respond quickly to potential harm from non-deterministic outputs (e.g., unexpected disclosure, hallucination presented as fact). In Activation, the priority is to demonstrate that controls work in practice: prompt containment, clear reporting, and accountable decisions with evidence.

In this scenario, the suspected appearance of confidential supplier pricing requires:

  • Containment (stop distribution and prevent further AI use on the same materials)
  • Evidence preservation (save prompts/outputs and context for review)
  • Formal reporting and logging (so decision rights and assurance can operate)
  • Rollback (use a manual process until cleared)
  • Stakeholder notification when required (to maintain transparency and trust)

Simply editing the content or waiting for a gate review bypasses the incident mechanism and undermines auditability.

Option B applies the reporting, containment, evidence-preservation, rollback, and stakeholder-notification controls required for an AI incident in Activation.
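
For illustration only, the incident steps above can be pictured as a minimal record created at the moment of containment. This is a hypothetical Python sketch; AIPGF does not define this function or these field names.

# Hypothetical sketch: capture an AI incident record at containment time.
# Step names mirror the bullet list above; the schema is illustrative.
from datetime import datetime, timezone

def open_ai_incident(draft_id, prompt_text, output_text, reporter):
    """Contain, preserve evidence, and queue reporting for an AI incident."""
    return {
        "draft_id": draft_id,
        "raised_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "evidence": {"prompt": prompt_text, "output": output_text},  # preserved
        "distribution_blocked": True,    # containment: stop sharing the draft
        "rollback": "manual drafting",   # interim process until cleared
        "notify_stakeholders": "when required",  # transparency obligation
    }

incident = open_ai_incident("WK-12-status", "<prompt>", "<output>", "workstream lead")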


Question 2

Topic: Framework Structure and Controls

A programme is using a generative AI assistant to draft user stories and test cases. The AI Assistance Plan expects a 25% reduction in business analyst hours with no increase in defect leakage.

Exhibit: Benefits Tracker (excerpt)

Benefit ID: B-03  Expected: -25% BA hours; Defect leakage: no increase
Leading indicators tracked weekly
Trigger for review: <15% hours reduction for 2 consecutive sprints OR defect leakage increases

Sprint   Target hours reduction   Actual hours reduction   Defect leakage trend
3        20%                      8%                       increasing
4        25%                      10%                      increasing

What should the programme manager do next to best balance ethical, efficient, and effective AI use with required governance controls?

  • A. Replace benefit metrics with qualitative user satisfaction feedback
  • B. Run an Evaluation review using the tracker and adapt usage with approvals
  • C. Expand AI use to more teams to recover benefits faster
  • D. Pause all AI use until an independent audit is completed

Best answer: B

What this tests: Framework Structure and Controls

Explanation: The Benefits Tracker is designed to compare expected value against actuals and trigger timely intervention when benefits fall short. Here, both leading indicators breach the agreed trigger, so governance requires a focused Evaluation review to validate the measurement, understand drivers (e.g., rework, quality impacts), and adapt the AI Assistance Plan with proper approvals and traceability.

Using a Benefits Tracker in AIPGF is an assurance control that makes AI assistance value measurable and auditable, enabling early detection of shortfalls and preventing “AI optimism” from driving unchecked rollout. In this scenario, the agreed trigger is met (two consecutive sprints below the minimum hours-reduction threshold and defect leakage worsening), so the right next step is an Evaluation-stage benefits review rather than scaling or ignoring the signals.

A proportionate response is to:

  • Confirm the baseline and calculation method (Transparency)
  • Identify causes and impacts (e.g., rework from hallucinations, insufficient HITL)
  • Decide to adapt, constrain, or stop the AI use (Adaptability)
  • Update the Benefits Tracker, AI Assistance Plan, and decision log with accountable approvals

This maintains human-centric decision rights while optimising value and controlling harm.

The tracker indicates a triggered shortfall, so a structured review should validate measures, diagnose causes, and adjust or stop AI use with accountable decision rights recorded.
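
To make the trigger logic concrete, here is a small illustrative check in Python. The 15% threshold and two-sprint window come from the exhibit; the function itself is a hypothetical sketch, not an AIPGF artifact.

# Hypothetical sketch of the Benefits Tracker review trigger.
MIN_HOURS_REDUCTION = 0.15  # review if actual reduction stays below 15%...
CONSECUTIVE_SPRINTS = 2     # ...for this many sprints in a row

def review_triggered(actual_reductions, leakage_increasing):
    """actual_reductions: per-sprint BA-hours reductions (fractions), oldest first."""
    recent = actual_reductions[-CONSECUTIVE_SPRINTS:]
    below = (len(recent) == CONSECUTIVE_SPRINTS
             and all(r < MIN_HOURS_REDUCTION for r in recent))
    return below or leakage_increasing

# Sprints 3 and 4 from the exhibit: 8% and 10% actual, leakage increasing.
print(review_triggered([0.08, 0.10], leakage_increasing=True))  # True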


Question 3

Topic: Framework Structure and Controls

You are in the Activation stage of an AI-assisted programme. The team will use an approved generative AI tool to draft weekly status reports and initial test cases.

Constraints:

  • Data is classified Confidential; prompts must not contain client identifiers.
  • A HITL check is mandatory: workstream leads must review and approve any AI-assisted content before it is shared.
  • The PMO will not pass the upcoming governance gate unless you provide evidence that AI controls are effective, not just that activities occurred.
  • Stakeholders have raised trust concerns after a recent “confident but wrong” AI-generated summary.

What is the BEST next action to prepare the gate evidence pack?

  • A. Upload the AI Assistance Plan and tool-usage training attendance records.
  • B. Request the AI vendor’s latest security certification to demonstrate the tool is safe.
  • C. Proceed with AI use and capture lessons learned for the Evaluation stage.
  • D. Run a control-effectiveness check using samples, and retain the review/audit evidence.

Best answer: D

What this tests: Framework Structure and Controls

Explanation: The gate requires assurance evidence that the controls operate effectively under real use, especially given stakeholder trust concerns. A targeted, sample-based control test can demonstrate that Confidential data is protected and HITL approvals are consistently applied, with auditable records. This directly supports Activation stage readiness to use AI safely.

In AIPGF, activity evidence shows that governance tasks happened (e.g., plans written, training completed), while assurance evidence shows that controls are effective (e.g., tested operation of HITL approvals, data-handling safeguards, and traceability). In the Activation stage, a governance gate typically expects proof the AI-assisted workflow is ready for safe use, not only that it was designed.

The best next step is to perform a short, documented control-effectiveness check, for example:

  • Take a small sample of AI-assisted drafts and verify no client identifiers were used in prompts
  • Confirm HITL approvals occurred before sharing
  • Retain the audit trail (prompt/output references, reviewer sign-off, exceptions and actions)

This provides confidence and accountability, whereas documentation-only artifacts are necessary but insufficient at the gate.

Option D produces assurance evidence (tested samples, approvals, and traceability) showing that HITL and data-handling controls actually work in practice.
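
A sample-based check like the one described can be scripted. The records, the client-identifier pattern, and the field names below are illustrative assumptions rather than AIPGF artifacts; the point is that the sample, the checks performed, and the exceptions become retained audit evidence.

# Hypothetical sketch of a sample-based control-effectiveness check.
import re

CLIENT_ID_PATTERN = re.compile(r"\bCLIENT-\d{4}\b")  # assumed identifier format

sample = [  # a small sample of AI-assisted drafts (illustrative records)
    {"prompt": "Summarise sprint status", "hitl_approved": True},
    {"prompt": "Draft email re CLIENT-0042 pricing", "hitl_approved": True},
    {"prompt": "Generate login-flow test cases", "hitl_approved": False},
]

exceptions = []
for i, record in enumerate(sample, start=1):
    if CLIENT_ID_PATTERN.search(record["prompt"]):
        exceptions.append((i, "client identifier found in prompt"))
    if not record["hitl_approved"]:
        exceptions.append((i, "shared without HITL approval"))

for item, issue in exceptions:  # retained with the sample as audit evidence
    print(f"Sample {item}: {issue}")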


Question 4

Topic: Framework Structure and Controls

A project team plans to use a generative AI tool to draft and summarise a benefits realisation report. The report will be sent to the steering group and will inform a go/no-go funding decision for the next tranche.

The project manager is asked to “define HITL checkpoints, review responsibilities, and approval thresholds” for this AI-assisted output, but no other governance details are provided.

What should the project manager clarify first?

  • A. What document format the steering group prefers
  • B. Who is accountable to approve the report for decision use
  • C. Which prompt templates the team should standardise on
  • D. How frequently the report will be produced

Best answer: B

What this tests: Framework Structure and Controls

Explanation: HITL controls are defined around decision rights: who is accountable for accepting AI-assisted content, and what level of review is required before it can be relied on. Because this report informs a funding decision, the first clarification is the approver/decision owner for decision use. Only then can appropriate checkpoints and thresholds be set proportionally to the risk.

Human oversight controls (HITL checkpoints, review responsibilities, approval thresholds) must be anchored to decision authority and intended use. In this scenario, the AI-assisted report directly influences a tranche funding decision, so governance needs to specify who is accountable for accepting the content as “decision-grade” and what review/approval must occur before it is shared or used.

Practical first clarifications to enable HITL design include:

  • Who is the accountable approver (e.g., SRO/steering group delegate) for decision use
  • Whether AI output is advisory only or may be relied upon in approvals
  • What approval threshold applies (e.g., mandatory independent review before submission)

Details like templates, formatting, or frequency can be decided later because they do not establish the required accountability and approval gates.

You cannot set HITL reviews and approval thresholds without knowing the decision owner and what constitutes an approved, decision-grade output.


Question 5

Topic: Framework Structure and Controls

A public-sector programme uses a generative AI tool to draft procurement evaluation notes that will influence a high-value contract award. The tool is non-deterministic and stakeholders have stated they need evidence that bias, confidentiality, and hallucination risks have been controlled before the notes are used.

Which AIPGF mapping best fits the practice needed in this situation?

  • A. Human-in-the-Loop content checking by the delivery team
  • B. Foundation stage intent: draft an AI Assistance Plan for future work
  • C. Transparency principle: publish how AI was used to all bidders
  • D. Independent assurance review of controls and evidence

Best answer: D

What this tests: Framework Structure and Controls

Explanation: Because the AI-assisted output will influence a major contract decision and stakeholders require demonstrable confidence, assurance must be proportionate to the potential impact. An independent assurance review provides objective scrutiny of the controls, evidence, and residual risk, beyond day-to-day team checks. This aligns governance with stakeholder expectations for trust and accountability.

In AIPGF, the trigger for independent assurance is not “AI was used” in general, but the combination of risk/impact and stakeholder expectations. When AI-assisted outputs could materially affect high-stakes decisions (e.g., procurement outcomes, safety, financial commitments) and there is explicit demand for defensible evidence, assurance should be performed independently of the delivery team to avoid conflicts of interest and to strengthen credibility.

Independent assurance typically focuses on whether:

  • appropriate controls were designed and followed (e.g., confidentiality, bias, hallucination mitigations)
  • evidence is sufficient and auditable (traceability to sources, review records, decision logs)
  • residual risks are understood and accepted by the right accountability holders

Team-based HITL checks and transparency activities can still be necessary, but they do not replace independent assurance when scrutiny and consequences are high.

High-impact decisions with heightened stakeholder scrutiny warrant an independent review separate from the delivery team.


Question 6

Topic: Framework Structure and Controls

A programme team plans to use generative AI to draft status reports and risk narratives. The sponsor asks for a simple description of the AIPGF life cycle stages and their intent.

Which statement is INCORRECT?

  • A. Foundation establishes the need, boundaries, and governance for AI use.
  • B. Evaluation checks whether AI use is delivering outcomes and remains acceptable.
  • C. Evaluation is where the AI approach is selected and the AI Assistance Plan is created.
  • D. Activation puts the agreed AI use into practice with controls and oversight.

Best answer: C

What this tests: Framework Structure and Controls

Explanation: In AIPGF, Foundation defines the intended AI use and the governance to manage it before adoption. Activation is when the AI use is implemented with the agreed controls (including HITL where needed). Evaluation then assesses whether AI-assisted outputs remain effective, trustworthy, and within tolerance over time.

The AIPGF life cycle separates “decide and design” from “do” and “check.” In the scenario, describing the stages correctly helps the sponsor understand when key decisions and assurance happen:

  • Foundation: clarify the AI use case, boundaries, roles/decision rights, and define controls and artifacts such as an AI Assistance Plan.
  • Activation: implement the AI-assisted way of working, embed controls (e.g., approvals/HITL, logging, data handling), and enable the team.
  • Evaluation: review outcomes and impacts, confirm controls remain effective, and decide whether to continue, adapt, scale, or stop the AI use.

Confusing Foundation and Evaluation undermines governance by pushing upfront decisions into a later stage.

Selecting the approach and creating the AI Assistance Plan are Foundation activities, while Evaluation focuses on performance, impacts, and ongoing acceptability.


Question 7

Topic: Framework Structure and Controls

During the Foundation stage of a programme, the PMO introduces AI tools to help draft stakeholder emails and a first version of the benefits case. An initial AI Assistance Plan is circulated with objectives and scope, plus a general statement that “humans will review outputs,” but it does not specify approved AI tools or data handling rules (what data may be entered, where prompts/outputs are stored, and who can access them).

A team starts using different public AI assistants and copies meeting notes into prompts to “speed things up.” The sponsor asks how the programme will evidence appropriate control and assurance.

What is the most likely near-term impact of this omission?

  • A. The programme will be unable to adapt to future policy changes
  • B. Model drift will be detected only after Activation monitoring begins
  • C. Benefits tracking will fail at the next quarterly portfolio review
  • D. Immediate confidentiality and auditability risks from uncontrolled tool use

Best answer: D

What this tests: Framework Structure and Controls

Explanation: An AI Assistance Plan must define approved tools and clear data rules so AI use is controlled, traceable, and aligned to the organisation’s risk appetite. Without those controls, people will predictably use inconsistent tools and paste uncontrolled information into prompts. The immediate consequence is higher confidentiality/IP exposure and weaker auditability, undermining stakeholder trust and assurance.

The AI Assistance Plan is a core governance artifact used to make AI-assisted work controllable and defensible. In this scenario, the plan lacks two critical control elements: (1) which tools are approved and (2) data handling rules for prompts and outputs (allowed data, storage, access, retention). When these are missing, teams tend to improvise with public tools and ungoverned inputs, creating immediate exposure to confidentiality/privacy/IP risks and making it hard to evidence what was submitted, what was generated, and who approved it.

A HITL statement alone supports Human-Centricity, but it does not provide Transparency or auditability without defined tooling and information controls. The closest distractors describe downstream effects that might occur later, but the most immediate impact is uncontrolled risk and reduced assurance.

Without approved tools and data rules, staff will use ad hoc services and inputs, increasing near-term leakage and reducing traceability.
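
A minimal sketch of the two missing control elements, assuming a hypothetical tool allowlist and data-classification rule (neither is specified in the scenario):

# Hypothetical sketch: approved-tool and data-handling checks the plan omitted.
APPROVED_TOOLS = {"enterprise-assistant"}             # assumed approved tool IDs
BLOCKED_CLASSIFICATIONS = {"Confidential", "Secret"}  # data barred from prompts

def prompt_allowed(tool_id, data_classification):
    """Return (allowed, reason) for a proposed AI prompt."""
    if tool_id not in APPROVED_TOOLS:
        return False, f"tool '{tool_id}' is not on the approved list"
    if data_classification in BLOCKED_CLASSIFICATIONS:
        return False, f"{data_classification} data must not enter prompts"
    return True, "ok"

# Pasting Confidential meeting notes into a public assistant fails both checks:
print(prompt_allowed("public-chatbot", "Confidential"))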


Question 8

Topic: Framework Structure and Controls

In the APMG AI Project Governance Framework (AIPGF), which definition best describes the Evaluation stage?

  • A. Review AI use and evidence, then communicate assurance findings and required actions
  • B. Measure an organisation’s AI governance maturity using capability levels
  • C. Execute AI-assisted work under agreed controls and Human-in-the-Loop checks
  • D. Design governance, roles, and the initial AI Assistance Plan before any AI use

Best answer: A

What this tests: Framework Structure and Controls

Explanation: The Evaluation stage is where the project looks back at how AI was used, examines the collected evidence, and turns that into assurance conclusions. It emphasises clear communication of findings, impacts, and agreed remedial actions so stakeholders can make informed decisions and maintain trust.

In AIPGF, the Evaluation stage is the point in the AI governance life cycle where assurance is consolidated and communicated. The team reviews what AI was actually used for, evaluates the quality and completeness of evidence (for example, approvals, decision logs, and checks performed), and then reports assurance findings in clear project language. The output should highlight what is acceptable, what is not acceptable, and what actions are required (including owners and timing) so governance bodies and stakeholders can decide whether to continue, adjust controls, or stop specific AI uses. This contrasts with earlier stages that set up governance (Foundation) or run the controlled AI-assisted work (Activation).

Evaluation focuses on assessing evidence from AI-assisted work and reporting findings and follow-up actions in project terms.


Question 9

Topic: Framework Structure and Controls

You are delivering a programme where an AI assistant drafts requirements summaries and weekly status reports. The assurance lead asks for delivery metrics that evidence both AI effectiveness and AI risk control.

Exhibit: AI Assistance Plan (monitoring excerpt)

Weekly measures (current):
- Estimated hours saved from AI drafting
- % of team using AI assistant
- Stakeholder satisfaction pulse score
- Count of AI-related incidents logged
Note: Two defects caught in review last sprint (hallucinated dependency; wrong cost).

Which additional metric would best strengthen assurance evidence during delivery?

  • A. Average AI response time for draft generation
  • B. First-pass HITL approval rate with defect category trends
  • C. Total tokens consumed per week by the team
  • D. Number of users granted access to the AI assistant

Best answer: B

What this tests: Framework Structure and Controls

Explanation: Assurance needs evidence that AI-assisted outputs are being checked effectively and that key risks (e.g., hallucination, confidentiality errors) are being controlled in practice. A first-pass HITL approval rate, paired with defect categories and trends, links governance controls to observable review outcomes. This provides actionable monitoring for both effectiveness and risk during delivery.

For AI-assisted project deliverables, delivery-stage assurance should prioritise metrics that evidence quality checks and how well governance controls are working in practice. The exhibit already tracks adoption and a lagging incident count, but it does not systematically capture what reviewers are finding (or preventing) before outputs are released.

A strong metric set typically includes:

  • HITL review outcomes (pass/rework rates)
  • Defect taxonomy (e.g., hallucination, bias, confidentiality/IP)
  • Trends over time to detect degradation and target controls

Tracking first-pass HITL approval rate with defect categories turns ad hoc review findings into auditable evidence aligned to Transparency and Human-Centricity, and it supports timely corrective action (prompt guidance, additional checks, or tighter approvals).

Option B directly measures output quality and risk by showing review outcomes and the types of AI failures being prevented before release.
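
As a hedged sketch, the metric can be derived from HITL review records. The records and defect categories below are assumptions for illustration; AIPGF does not prescribe this schema.

# Hypothetical sketch: first-pass HITL approval rate with defect categories.
from collections import Counter

reviews = [  # illustrative HITL review outcomes for one sprint
    {"first_pass": True,  "defects": []},
    {"first_pass": False, "defects": ["hallucination"]},
    {"first_pass": True,  "defects": []},
    {"first_pass": False, "defects": ["confidentiality", "hallucination"]},
]

approval_rate = sum(r["first_pass"] for r in reviews) / len(reviews)
defect_trend = Counter(d for r in reviews for d in r["defects"])

print(f"First-pass HITL approval rate: {approval_rate:.0%}")  # 50%
print(dict(defect_trend))  # {'hallucination': 2, 'confidentiality': 1}

Tracked sprint over sprint, the same counts show whether a defect category is trending up, which is exactly the signal assurance needs in order to tighten controls.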


Question 10

Topic: Framework Structure and Controls

During the AIPGF Activation stage, a programme team uses a generative AI assistant to draft weekly status reports and update the RAID log. The team agrees a human will edit the final text (HITL), but they do not create the AIPGF deliverable that records AI use (e.g., prompts/outputs, decision rationale, and who approved AI-assisted changes).

A month later, the sponsor challenges why a key risk was downgraded and asks for evidence showing how the change was made.

What is the most likely near-term impact of this omission?

  • A. Reduced auditability and stakeholder trust due to an incomplete decision trail
  • B. Failure to realise programme benefits because reporting automation will stop at closure
  • C. Increased need for model retraining because performance will degrade over time
  • D. Immediate confidentiality breach because AI tools always retain submitted data

Best answer: A

What this tests: Framework Structure and Controls

Explanation: The missing AIPGF deliverable breaks the link between AI-assisted work and accountable approval, so the team cannot show a traceable rationale for changes. That directly undermines the Transparency principle and the framework aim of maintaining an auditable record of AI influence on project decisions. The sponsor’s challenge becomes hard to resolve quickly and confidently.

AIPGF elements are designed to work together: Principles (especially Transparency and Human-Centricity) are achieved through Activities (HITL review, approval, assurance) and evidenced by Deliverables (such as an AI Assistance Plan and AI decision log). In this scenario, the team performed a form of HITL editing but omitted the Deliverable that makes AI use and decision-making traceable.

Near term, the consequence is practical and immediate: when a stakeholder challenges an outcome, the team cannot demonstrate what the AI produced, what the human changed, and who approved the final decision. That weakens auditability, increases dispute time, and erodes trust—exactly what the Transparency principle is meant to protect. Longer-term issues like model drift or end-of-programme benefits are not the primary, immediate impact described by the facts.

Without an AI use record/decision log, the team cannot evidence how AI-assisted updates were made or approved, weakening Transparency and the audit trail.
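
To show what a traceable record could look like, here is one hypothetical shape for a single AI decision-log entry. The field names are illustrative; AIPGF does not mandate a particular schema.

# Hypothetical sketch of one AI decision-log entry.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionLogEntry:
    prompt_ref: str      # reference to the stored prompt
    output_ref: str      # reference to the stored AI output
    change_summary: str  # what the AI-assisted change was
    rationale: str       # why the change was accepted
    approved_by: str     # accountable human approver (HITL)
    approved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

entry = AIDecisionLogEntry(
    prompt_ref="P-2025-114",
    output_ref="O-2025-114",
    change_summary="RAID log: risk R-07 downgraded from High to Medium",
    rationale="Mitigation verified against supplier status before approval",
    approved_by="workstream lead",
)

With an entry like this on file, the sponsor's challenge is answered quickly: what the AI produced, what the human changed, and who approved the final decision.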

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route: mixed-topic practice, topic drills, timed mock exams, progress tracking, explanations, and web/mobile app access.

Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026