Try 10 focused AIPGF Foundation questions on Framework Structure and Controls, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Foundation |
| Topic area | Framework Structure and Controls |
| Blueprint weight | 13% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Framework Structure and Controls for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Framework Structure and Controls
You are in the Activation stage of a programme using an approved generative AI assistant to draft weekly RAID and steering updates. Today, a draft produced by the AI includes a paragraph that appears to quote confidential supplier pricing from a procurement attachment.
Constraints:
What is the BEST next action?
Best answer: B
What this tests: Framework Structure and Controls
Explanation: Because the output suggests confidential procurement information has surfaced, this should be handled as an AI incident rather than a routine edit. The best next action is to contain the potential impact (stop distribution), preserve what happened for auditability, report through the agreed incident route, and implement rollback while planning appropriate stakeholder notification.
Issue/incident controls for AI use are designed to respond quickly to potential harm from non-deterministic outputs (e.g., unexpected disclosure, hallucination presented as fact). In Activation, the priority is to demonstrate that controls work in practice: prompt containment, clear reporting, and accountable decisions with evidence.
In this scenario, the suspected appearance of confidential supplier pricing requires the incident route: containment (stop distribution), evidence preservation, reporting through the agreed channel, rollback, and planned stakeholder notification.
Simply editing the content or waiting for a gate review bypasses the incident mechanism and undermines auditability.
It applies reporting, containment, evidence preservation, rollback, and stakeholder notification controls required for an AI incident in Activation.
Topic: Framework Structure and Controls
A programme is using a generative AI assistant to draft user stories and test cases. The AI Assistance Plan expects a 25% reduction in business analyst hours with no increase in defect leakage.
Exhibit: Benefits Tracker (excerpt)
- Benefit ID: B-03
- Expected: -25% BA hours; defect leakage: no increase
- Leading indicators tracked weekly
- Trigger for review: <15% hours reduction for 2 consecutive sprints OR defect leakage increases

| Sprint | Target hours reduction | Actual hours reduction | Defect leakage trend (+ = increase) |
|---|---|---|---|
| 3 | 20% | 8% | + |
| 4 | 25% | 10% | + |
What should the programme manager do next to best balance ethical, efficient, and effective AI use with required governance controls?
Best answer: B
What this tests: Framework Structure and Controls
Explanation: The Benefits Tracker is designed to compare expected value against actuals and trigger timely intervention when benefits fall short. Here, both leading indicators breach the agreed trigger, so governance requires a focused Evaluation review to validate the measurement, understand drivers (e.g., rework, quality impacts), and adapt the AI Assistance Plan with proper approvals and traceability.
Using a Benefits Tracker in AIPGF is an assurance control that makes AI assistance value measurable and auditable, enabling early detection of shortfalls and preventing “AI optimism” from driving unchecked rollout. In this scenario, the agreed trigger is met (two consecutive sprints below the minimum hours-reduction threshold and defect leakage worsening), so the right next step is an Evaluation-stage benefits review rather than scaling or ignoring the signals.
A proportionate response is to convene an Evaluation-stage benefits review: validate the measures, diagnose the drivers of the shortfall (e.g., rework or quality impacts), and adapt, pause, or stop the AI use with approvals and traceability recorded. This maintains human-centric decision rights while optimising value and controlling harm.
The tracker indicates a triggered shortfall, so a structured review should validate measures, diagnose causes, and adjust or stop AI use with accountable decision rights recorded.
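The agreed trigger in the exhibit (below-threshold hours reduction for two consecutive sprints, or worsening defect leakage) can be sketched as a simple check. This is a minimal illustration only; the function and parameter names are invented for this example, not part of AIPGF.

```python
# Illustrative sketch of the Benefits Tracker review trigger from the exhibit.
# All names here are invented for illustration.

def review_triggered(actual_reductions, leakage_increased, min_reduction=0.15):
    """Return True if the agreed review trigger is met:
    <15% hours reduction for 2 consecutive sprints OR defect leakage increases."""
    consecutive_shortfall = any(
        a < min_reduction and b < min_reduction
        for a, b in zip(actual_reductions, actual_reductions[1:])
    )
    return consecutive_shortfall or leakage_increased

# Sprints 3 and 4 from the exhibit: 8% and 10% actual reduction, leakage worsening.
print(review_triggered([0.08, 0.10], leakage_increased=True))  # True: review required
```

Either condition alone is sufficient here; in the scenario both are breached, which is why scaling up or ignoring the signals would bypass the control.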
Topic: Framework Structure and Controls
You are in the Activation stage of an AI-assisted programme. The team will use an approved generative AI tool to draft weekly status reports and initial test cases.
Constraints:
What is the BEST next action to prepare the gate evidence pack?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: The gate requires assurance evidence that the controls operate effectively under real use, especially given stakeholder trust concerns. A targeted, sample-based control test can demonstrate that Confidential data is protected and HITL approvals are consistently applied, with auditable records. This directly supports Activation stage readiness to use AI safely.
In AIPGF, activity evidence shows that governance tasks happened (e.g., plans written, training completed), while assurance evidence shows that controls are effective (e.g., tested operation of HITL approvals, data-handling safeguards, and traceability). In the Activation stage, a governance gate typically expects proof the AI-assisted workflow is ready for safe use, not only that it was designed.
The best next step is to perform a short, documented control-effectiveness check, for example:
- sample recent AI-assisted outputs and confirm each carries a recorded HITL approval;
- verify that Confidential data in those samples was handled within the agreed safeguards;
- check that traceability records (prompts, outputs, approvers) are complete and auditable.
This provides confidence and accountability, whereas documentation-only artifacts are necessary but insufficient at the gate.
It produces assurance evidence (tested samples, approvals, and traceability) that HITL and data-handling controls actually work in practice.
Topic: Framework Structure and Controls
A project team plans to use a generative AI tool to draft and summarise a benefits realisation report. The report will be sent to the steering group and will inform a go/no-go funding decision for the next tranche.
The project manager is asked to “define HITL checkpoints, review responsibilities, and approval thresholds” for this AI-assisted output, but no other governance details are provided.
What should the project manager clarify first?
Best answer: B
What this tests: Framework Structure and Controls
Explanation: HITL controls are defined around decision rights: who is accountable for accepting AI-assisted content, and what level of review is required before it can be relied on. Because this report informs a funding decision, the first clarification is the approver/decision owner for decision use. Only then can appropriate checkpoints and thresholds be set proportionally to the risk.
Human oversight controls (HITL checkpoints, review responsibilities, approval thresholds) must be anchored to decision authority and intended use. In this scenario, the AI-assisted report directly influences a tranche funding decision, so governance needs to specify who is accountable for accepting the content as “decision-grade” and what review/approval must occur before it is shared or used.
Practical first clarifications to enable HITL design include:
- who owns the funding decision and is accountable for accepting the AI-assisted report;
- what review and approval must occur before the report counts as decision-grade and can be shared or relied on.
Details like templates, formatting, or frequency can be decided later because they do not establish the required accountability and approval gates.
You cannot set HITL reviews and approval thresholds without knowing the decision owner and what constitutes an approved, decision-grade output.
Topic: Framework Structure and Controls
A public-sector programme uses a generative AI tool to draft procurement evaluation notes that will influence a high-value contract award. The tool is non-deterministic and stakeholders have stated they need evidence that bias, confidentiality, and hallucination risks have been controlled before the notes are used.
Which AIPGF mapping best fits the practice needed in this situation?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: Because the AI-assisted output will influence a major contract decision and stakeholders require demonstrable confidence, assurance must be proportionate to the potential impact. An independent assurance review provides objective scrutiny of the controls, evidence, and residual risk, beyond day-to-day team checks. This aligns governance with stakeholder expectations for trust and accountability.
In AIPGF, the trigger for independent assurance is not “AI was used” in general, but the combination of risk/impact and stakeholder expectations. When AI-assisted outputs could materially affect high-stakes decisions (e.g., procurement outcomes, safety, financial commitments) and there is explicit demand for defensible evidence, assurance should be performed independently of the delivery team to avoid conflicts of interest and to strengthen credibility.
Independent assurance typically focuses on whether:
- the stated risks (bias, confidentiality, hallucination) have been controlled as claimed;
- the supporting evidence is complete, objective, and traceable;
- residual risk is acceptable for the decision at stake.
Team-based HITL checks and transparency activities can still be necessary, but they do not replace independent assurance when scrutiny and consequences are high.
High-impact decisions with heightened stakeholder scrutiny warrant an independent review separate from the delivery team.
Topic: Framework Structure and Controls
A programme team plans to use generative AI to draft status reports and risk narratives. The sponsor asks for a simple description of the AIPGF life cycle stages and their intent.
Which statement is INCORRECT?
Best answer: C
What this tests: Framework Structure and Controls
Explanation: In AIPGF, Foundation defines the intended AI use and the governance to manage it before adoption. Activation is when the AI use is implemented with the agreed controls (including HITL where needed). Evaluation then assesses whether AI-assisted outputs remain effective, trustworthy, and within tolerance over time.
The AIPGF life cycle separates “decide and design” from “do” and “check.” In the scenario, describing the stages correctly helps the sponsor understand when key decisions and assurance happen:
- Foundation: define the intended AI use and the governance to manage it before adoption (“decide and design”);
- Activation: implement the AI use with the agreed controls, including HITL where needed (“do”);
- Evaluation: assess whether AI-assisted outputs remain effective, trustworthy, and within tolerance over time (“check”).
Confusing Foundation and Evaluation undermines governance by pushing upfront decisions into a later stage.
Selecting the approach and creating the AI Assistance Plan are Foundation activities, while Evaluation focuses on performance, impacts, and ongoing acceptability.
Topic: Framework Structure and Controls
During the Foundation stage of a programme, the PMO introduces AI tools to help draft stakeholder emails and a first version of the benefits case. An initial AI Assistance Plan is circulated with objectives and scope, plus a general statement that “humans will review outputs,” but it does not specify approved AI tools or data handling rules (what data may be entered, where prompts/outputs are stored, and who can access them).
A team starts using different public AI assistants and copies meeting notes into prompts to “speed things up.” The sponsor asks how the programme will evidence appropriate control and assurance.
What is the most likely near-term impact of this omission?
Best answer: D
What this tests: Framework Structure and Controls
Explanation: An AI Assistance Plan must define approved tools and clear data rules so AI use is controlled, traceable, and aligned to the organisation’s risk appetite. Without those controls, people will predictably use inconsistent tools and paste uncontrolled information into prompts. The immediate consequence is higher confidentiality/IP exposure and weaker auditability, undermining stakeholder trust and assurance.
The AI Assistance Plan is a core governance artifact used to make AI-assisted work controllable and defensible. In this scenario, the plan lacks two critical control elements: (1) which tools are approved and (2) data handling rules for prompts and outputs (allowed data, storage, access, retention). When these are missing, teams tend to improvise with public tools and ungoverned inputs, creating immediate exposure to confidentiality/privacy/IP risks and making it hard to evidence what was submitted, what was generated, and who approved it.
A HITL statement alone supports Human-Centricity, but it does not provide Transparency or auditability without defined tooling and information controls. The closest distractors describe downstream effects that might occur later, but the most immediate impact is uncontrolled risk and reduced assurance.
Without approved tools and data rules, staff will use ad hoc services and inputs, increasing near-term leakage and reducing traceability.
Topic: Framework Structure and Controls
In the APMG AI Project Governance Framework (AIPGF), which definition best describes the Evaluation stage?
Best answer: A
What this tests: Framework Structure and Controls
Explanation: The Evaluation stage is where the project looks back at how AI was used, examines the collected evidence, and turns that into assurance conclusions. It emphasises clear communication of findings, impacts, and agreed remedial actions so stakeholders can make informed decisions and maintain trust.
In AIPGF, the Evaluation stage is the point in the AI governance life cycle where assurance is consolidated and communicated. The team reviews what AI was actually used for, evaluates the quality and completeness of evidence (for example, approvals, decision logs, and checks performed), and then reports assurance findings in clear project language. The output should highlight what is acceptable, what is not acceptable, and what actions are required (including owners and timing) so governance bodies and stakeholders can decide whether to continue, adjust controls, or stop specific AI uses. This contrasts with earlier stages that set up governance (Foundation) or run the controlled AI-assisted work (Activation).
Evaluation focuses on assessing evidence from AI-assisted work and reporting findings and follow-up actions in project terms.
Topic: Framework Structure and Controls
You are delivering a programme where an AI assistant drafts requirements summaries and weekly status reports. The assurance lead asks for delivery metrics that evidence both AI effectiveness and AI risk control.
Exhibit: AI Assistance Plan (monitoring excerpt)
Weekly measures (current):
- Estimated hours saved from AI drafting
- % of team using AI assistant
- Stakeholder satisfaction pulse score
- Count of AI-related incidents logged
Note: Two defects caught in review last sprint (hallucinated dependency; wrong cost).
Which additional metric would best strengthen assurance evidence during delivery?
Best answer: B
What this tests: Framework Structure and Controls
Explanation: Assurance needs evidence that AI-assisted outputs are being checked effectively and that key risks (e.g., hallucination, confidentiality errors) are being controlled in practice. A first-pass HITL approval rate, paired with defect categories and trends, links governance controls to observable review outcomes. This provides actionable monitoring for both effectiveness and risk during delivery.
For AI-assisted project deliverables, delivery-stage assurance should prioritise metrics that evidence quality checks and how well governance controls are working in practice. The exhibit already tracks adoption and a lagging incident count, but it does not systematically capture what reviewers are finding (or preventing) before outputs are released.
A strong metric set typically includes:
- first-pass HITL approval rate (the share of AI drafts approved without rework);
- categories and trends of defects caught in review (e.g., hallucinated facts, incorrect figures, confidentiality errors);
- the existing adoption, satisfaction, and incident measures as supporting context.
Tracking first-pass HITL approval rate with defect categories turns ad hoc review findings into auditable evidence aligned to Transparency and Human-Centricity, and it supports timely corrective action (prompt guidance, additional checks, or tighter approvals).
It directly measures output quality and risk by showing review outcomes and the types of AI failures being prevented before release.
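The first-pass HITL approval rate and defect-category counts described above can be derived from simple review records. This is a minimal sketch under invented data and field names; it is not an AIPGF-prescribed format.

```python
from collections import Counter

# Illustrative review records for AI-assisted outputs; the structure and
# category labels are invented for this example.
reviews = [
    {"approved_first_pass": True,  "defects": []},
    {"approved_first_pass": False, "defects": ["hallucinated dependency"]},
    {"approved_first_pass": False, "defects": ["wrong cost"]},
    {"approved_first_pass": True,  "defects": []},
]

# Share of AI drafts approved by the human reviewer without rework.
first_pass_rate = sum(r["approved_first_pass"] for r in reviews) / len(reviews)

# Defect categories caught in review, for trend reporting.
defect_counts = Counter(d for r in reviews for d in r["defects"])

print(f"First-pass HITL approval rate: {first_pass_rate:.0%}")  # 50%
print(dict(defect_counts))
```

Reporting the rate alongside the defect categories (rather than either alone) is what turns ad hoc review findings into trendable, auditable evidence.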
Topic: Framework Structure and Controls
During the AIPGF Activation stage, a programme team uses a generative AI assistant to draft weekly status reports and update the RAID log. The team agrees a human will edit the final text (HITL), but they do not create the AIPGF deliverable that records AI use (e.g., prompts/outputs, decision rationale, and who approved AI-assisted changes).
A month later, the sponsor challenges why a key risk was downgraded and asks for evidence showing how the change was made.
What is the most likely near-term impact of this omission?
Best answer: A
What this tests: Framework Structure and Controls
Explanation: The missing AIPGF deliverable breaks the link between AI-assisted work and accountable approval, so the team cannot show a traceable rationale for changes. That directly undermines the Transparency principle and the framework aim of maintaining an auditable record of AI influence on project decisions. The sponsor’s challenge becomes hard to resolve quickly and confidently.
AIPGF elements are designed to work together: Principles (especially Transparency and Human-Centricity) are achieved through Activities (HITL review, approval, assurance) and evidenced by Deliverables (such as an AI Assistance Plan and AI decision log). In this scenario, the team performed a form of HITL editing but omitted the Deliverable that makes AI use and decision-making traceable.
Near term, the consequence is practical and immediate: when a stakeholder challenges an outcome, the team cannot demonstrate what the AI produced, what the human changed, and who approved the final decision. That weakens auditability, increases dispute time, and erodes trust—exactly what the Transparency principle is meant to protect. Longer-term issues like model drift or end-of-programme benefits are not the primary, immediate impact described by the facts.
Without an AI use record/decision log, the team cannot evidence how AI-assisted updates were made or approved, weakening Transparency and the audit trail.
Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.