Practice APMG AIPGF Foundation with free sample questions, timed mock exams, and detailed explanations for governance, delivery, and change decisions.
AIPGF Foundation (AIPGF-F) is APMG International’s foundation-level certification for the AI Project Governance Framework. If you are searching for AIPGF Foundation sample exam questions, practice tests, or an exam simulator, this is the main PM Mastery page: start on web, then continue on iOS or Android with the same account.
Start a practice session for APMG AI Project Governance Framework (AIPGF) Foundation below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.
Use it on iPhone or Android too: get PM Mastery on the App Store or PM Mastery on Google Play and sign in with the same account you use on web. The same subscription works across web and mobile.
Because the real exam is short, pacing matters. The fastest gains usually come from removing hesitation on governance roles, lifecycle controls, and responsible-AI principles.
| Topic | Weight | Estimated questions |
|---|---|---|
| Module 1: Foundations of AI Project Governance (AIPGF) | 12% | 5 |
| Module 2: AI in Projects and Organizations (Context) | 12% | 5 |
| Module 3: Framework Structure and Controls | 13% | 5 |
| Module 4: Roles, Responsibilities, and Accountabilities | 13% | 5 |
| Module 5: Principles for Responsible and Trustworthy AI | 13% | 5 |
| Module 6: Values, Behaviours, and Culture | 12% | 5 |
| Module 7: Lifecycle Governance (Initiate to Operate) | 13% | 5 |
| Module 8: Assurance, Metrics, and Continuous Improvement | 12% | 5 |
If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.
These sample questions cover multiple blueprint areas for AIPGF Foundation. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A project uses a vendor-provided generative AI tool to draft status reports and risk summaries. Recently, several AI-written deliverables were issued without Human-in-the-Loop (HITL) sign-off, prompts are not being recorded, and the tool was enabled by the vendor for a pilot without going through project approvals. When stakeholders challenged a hallucinated risk statement, the project team escalated to Procurement, Procurement escalated to IT, and IT said the vendor “owns the tool,” but no one could name an accountable approver or a vendor escalation contact. Trust has dropped and expected benefits are not being realised.
What is the MOST likely underlying cause of this AI governance breakdown?
Best answer: B
Explanation: The key diagnostic clue is the escalation “ping-pong” and lack of any named accountable approver or vendor contact. That points to a structural gap: third-party responsibilities, decision rights, and escalation paths were never defined and embedded in project governance. When these are undocumented, other controls (like HITL sign-off and prompt logging) are unlikely to be consistently implemented or enforced.
Topic: Module 6: Values, Behaviours, and Culture
You are in the Foundation stage of a programme using an AI assistant to draft stakeholder communications and requirements summaries.
Constraints:
AI Assistance Plan and evidence of stakeholder buy-in.
Several business analysts are resisting the AI assistant, citing “black box answers,” fear of being judged on AI-written work, and concern about accidental data leakage. What is the BEST next action?
Best answer: A
Explanation: In Foundation, the priority is to understand and address resistance before scaling use. A facilitated session to elicit concerns (trust/opacity, fear of evaluation or job impact, and confidentiality worries) supports Human-Centricity and Transparency. Converting those concerns into a controlled HITL pilot and updates to the AI Assistance Plan creates the buy-in evidence needed for the Activation gate.
Topic: Module 6: Values, Behaviours, and Culture
A project team wants to paste a full set of customer support transcripts into an external generative AI tool “to see what insights we can discover.” The transcripts include personal data and commercially sensitive information. The project manager stops the experiment, requires redaction/minimisation, and only allows testing using an approved environment documented in the AI Assistance Plan.
Which AIPGF Core Value is primarily being applied to resolve this curiosity-versus-risk conflict?
Best answer: D
Explanation: The team is balancing curiosity about AI insights against the risk of exposing personal and sensitive information. In AIPGF terms, the primary driver for stopping and redesigning the experiment is the core value of protecting privacy/confidentiality through minimisation, controlled environments, and documented approval. This resolves the value conflict responsibly while still enabling learning.
Topic: Module 6: Values, Behaviours, and Culture
During the Activation stage, a programme team uses a generative AI tool to summarise recorded customer interviews and draft requirements.
In a governance discussion, a business representative raises an ethical concern: the recordings may contain sensitive data and customers did not explicitly consent to AI processing. The sponsor verbally agrees to continue “to keep pace,” but the decision is not documented, and the AI Assistance Plan and decision log are not updated.
What is the most likely near-term impact of this omission?
Best answer: B
Explanation: Failing to document the ethical discussion and decision undermines Transparency and weakens auditability. The most immediate consequence is typically an assurance or governance challenge: the team cannot show who approved what, on what basis, and with what controls (HITL), so work may be paused until this is corrected.
Topic: Module 3: Framework Structure and Controls
In the APMG AI Project Governance Framework (AIPGF), what best describes the purpose of the Foundation stage?
Best answer: D
Explanation: AIPGF supports ethical, efficient, and effective AI assistance by structuring governance across its life cycle stages. The Foundation stage is where the project sets up the governance and assurance baseline—clarifying how AI may be used, who approves AI-assisted outputs, and what controls and evidence are required—before AI use is scaled.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
During Activation, a project team uses a third-party generative AI web tool to summarise stakeholder interview transcripts that include personal data. Organisational policy requires Procurement and the Data Protection Officer to approve any third-party AI tool before use, and to retain evidence (contract/DPA and usage logs) for audit.
An assurance reviewer asks for the vendor approval record, DPA, and prompt/output log extracts. The team has none.
What is the most likely near-term impact of this governance omission?
Best answer: B
Explanation: Using an unapproved third-party AI tool for personal data creates an immediate policy-compliance and evidence problem. When assurance requests the approval record, DPA, and logs, the team cannot demonstrate compliant processing or reconstruct what data was shared and how outputs were produced. The near-term consequence is reduced auditability and increased risk exposure, often leading to an immediate stop-and-remediate action.
Topic: Module 5: Principles for Responsible and Trustworthy AI
A programme team is short of business analysts. The project manager proposes using a generative AI tool to draft user stories, acceptance criteria, and change-impact summaries. The sponsor suggests “let the AI do the analysis” and redeploy the analysts elsewhere. The tool is already on the organisation’s approved list and an initial screening rated this use as Medium risk.
To apply the Human-Centricity principle and ensure AI assistance augments capability rather than replacing necessary expertise, what should the project manager verify/ask FIRST?
Best answer: D
Explanation: Human-Centricity in AI-assisted project work means humans remain accountable and competent for decisions, with AI used to support—not substitute—professional judgement. The immediate governance gap in the scenario is the implied removal of analyst expertise, so the first clarification is who will perform HITL review and approval, and whether they have the right skills to validate the drafts.
Topic: Module 5: Principles for Responsible and Trustworthy AI
A programme team uses a generative AI assistant to draft risk responses and safety recommendations for a public-facing infrastructure project. Before anything is added to the risk register or shared externally, the sponsor requires a qualified engineer and the risk manager to review the draft, correct errors or bias, and explicitly approve the final text (HITL).
Which AIPGF principle is this practice primarily demonstrating?
Best answer: A
Explanation: This practice is about maintaining human oversight and decision-making authority over AI-assisted outputs. By requiring competent reviewers to validate, correct, and approve recommendations, the team reduces the risk of hallucinations, bias, and unsafe guidance being accepted as-is. That is the essence of the Human-Centricity principle in AIPGF.
Topic: Module 5: Principles for Responsible and Trustworthy AI
A project team used a generative AI assistant to draft parts of a customer-facing “as-is/to-be” process pack. Two months later, a major defect is found and the sponsor asks for a post-incident investigation: what the AI produced, what sources it used, and who approved the final content. The team cannot immediately show what AI assistance was used.
Which question/request should you make FIRST to apply the Transparency principle and enable an effective audit?
Best answer: C
Explanation: Transparency in AIPGF is about making AI assistance explainable and auditable through traceability. Before debating fixes or tool changes, you need the records that reconstruct what happened: the agreed scope of AI use and the evidence trail of prompts/inputs, outputs, versions, and human approvals. That enables audits, reviews, and post-incident investigations based on facts rather than assumptions.
Topic: Module 8: Assurance, Metrics, and Continuous Improvement
A programme is using a generative AI assistant to draft weekly project status reports. The AI Assistance Plan (created in Foundation and operationalised in Activation) requires a monthly review of AI usage.
The first periodic review finds:
What is the best next step in the AIPGF flow?
Best answer: D
Explanation: A periodic audit only improves assurance if findings are converted into owned actions and verified closure. Here, gaps in HITL, privacy handling, and auditability require updating the AI Assistance Plan and related controls, then scheduling a follow-up review to confirm the fixes work in practice. This aligns with continuous improvement at scale in the Evaluation loop.
Topic: Module 3: Framework Structure and Controls
During the Evaluation stage of an AI-assisted programme, the assurance reviewer tests evidence for an AI tool used to draft stakeholder communications. They find the team cannot show an approved AI Assistance Plan or HITL review records for externally shared outputs.
The reviewer emails the Steering Committee the following note (excerpt):
Finding: Control weakness in AI governance
Details: Missing artefacts and insufficient traceability
Recommendation: Improve assurance posture and model controls
Appendix: 6 pages of technical observations
No required actions, owners, or decision request are stated. What is the most likely near-term impact?
Best answer: B
Explanation: Assurance findings need to be communicated as clear, decision-ready project language: what is wrong, the risk, the required action, who owns it, and by when. Here, the message is technical and non-actionable, so the Steering Committee cannot readily approve next steps or hold anyone accountable. The immediate consequence is decision friction and continued exposure from unresolved evidence gaps.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A vendor is contracted to produce an AI-assisted benefits map and draft business case for your programme using their own generative AI tooling. Internal assurance has stated that, during Evaluation, they must be able to trace what AI was used, what inputs were provided, who reviewed the outputs, and what was approved.
Which action should you AVOID to maintain auditability and traceability of the vendor’s AI-assisted deliverables?
Best answer: C
Explanation: Auditability with third parties requires a verifiable trail from inputs and AI use through human review to approvals, retained in accessible records. If the vendor keeps only final outputs and discards the AI interaction history, assurance cannot test provenance, decision points, or accountability. This is a governance anti-pattern in AI-assisted delivery.
Topic: Module 7: Lifecycle Governance (Initiate to Operate)
A project has used an AI assistant to draft requirements, status reports, and risk narratives. The solution is now being handed over to operations, and the sponsor has asked for evidence that AI-assisted outputs were appropriately governed and that benefits and issues are captured before the project is closed.
Which action is most aligned with the intent of AIPGF Stage 3 (Evaluation) and how it maps to project closure phases?
Best answer: C
Explanation: AIPGF Stage 3 (Evaluation) is about closing the loop: verifying that AI-assisted work delivered intended benefits and stayed within agreed controls, and generating auditable learning for future use. In project closure terms, it maps to post-implementation review/close-out activities, including evidence capture, lessons learned, and a clear handover of ongoing monitoring and ownership to operations.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
You are procuring an external AI-enabled documentation service to help draft project deliverables. Before onboarding, you review the AI Assistance Plan excerpt below.
Exhibit: AI Assistance Plan (excerpt)
AI use: Draft requirements & stakeholder comms
External party: DocuGen Services (vendor)
Requested access: SharePoint /Projects (read/write)
Data types: Contracts, bids, customer feedback
Data classification: Mixed (Confidential + Personal)
Approver listed: Project Manager
Access duration: Not specified
What is the best next action to align with least privilege and clear approvals?
Best answer: C
Explanation: The exhibit shows a broad read/write request to a mixed-classification repository, with only the project manager listed as approver and no access duration. Least privilege requires scoping data and permissions to the minimum necessary and time-boxing access. Clear approvals require the appropriate data owner and security/privacy authorities to approve the sharing arrangement before access is granted.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
A supplier is contracted to draft a set of user requirements and a cost estimate using generative AI. The programme assurance lead will only accept the deliverables if they are auditable later (who used what tool, what inputs were provided, what outputs were accepted, and when).
Which artefact/evidence best validates that auditability and traceability controls are ready for this third-party contribution?
Best answer: A
Explanation: For third-party AI-assisted deliverables, auditability depends on agreed, verifiable records of how outputs were produced and approved. A signed AI Assistance Plan addendum can mandate prompt/input capture, versioning, decision/approval evidence, and retention/access rules that enable later audit and traceability. This provides readiness evidence before deliverables are accepted.
Topic: Module 3: Framework Structure and Controls
In the APMG AI Project Governance Framework (AIPGF), what does a Human-in-the-Loop (HITL) control mean when governing AI-assisted outputs?
Best answer: D
Explanation: HITL in AIPGF is about purposeful human oversight of AI-assisted work products. It establishes checkpoints and assigns accountable reviewers/approvers who decide whether an AI-assisted output is acceptable for its intended use. This includes clarifying who can approve and when escalation is required.
Topic: Module 2: AI in Projects and Organizations (Context)
A programme team uses a generative AI tool to draft weekly status reports and update the RAID log. In Activation, they skip the AI Assistance Plan items for: recording prompts/outputs, capturing model version, and assigning a Human-in-the-Loop approver.
Two weeks later, the tool produces different wording and risk ratings for the same underlying inputs after an automatic model update. The sponsor asks why the ratings changed and who approved them.
What is the most likely near-term impact of this governance omission?
Best answer: C
Explanation: The changed outputs reflect a common AI uncertainty source: model updates and non-deterministic generation. When governance does not require traceability (prompt/output logs and model/version capture) and HITL approval, the team cannot explain or evidence decisions. The near-term consequence is loss of auditability, which quickly erodes stakeholder trust and slows approvals.
Topic: Module 7: Lifecycle Governance (Initiate to Operate)
During the Foundation stage, a project plans to use a generative AI tool to draft a benefits map and first-cut requirements. The team identifies AI-specific risks (e.g., hallucinations, confidential data leakage) and records response actions and controls (e.g., HITL review, data handling rules) in the project risk register, assigning an owner to each risk.
In AIPGF terms, which role is typically accountable for ensuring these AI-related risks and planned responses are captured and owned in the project risk register?
Best answer: C
Explanation: Planning AI risk responses in Foundation means integrating AI risks into standard project risk management: documenting them, defining controls, and assigning owners. In AIPGF-aligned governance, this sits with the role accountable for the overall risk register and for embedding agreed controls into the project’s plans. Other roles may approve risk appetite, provide challenge, or approve data use, but they do not own the register end-to-end.
Topic: Module 3: Framework Structure and Controls
You are in the Foundation stage of a customer service transformation project. The team proposes using an AI assistant to draft requirements, process maps, and a benefits forecast for a new triage process.
Constraints:
The draft business case includes predicted “25% cycle-time reduction” and “$1.2m annual savings” but provides no sources and no documented assumptions.
What is the BEST next action?
Best answer: A
Explanation: Before the Business Case sign-off gate, the priority is to prevent weak, AI-generated assertions from being treated as evidence. Using the Business Case Review Checklist highlights missing sources, accountability, and tool/data non-compliance. An Assumptions Log then makes AI-derived assumptions explicit, assigns owners, and defines how and when each will be validated under HITL.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
In AIPGF, what is the term for the artefact that documents how AI assistance will be used, including what project data may be shared with external tools/vendors, the required approvals, and least-privilege access rules?
Best answer: B
Explanation: The AI Assistance Plan is the AIPGF artefact used to set clear, auditable guardrails for AI use in a project. It specifies what AI tools are permitted, what information can be shared externally, who must approve that sharing, and how access is limited using least privilege. This directly supports controlled third-party data access and accountability.
Topic: Module 4: Roles, Responsibilities, and Accountabilities
You are in the Activation stage of a programme using an AI writing assistant to help draft a procurement evaluation report (a formal project deliverable). The team wants to paste shortlisted supplier submissions into the tool to summarise strengths/weaknesses.
Constraints:
AI Assistance Plan and independent assurance sign-off before AI-assisted drafting starts.
What is the BEST next action?
Best answer: A
Explanation: Because the Activation gate requires independent assurance sign-off, the next step must be performed by an oversight/assurance role rather than the delivery team. Routing the AI Assistance Plan and HITL approval design to an AI assurance/quality function addresses accountability and stakeholder trust (evidence of checks against hallucinations) while ensuring Restricted data is handled within approved controls.
Topic: Module 5: Principles for Responsible and Trustworthy AI
A programme team has completed the Foundation stage for using a generative AI assistant to draft weekly RAID updates and a first-cut status report. The tool has been selected and the team has agreed on HITL review by the PMO before anything is issued externally.
Exhibit: AI Assistance Plan (excerpt)
Use case: Draft RAID and status report narrative
Inputs: Project schedule extract, risk register export
Outputs: Draft text only (no auto-sending)
HITL: PMO reviews/edits; PM approves release
Gap noted by auditor: No record of prompts/outputs, data versions, or model version used
To apply the AIPGF principle of Transparency before moving into Activation, what is the best next step?
Best answer: B
Explanation: Transparency in AIPGF means stakeholders can understand and auditors can trace how AI-assisted outputs were produced. With a documented gap in records of prompts/outputs and versions, the next step (before Activation) is to put in place traceable, reviewable evidence of AI use. This enables assurance and accountability for what is issued.
Topic: Module 2: AI in Projects and Organizations (Context)
In the APMG AI Project Governance Framework (AIPGF), which core artefact is used to reduce shadow AI use by documenting which AI tools are approved, what project tasks they may support, what data may be shared, and what training/monitoring controls apply?
Best answer: D
Explanation: Shadow AI risk is best controlled by making AI use explicit, approved, and auditable. In AIPGF, the AI Assistance Plan captures the agreed boundaries for AI assistance (approved tools, permitted use cases, data handling constraints) and the enabling controls such as training, monitoring, and human oversight.
Topic: Module 1: Foundations of AI Project Governance (AIPGF)
A project team will use an enterprise-approved generative AI tool to draft stakeholder communications and meeting minutes. The enterprise AI policy already states which tools are allowed and bans uploading confidential data.
The client is sceptical about AI and requires a clear audit trail showing what AI produced, who reviewed it, and who approved release to stakeholders.
Which action is the most appropriate project-level AI governance response under AIPGF?
Best answer: A
Explanation: Because the enterprise policy and tool selection are already set, the remaining gap is project-specific assurance for this use case. An AI Assistance Plan that defines HITL review/approval and keeps an auditable record of prompts/outputs provides the transparency and accountability the client is demanding. This distinguishes AIPGF project-level governance from enterprise AI governance and from general project governance routines.