APMG AIPGF Foundation Practice Test

Practice APMG AIPGF Foundation with free sample questions, timed mock exams, and detailed explanations for governance, delivery, and change decisions.

AIPGF Foundation (AIPGF-F) is APMG International’s foundation-level certification for AI Project Governance Framework knowledge. If you are searching for AIPGF Foundation sample exam questions, practice tests, or an exam simulator, this is the main PM Mastery page: start on the web, then continue on iOS or Android with the same account.

Interactive Practice Center

Start a practice session for APMG AI Project Governance Framework (AIPGF) Foundation below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.

Use PM Mastery on iPhone or Android too: download it from the App Store or Google Play and sign in with the account you use on the web. One subscription works across web and mobile.

What this AIPGF Foundation practice page gives you

  • A fast path into the PM Mastery web simulator for AIPGF Foundation.
  • Short topic drills, mixed sets, and timed mock-style practice.
  • Detailed explanations that show why the best governance answer is right.
  • A clear free-preview path before you subscribe.
  • The same account across web and mobile.

AIPGF Foundation exam snapshot

  • Vendor: APMG International
  • Official exam name: APMG AI Project Governance Framework (AIPGF) Foundation
  • Exam code: AIPGF-F
  • Questions: 40
  • Time limit: 40 minutes
  • Recommended pace: about 1 minute per question

Because the real exam is short, pacing matters. The fastest gains usually come from removing hesitation on governance roles, lifecycle controls, and responsible-AI principles.

Topic coverage for AIPGF Foundation practice

Topic | Weight | Estimated questions
Module 1: Foundations of AI Project Governance (AIPGF) | 12% | 5
Module 2: AI in Projects and Organizations (Context) | 12% | 5
Module 3: Framework Structure and Controls | 13% | 5
Module 4: Roles, Responsibilities, and Accountabilities | 13% | 5
Module 5: Principles for Responsible and Trustworthy AI | 13% | 5
Module 6: Values, Behaviours, and Culture | 12% | 5
Module 7: Lifecycle Governance (Initiate to Operate) | 13% | 5
Module 8: Assurance, Metrics, and Continuous Improvement | 12% | 5

How to use the AIPGF Foundation simulator efficiently

  1. Start with one module and run a short drill right away.
  2. Review every miss until you can explain the control, role, or governance principle behind the right answer.
  3. Move into mixed sets after two or three solid topic sessions.
  4. Finish with full 40-question timed runs to rehearse pace and decision discipline.

Free preview vs premium

  • Free preview: a smaller web set so you can confirm the app style and question quality.
  • Premium: the full AIPGF Foundation practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 AIPGF Foundation sample questions with detailed explanations

These sample questions cover multiple blueprint areas for AIPGF Foundation. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.

Question 1

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A project uses a vendor-provided generative AI tool to draft status reports and risk summaries. Recently, several AI-written deliverables were issued without human-in-the-loop (HITL) sign-off, prompts are not being recorded, and the tool was enabled by the vendor for a pilot without going through project approvals. When stakeholders challenged a hallucinated risk statement, the project team escalated to Procurement, Procurement escalated to IT, and IT said the vendor “owns the tool,” but no one could name an accountable approver or a vendor escalation contact. Trust has dropped and expected benefits are not being realised.

What is the MOST likely underlying cause of this AI governance breakdown?

  • A. The project team did not maintain a prompt and output decision log
  • B. Third-party responsibilities and escalation paths were not documented in the governance structure
  • C. Stakeholders need additional communication to accept AI-assisted reporting
  • D. HITL sign-off was not enforced before releasing AI-assisted deliverables

Best answer: B

Explanation: The key diagnostic clue is the escalation “ping-pong” and lack of any named accountable approver or vendor contact. That points to a structural gap: third-party responsibilities, decision rights, and escalation paths were never defined and embedded in project governance. When these are undocumented, other controls (like HITL sign-off and prompt logging) are unlikely to be consistently implemented or enforced.


Question 2

Topic: Module 6: Values, Behaviours, and Culture

You are in the Foundation stage of a programme using an AI assistant to draft stakeholder communications and requirements summaries.

Constraints:

  • Only the organisation-approved AI tool may be used.
  • Source documents are Client Confidential and must not be pasted into external systems.
  • Outputs must remain HITL (human review and sign-off).
  • Passing the Activation governance gate requires an agreed AI Assistance Plan and evidence of stakeholder buy-in.

Several business analysts are resisting the AI assistant, citing “black box answers,” fear of being judged on AI-written work, and concern about accidental data leakage. What is the BEST next action?

  • A. Run an adoption workshop; document concerns; pilot with redacted data
  • B. Paste confidential documents into the tool to prove time savings
  • C. Proceed to Activation and address resistance after first deliverable
  • D. Mandate immediate use and measure compliance in performance reviews

Best answer: A

Explanation: In Foundation, the priority is to understand and address resistance before scaling use. A facilitated session to elicit concerns (trust/opacity, fear of evaluation or job impact, and confidentiality worries) supports Human-Centricity and Transparency. Converting those concerns into a controlled HITL pilot and updates to the AI Assistance Plan creates the buy-in evidence needed for the Activation gate.


Question 3

Topic: Module 6: Values, Behaviours, and Culture

A project team wants to paste a full set of customer support transcripts into an external generative AI tool “to see what insights we can discover.” The transcripts include personal data and commercially sensitive information. The project manager stops the experiment, requires redaction/minimisation, and only allows testing using an approved environment documented in the AI Assistance Plan.

Which AIPGF Core Value is primarily being applied to resolve this curiosity-versus-risk conflict?

  • A. HITL responsibility (humans review and approve AI outputs)
  • B. Transparency principle (make AI use visible and explainable)
  • C. Adaptability principle (change governance as learning increases)
  • D. Protect privacy and confidentiality of people and data

Best answer: D

Explanation: The team is balancing curiosity about AI insights against the risk of exposing personal and sensitive information. In AIPGF terms, the primary driver for stopping and redesigning the experiment is the core value of protecting privacy/confidentiality through minimisation, controlled environments, and documented approval. This resolves the value conflict responsibly while still enabling learning.


Question 4

Topic: Module 6: Values, Behaviours, and Culture

During the Activation stage, a programme team uses a generative AI tool to summarise recorded customer interviews and draft requirements.

In a governance discussion, a business representative raises an ethical concern: the recordings may contain sensitive data and customers did not explicitly consent to AI processing. The sponsor verbally agrees to continue “to keep pace,” but the decision is not documented, and the AI Assistance Plan and decision log are not updated.

What is the most likely near-term impact of this omission?

  • A. The programme becomes locked into one AI vendor, reducing adaptability
  • B. Assurance cannot evidence due diligence and requests a pause to document approvals
  • C. A regulator issues a financial penalty once the system is in operation
  • D. Benefit realisation fails because the AI summaries are not accurate enough

Best answer: B

Explanation: Failing to document the ethical discussion and decision undermines Transparency and weakens auditability. The most immediate consequence is typically an assurance or governance challenge: the team cannot show who approved what, on what basis, and with what controls (HITL), so work may be paused until this is corrected.


Question 5

Topic: Module 3: Framework Structure and Controls

In the APMG AI Project Governance Framework (AIPGF), what best describes the purpose of the Foundation stage?

  • A. Continuously monitor outcomes and improve controls based on evidence
  • B. Execute AI-assisted work and manage HITL approvals for outputs
  • C. Develop and train the AI model that will be used by the project
  • D. Establish AI governance, assurance, and readiness before AI use

Best answer: D

Explanation: AIPGF supports ethical, efficient, and effective AI assistance by structuring governance across its life cycle stages. The Foundation stage is where the project sets up the governance and assurance baseline—clarifying how AI may be used, who approves AI-assisted outputs, and what controls and evidence are required—before AI use is scaled.


Question 6

Topic: Module 4: Roles, Responsibilities, and Accountabilities

During Activation, a project team uses a third-party generative AI web tool to summarise stakeholder interview transcripts that include personal data. Organisational policy requires Procurement and the Data Protection Officer to approve any third-party AI tool before use, and to retain evidence (contract/DPA and usage logs) for audit.

An assurance reviewer asks for the vendor approval record, DPA, and prompt/output log extracts. The team has none.

What is the most likely near-term impact of this governance omission?

  • A. A future operational incident will reveal hallucinated requirements decisions
  • B. Immediate auditability gap, likely forcing tool stoppage and rework
  • C. Benefits realisation will drop at closure due to subscription costs
  • D. Model drift will gradually reduce forecast accuracy over several months

Best answer: B

Explanation: Using an unapproved third-party AI tool for personal data creates an immediate policy-compliance and evidence problem. When assurance requests the approval record, DPA, and logs, the team cannot demonstrate compliant processing or reconstruct what data was shared and how outputs were produced. The near-term consequence is reduced auditability and increased risk exposure, often leading to an immediate stop-and-remediate action.


Question 7

Topic: Module 5: Principles for Responsible and Trustworthy AI

A programme team is short of business analysts. The project manager proposes using a generative AI tool to draft user stories, acceptance criteria, and change-impact summaries. The sponsor suggests “let the AI do the analysis” and redeploy the analysts elsewhere. The tool is already on the organisation’s approved list and an initial screening rated this use as Medium risk.

To apply the Human-Centricity principle and ensure AI assistance augments capability rather than replacing necessary expertise, what should the project manager verify/ask FIRST?

  • A. Whether the vendor contract grants the organisation rights to reuse prompts and outputs
  • B. What quantitative success criteria will be used to measure time saved by the AI tool
  • C. What residual risk level remains after mitigations are applied to the AI use
  • D. Who is accountable to review and approve the AI drafts, and what SME competence is required for HITL checks

Best answer: D

Explanation: Human-Centricity in AI-assisted project work means humans remain accountable and competent for decisions, with AI used to support—not substitute—professional judgement. The immediate governance gap in the scenario is the implied removal of analyst expertise, so the first clarification is who will perform HITL review and approval, and whether they have the right skills to validate the drafts.


Question 8

Topic: Module 5: Principles for Responsible and Trustworthy AI

A programme team uses a generative AI assistant to draft risk responses and safety recommendations for a public-facing infrastructure project. Before anything is added to the risk register or shared externally, the sponsor requires a qualified engineer and the risk manager to review the draft, correct errors or bias, and explicitly approve the final text (HITL).

Which AIPGF principle is this practice primarily demonstrating?

  • A. Human-Centricity
  • B. Adaptability
  • C. Evaluation stage intent
  • D. Transparency

Best answer: A

Explanation: This practice is about maintaining human oversight and decision-making authority over AI-assisted outputs. By requiring competent reviewers to validate, correct, and approve recommendations, the team reduces the risk of hallucinations, bias, and unsafe guidance being accepted as-is. That is the essence of the Human-Centricity principle in AIPGF.


Question 9

Topic: Module 5: Principles for Responsible and Trustworthy AI

A project team used a generative AI assistant to draft parts of a customer-facing “as-is/to-be” process pack. Two months later, a major defect is found and the sponsor asks for a post-incident investigation: what the AI produced, what sources it used, and who approved the final content. The team cannot immediately show what AI assistance was used.

Which question/request should you make FIRST to apply the Transparency principle and enable an effective audit?

  • A. Can we disable the AI tool for the rest of delivery?
  • B. What accuracy percentage did the AI vendor claim?
  • C. Provide the AI Assistance Plan and AI-use audit trail
  • D. Should we retrain or fine-tune the model immediately?

Best answer: C

Explanation: Transparency in AIPGF is about making AI assistance explainable and auditable through traceability. Before debating fixes or tool changes, you need the records that reconstruct what happened: the agreed scope of AI use and the evidence trail of prompts/inputs, outputs, versions, and human approvals. That enables audits, reviews, and post-incident investigations based on facts rather than assumptions.


Question 10

Topic: Module 8: Assurance, Metrics, and Continuous Improvement

A programme is using a generative AI assistant to draft weekly project status reports. The AI Assistance Plan (created in Foundation and operationalised in Activation) requires a monthly review of AI usage.

The first periodic review finds:

  • HITL approval is missing on several reports
  • Some prompts include client identifiers
  • The decision log does not reference AI-assisted content

What is the best next step in the AIPGF flow?

  • A. Expand AI use to more teams before doing any remediation
  • B. Approve continued use because no incidents have been reported
  • C. Defer action until the end-of-programme Evaluation stage
  • D. Log findings, assign corrective actions, update AI Assistance Plan, and re-audit

Best answer: D

Explanation: A periodic audit only improves assurance if findings are converted into owned actions and verified closure. Here, gaps in HITL, privacy handling, and auditability require updating the AI Assistance Plan and related controls, then scheduling a follow-up review to confirm the fixes work in practice. This aligns with continuous improvement at scale in the Evaluation loop.


Question 11

Topic: Module 3: Framework Structure and Controls

During the Evaluation stage of an AI-assisted programme, the assurance reviewer tests evidence for an AI tool used to draft stakeholder communications. They find the team cannot show an approved AI Assistance Plan or HITL review records for externally shared outputs.

The reviewer emails the Steering Committee the following note (excerpt):

Finding: Control weakness in AI governance
Details: Missing artefacts and insufficient traceability
Recommendation: Improve assurance posture and model controls
Appendix: 6 pages of technical observations

No required actions, owners, or decision request are stated. What is the most likely near-term impact?

  • A. The programme fails benefits realisation due to low adoption
  • B. Steering delays decisions because actions and owners are unclear
  • C. The AI tool’s outputs become biased as data drifts
  • D. External auditors issue a formal adverse opinion next year

Best answer: B

Explanation: Assurance findings need to be communicated in clear, decision-ready project language: what is wrong, the risk, the required action, who owns it, and by when. Here, the message is technical and non-actionable, so the Steering Committee cannot readily approve next steps or hold anyone accountable. The immediate consequence is decision friction and continued exposure from unresolved evidence gaps.


Question 12

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A vendor is contracted to produce an AI-assisted benefits map and draft business case for your programme using their own generative AI tooling. Internal assurance has stated that, during Evaluation, they must be able to trace what AI was used, what inputs were provided, who reviewed the outputs, and what was approved.

Which action should you AVOID to maintain auditability and traceability of the vendor’s AI-assisted deliverables?

  • A. Require an evidence pack covering prompts/inputs, tool/model version, output history, and human review notes
  • B. Include contract terms for right-to-audit and retention of AI-related records in the project repository
  • C. Accept only the final documents and allow the vendor to discard AI interaction records as proprietary
  • D. Update the AI Assistance Plan and name an accountable approver for vendor AI-assisted outputs

Best answer: C

Explanation: Auditability with third parties requires a verifiable trail from inputs and AI use through human review to approvals, retained in accessible records. If the vendor keeps only final outputs and discards the AI interaction history, assurance cannot test provenance, decision points, or accountability. This is a governance anti-pattern in AI-assisted delivery.


Question 13

Topic: Module 7: Lifecycle Governance (Initiate to Operate)

A project has used an AI assistant to draft requirements, status reports, and risk narratives. The solution is now being handed over to operations, and the sponsor has asked for evidence that AI-assisted outputs were appropriately governed and that benefits and issues are captured before the project is closed.

Which action is most aligned with the intent of AIPGF Stage 3 (Evaluation) and how it maps to project closure phases?

  • A. Expand team automation with new AI agent workflows
  • B. Create the initial AI Assistance Plan and approvals
  • C. Run a post-implementation evaluation and close-out review
  • D. Ask the AI tool vendor to sign off all deliverables

Best answer: C

Explanation: AIPGF Stage 3 (Evaluation) is about closing the loop: verifying that AI-assisted work delivered the intended benefits and stayed within agreed controls, and capturing auditable learning for future use. In project closure terms, it maps to post-implementation review/close-out activities, including evidence capture, lessons learned, and a clear handover of ongoing monitoring and ownership to operations.


Question 14

Topic: Module 4: Roles, Responsibilities, and Accountabilities

You are procuring an external AI-enabled documentation service to help draft project deliverables. Before onboarding, you review the AI Assistance Plan excerpt below.

Exhibit: AI Assistance Plan (excerpt)

AI use: Draft requirements & stakeholder comms
External party: DocuGen Services (vendor)
Requested access: SharePoint /Projects (read/write)
Data types: Contracts, bids, customer feedback
Data classification: Mixed (Confidential + Personal)
Approver listed: Project Manager
Access duration: Not specified

What is the best next action to align with least privilege and clear approvals?

  • A. Allow full SharePoint access but require the vendor to sign an NDA
  • B. Grant access now and capture approvals in a decision log after the first deliverable
  • C. Define a minimum dataset and get data-owner/security approvals before granting time-bound, read-only access
  • D. Approve access as requested to avoid delays

Best answer: C

Explanation: The exhibit shows a broad read/write request to a mixed-classification repository, with only the project manager listed as approver and no access duration. Least privilege requires scoping data and permissions to the minimum necessary and time-boxing access. Clear approvals require the appropriate data owner and security/privacy authorities to approve the sharing arrangement before access is granted.


Question 15

Topic: Module 4: Roles, Responsibilities, and Accountabilities

A supplier is contracted to draft a set of user requirements and a cost estimate using generative AI. The programme assurance lead will only accept the deliverables if they are auditable later (who used what tool, what inputs were provided, what outputs were accepted, and when).

Which artifact/evidence best validates that auditability and traceability controls are ready for this third-party contribution?

  • A. Signed AI Assistance Plan addendum requiring prompt/output logs and retention
  • B. RACI showing supplier Responsible and sponsor Accountable for deliverables
  • C. Risk register entry noting AI hallucination risk with mitigation “peer review”
  • D. Benefits tracker showing hours saved using the supplier’s AI

Best answer: A

Explanation: For third-party AI-assisted deliverables, auditability depends on agreed, verifiable records of how outputs were produced and approved. A signed AI Assistance Plan addendum can mandate prompt/input capture, versioning, decision/approval evidence, and retention/access rules that enable later audit and traceability. This provides readiness evidence before deliverables are accepted.


Question 16

Topic: Module 3: Framework Structure and Controls

In the APMG AI Project Governance Framework (AIPGF), what does a Human-in-the-Loop (HITL) control mean when governing AI-assisted outputs?

  • A. Automated acceptance of AI outputs once the tool meets a target accuracy level
  • B. A retrospective audit activity performed only after a problem is reported
  • C. A technical method for retraining the AI model using new project data
  • D. Defined review and approval checkpoints where accountable humans validate AI outputs before use

Best answer: D

Explanation: HITL in AIPGF is about purposeful human oversight of AI-assisted work products. It establishes checkpoints and assigns accountable reviewers/approvers who decide whether an AI-assisted output is acceptable for its intended use. This includes clarifying who can approve and when escalation is required.


Question 17

Topic: Module 2: AI in Projects and Organizations (Context)

A programme team uses a generative AI tool to draft weekly status reports and update the RAID log. In Activation, they skip the AI Assistance Plan items for: recording prompts/outputs, capturing model version, and assigning a Human-in-the-Loop approver.

Two weeks later, the tool produces different wording and risk ratings for the same underlying inputs after an automatic model update. The sponsor asks why the ratings changed and who approved them.

What is the most likely near-term impact of this governance omission?

  • A. Immediate cybersecurity breach caused by the model update
  • B. A gradual improvement in forecast accuracy as the model learns
  • C. Reduced auditability, triggering immediate stakeholder trust and approval delays
  • D. A long-term regulatory penalty for using AI in reporting

Best answer: C

Explanation: The changed outputs reflect a common AI uncertainty source: model updates and non-deterministic generation. When governance does not require traceability (prompt/output logs and model/version capture) and HITL approval, the team cannot explain or evidence decisions. The near-term consequence is loss of auditability, which quickly erodes stakeholder trust and slows approvals.


Question 18

Topic: Module 7: Lifecycle Governance (Initiate to Operate)

During the Foundation stage, a project plans to use a generative AI tool to draft a benefits map and first-cut requirements. The team identifies AI-specific risks (e.g., hallucinations, confidential data leakage) and records response actions and controls (e.g., HITL review, data handling rules) in the project risk register, assigning an owner to each risk.

In AIPGF terms, which role is typically accountable for ensuring these AI-related risks and planned responses are captured and owned in the project risk register?

  • A. Project Sponsor
  • B. Data owner (business owner of the datasets)
  • C. Project/Programme Manager
  • D. Independent assurance/audit function

Best answer: C

Explanation: Planning AI risk responses in Foundation means integrating AI risks into standard project risk management: documenting them, defining controls, and assigning owners. In AIPGF-aligned governance, this sits with the role accountable for the overall risk register and for embedding agreed controls into the project’s plans. Other roles may approve risk appetite, provide challenge, or approve data use, but they do not own the register end-to-end.


Question 19

Topic: Module 3: Framework Structure and Controls

You are in the Foundation stage of a customer service transformation project. The team proposes using an AI assistant to draft requirements, process maps, and a benefits forecast for a new triage process.

Constraints:

  • HITL is mandatory: a named business owner must approve any AI-assisted claims used for governance decisions.
  • Only the organisation’s approved internal AI tool may be used; the proposal currently cites benchmarks generated in an unapproved public tool.
  • Source data for the benchmarks is classified Confidential and must not be shared externally.
  • The next governance gate is Business Case sign-off by the AI Governance Board; stakeholder trust is low after a prior AI overpromise.

The draft business case includes predicted “25% cycle-time reduction” and “$1.2m annual savings” but provides no sources and no documented assumptions.

What is the BEST next action?

  • A. Apply the Business Case Review Checklist and create an Assumptions Log for AI-derived claims
  • B. Ask the AI assistant to regenerate the benefits model with more detailed calculations
  • C. Submit the business case and validate assumptions during Activation using sprint outcomes
  • D. Run a quick proof-of-concept using Confidential data in the public tool to improve accuracy

Best answer: A

Explanation: Before the Business Case sign-off gate, the priority is to prevent weak, AI-generated assertions from being treated as evidence. Using the Business Case Review Checklist highlights missing sources, accountability, and tool/data non-compliance. An Assumptions Log then makes AI-derived assumptions explicit, assigns owners, and defines how and when each will be validated under HITL.


Question 20

Topic: Module 4: Roles, Responsibilities, and Accountabilities

In AIPGF, what is the term for the artefact that documents how AI assistance will be used, including what project data may be shared with external tools/vendors, the required approvals, and least-privilege access rules?

  • A. Risk register
  • B. AI Assistance Plan
  • C. Decision log
  • D. RACI matrix

Best answer: B

Explanation: The AI Assistance Plan is the AIPGF artefact used to set clear, auditable guardrails for AI use in a project. It specifies what AI tools are permitted, what information can be shared externally, who must approve that sharing, and how access is limited using least privilege. This directly supports controlled third-party data access and accountability.
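As an illustration only, the kinds of guardrails an AI Assistance Plan records can be pictured as a structured checklist. The field names below are invented for this sketch and are not official AIPGF artefact structure; they simply mirror the elements described above (permitted tools, data boundaries, approvals, least privilege, logging):

```python
# Illustrative sketch only: field names are invented for this example,
# not official AIPGF terminology.
ai_assistance_plan_entry = {
    "use_case": "Draft weekly status report narrative",
    "approved_tool": "Organisation-approved internal AI assistant",
    "permitted_data": ["project schedule extract", "risk register export"],
    "prohibited_data": ["personal data", "client-confidential documents"],
    "access_scope": "read-only, time-bound (least privilege)",
    "hitl_approver": "Named PMO lead",
    "logging": "prompts, outputs, and model version retained for audit",
    "approvals_required": ["data owner", "security/privacy"],
}

# A simple completeness check a governance gate review might apply:
required_fields = {"use_case", "approved_tool", "hitl_approver", "logging"}
missing = required_fields - ai_assistance_plan_entry.keys()
assert not missing, f"Plan entry incomplete: {missing}"
```

The point of the sketch is the discipline, not the data format: every AI use case has named tools, data limits, a named approver, and evidence requirements before access is granted.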


Question 21

Topic: Module 4: Roles, Responsibilities, and Accountabilities

You are in the Activation stage of a programme using an AI writing assistant to help draft a procurement evaluation report (a formal project deliverable). The team wants to paste shortlisted supplier submissions into the tool to summarise strengths/weaknesses.

Constraints:

  • Data is classified Restricted and must stay within approved tools.
  • The programme’s HITL policy requires a named accountable approver for AI-assisted content in formal deliverables.
  • The Activation governance gate requires an approved AI Assistance Plan and independent assurance sign-off before AI-assisted drafting starts.
  • The sponsor is concerned about stakeholder trust if the report contains AI hallucinations.

What is the BEST next action?

  • A. Ask the AI Assurance/Quality role to review and sign off the AI Assistance Plan and HITL approval steps before any drafting starts.
  • B. Have the prompt engineer document the approach and approve the AI Assistance Plan to keep delivery moving.
  • C. Have the project manager approve the AI Assistance Plan and begin drafting, logging decisions as you go.
  • D. Proceed using a public AI tool but remove supplier names to reduce sensitivity.

Best answer: A

Explanation: Because the Activation gate requires independent assurance sign-off, the next step must be performed by an oversight/assurance role rather than the delivery team. Routing the AI Assistance Plan and HITL approval design to an AI assurance/quality function addresses accountability and stakeholder trust (evidence of checks against hallucinations) while ensuring Restricted data is handled within approved controls.


Question 22

Topic: Module 5: Principles for Responsible and Trustworthy AI

A programme team has completed the Foundation stage for using a generative AI assistant to draft weekly RAID updates and a first-cut status report. The tool has been selected and the team has agreed on HITL review by the PMO before anything is issued externally.

Exhibit: AI Assistance Plan (excerpt)

Use case: Draft RAID and status report narrative
Inputs: Project schedule extract, risk register export
Outputs: Draft text only (no auto-sending)
HITL: PMO reviews/edits; PM approves release
Gap noted by auditor: No record of prompts/outputs, data versions, or model version used

To apply the AIPGF principle of Transparency before moving into Activation, what is the best next step?

  • A. Start the rollout and rely on PMO review only
  • B. Define and implement an audit trail for AI use
  • C. Seek sponsor approval now; document traceability later
  • D. Wait until Evaluation to capture evidence and lessons learned

Best answer: B

Explanation: Transparency in AIPGF means stakeholders can understand and auditors can trace how AI-assisted outputs were produced. With a documented gap in records of prompts/outputs and versions, the next step (before Activation) is to put in place traceable, reviewable evidence of AI use. This enables assurance and accountability for what is issued.
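To make the audit-trail idea concrete, one traceable record of AI use could capture the elements the auditor flagged as missing: prompt, output, model version, and the HITL reviewer and decision. This is a hypothetical sketch; the function and field names are ours, not prescribed by AIPGF:

```python
from datetime import datetime, timezone

# Illustrative sketch only: record fields are invented for this example.
def log_ai_use(prompt, output, model_version, reviewer, approved):
    """Build one traceable record of AI-assisted drafting (Transparency)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "model_version": model_version,
        "hitl_reviewer": reviewer,
        "approved": approved,
    }

record = log_ai_use(
    prompt="Summarise this week's RAID log changes",
    output="Draft status narrative...",
    model_version="vendor-model v2025-06",
    reviewer="PMO lead",
    approved=True,
)
```

With records like this retained, an assurance reviewer can reconstruct what the AI produced, which model version produced it, and who approved its release.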


Question 23

Topic: Module 2: AI in Projects and Organizations (Context)

In the APMG AI Project Governance Framework (AIPGF), which core artifact is used to reduce shadow AI use by documenting which AI tools are approved, what project tasks they may support, what data may be shared, and what training/monitoring controls apply?

  • A. Decision log
  • B. RACI matrix
  • C. Project risk register
  • D. AI Assistance Plan

Best answer: D

Explanation: Shadow AI risk is best controlled by making AI use explicit, approved, and auditable. In AIPGF, the AI Assistance Plan captures the agreed boundaries for AI assistance (approved tools, permitted use cases, data handling constraints) and the enabling controls such as training, monitoring, and human oversight.


Question 24

Topic: Module 1: Foundations of AI Project Governance (AIPGF)

A project team will use an enterprise-approved generative AI tool to draft stakeholder communications and meeting minutes. The enterprise AI policy already states which tools are allowed and bans uploading confidential data.

The client is sceptical about AI and requires a clear audit trail showing what AI produced, who reviewed it, and who approved release to stakeholders.

Which action is the most appropriate project-level AI governance response under AIPGF?

  • A. Create an AI Assistance Plan with HITL approvals and output logging
  • B. Escalate to the enterprise AI board to change AI policy
  • C. Treat AI use as a normal change request via project change control
  • D. Implement enterprise-wide model monitoring and a central model registry

Best answer: A

Explanation: Because the enterprise policy and tool selection are already set, the remaining gap is project-specific assurance for this use case. An AI Assistance Plan that defines HITL review/approval and keeps an auditable record of prompts/outputs provides the transparency and accountability the client is demanding. This distinguishes AIPGF project-level governance from enterprise AI governance and from general project governance routines.

Revised on Sunday, April 26, 2026