Try 10 focused AIPGF Foundation questions on Lifecycle Governance (Initiate to Operate), with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Foundation |
| Topic area | Lifecycle Governance (Initiate to Operate) |
| Blueprint weight | 13% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Lifecycle Governance (Initiate to Operate) for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Lifecycle Governance (Initiate to Operate)
You are in Stage 1: Foundation for a client programme to redesign a customer-service process. The team wants to use AI to draft early deliverables (requirements, stakeholder messages, and the initial RAID log).
Constraints:
- Only organisation-approved AI tools may be used.
- Project materials are classified Confidential–Client.
- The client requires Human-in-the-Loop (HITL) review of AI-assisted outputs.
- A governance gate review is imminent.

What is the best next action?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In Foundation, the priority is to define and evidence how AI will be used safely and accountably before producing AI-assisted deliverables. An AI Assistance Plan, supported by clarification questions and assumptions, captures approved tools, data-handling constraints, and HITL decision rights. This is the clearest artefact set to satisfy the upcoming governance gate and stakeholder trust needs.
Stage 1 (Foundation) is where you make AI use explicit, governable, and auditable before the team relies on AI for project outputs. Given the approved-tool and Confidential–Client constraints, plus the client’s HITL requirement and an imminent governance gate, the next action is to produce the Stage 1 deliverables that define intended AI use and controls and then seek approval.
The AI Assistance Plan should capture, at minimum:
- The approved tools (and versions) the team may use.
- Data-handling constraints for Confidential–Client material.
- HITL decision rights: who reviews and approves each AI-assisted deliverable, and when.
- Open clarification questions and documented assumptions.
This creates the evidence needed for the gate and prevents uncontrolled AI use.
Stage 1 requires documenting intended AI use, constraints, and HITL controls as evidence for the Foundation gate before using AI on project deliverables.
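To make the artefact concrete, here is a minimal sketch of how the plan’s contents could be captured as a structured record. AIPGF does not prescribe a format, so every field name below is an assumption drawn from the explanation above.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssistancePlan:
    approved_tools: list[str]            # tools/versions cleared for this work
    data_classification: str             # e.g. "Confidential-Client"
    prohibited_inputs: list[str]         # data that must never be entered
    hitl_checkpoints: list[str]          # where human review/sign-off applies
    decision_rights: dict[str, str]      # deliverable -> accountable approver
    clarification_questions: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)

# Illustrative instance only; tool and deliverable names are hypothetical.
plan = AIAssistancePlan(
    approved_tools=["enterprise-genai v2"],
    data_classification="Confidential-Client",
    prohibited_inputs=["personal data", "client commercial terms"],
    hitl_checkpoints=["requirements draft", "stakeholder messages", "RAID log"],
    decision_rights={"requirements draft": "project manager"},
)
print(plan.hitl_checkpoints)
```

A record like this doubles as gate evidence: the reviewer can see what was approved, what is prohibited, and where humans sign off.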
Topic: Lifecycle Governance (Initiate to Operate)
During the Activation stage of a public-sector programme, the team uses a generative AI assistant to draft weekly status reports and requirements summaries. In Sprint 3, the tool hallucinated a “mandatory” compliance requirement; a business analyst caught it before it reached the Steering Group. The sponsor still wants the speed benefits but says stakeholder trust is at risk if AI issues are hidden.
Which action best balances keeping stakeholders informed about AI benefits and red flags with proportionate governance controls?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Activation governance needs ongoing, plain-language transparency about where AI is used, what value it delivers, and what has gone wrong (or nearly went wrong). A lightweight, recurring update plus clear HITL checks and an auditable decision trail keeps stakeholders informed without overloading them or slowing delivery unnecessarily.
Keeping stakeholders informed “throughout delivery” means making AI use visible as an operational practice, not a one-off disclosure. In Activation, the balanced approach is to build a small, regular governance touchpoint (e.g., in sprint reviews/Steering packs) that reports AI-enabled benefits and highlights red flags such as hallucinations, data handling issues, or control breaches, alongside what the team changed in response.
This should be supported by proportionate controls:
- HITL review of AI-assisted content before it reaches the Steering Group.
- An auditable decision trail showing what the AI produced and what humans changed or approved.
- Routine logging of red flags (hallucinations, data-handling issues, control breaches) with the response taken.
This increases trust and enables timely intervention without creating unnecessary administrative drag or exposing sensitive operational details.
It maintains Transparency and trust by routinely sharing benefits and red flags while keeping assurance proportionate through HITL controls and auditable decisions.
Topic: Lifecycle Governance (Initiate to Operate)
You are in the Foundation stage of a programme to redesign a public-sector contact centre. The team wants to use a generative AI tool to draft user stories from past complaint transcripts.
Exhibit: AI Assistance Plan (draft excerpt)
Intended AI use: Summarise complaint transcripts into themes and draft user stories
Tool: Public, web-based GenAI service (no enterprise agreement)
Inputs: 6,000 complaint transcripts with names, phone numbers, and case IDs
Data note: Contract with service provider states “customer data must not be uploaded to third-party services”
Outputs: User stories and acceptance criteria for backlog
Which action is the most appropriate AIPGF control at this point?
Best answer: B
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: The decisive factor is that the intended inputs contain sensitive personal data and the contract explicitly prohibits uploading customer data to third parties. In the Foundation stage, AIPGF expects a proportionate data availability/quality/rights assessment and documented approval decision before any AI processing occurs. If the rights or controls are not available, the AI use must be redesigned (e.g., approved environment or different data).
In AIPGF Foundation, you must establish whether the data you intend to use is available, of sufficient quality for the purpose, and—critically—whether you have the rights and approvals to process it with the chosen AI tool. Here, the inputs contain identifiable personal data and the contract prohibits uploading customer data to third-party services; using a public web-based GenAI tool would breach the stated constraint and undermine confidentiality and trust.
The appropriate control is to:
- Run a proportionate assessment of data availability, quality, and rights before any AI processing.
- Document the approval decision, and who made it, as Foundation evidence.
- Block the proposed use and re-scope it (e.g., an approved environment, or redacted/non-customer data) because the contract prohibits the intended transfer.
HITL review improves output quality, but it does not resolve prohibited data transfer or missing data-use rights.
Foundation governance must confirm data rights and confidentiality constraints before using AI with sensitive inputs, and block or re-scope use if rights or controls are not in place.
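As an illustration of the “block or re-scope” control, the sketch below gates AI processing on confirmed data rights and an approved environment, and redacts obvious identifiers. The field names, the case-ID format, and the regex patterns are hypothetical; a real implementation would use the organisation’s classification scheme and a vetted PII-redaction service (which would also handle names).

```python
import re

# Ad hoc patterns for the identifier types named in the exhibit (phone
# numbers, case IDs). The case-ID format is an assumption; names would need
# a proper PII-detection service, not a regex.
REDACTION_PATTERNS = {
    "phone": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
    "case_id": re.compile(r"\bCASE-\d{4,}\b"),
}

def may_process(source: dict) -> bool:
    """Foundation gate: block AI use without rights and an approved environment."""
    return (
        source.get("rights_confirmed", False)        # contractual/legal rights exist
        and source.get("environment") == "approved"  # not a public web tool
    )

def redact(text: str) -> str:
    """Replace obvious identifiers before any AI processing."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

# The scenario's setup: public web tool, no confirmed rights -> blocked.
transcript_source = {"rights_confirmed": False, "environment": "public-web"}
assert not may_process(transcript_source)
print(redact("Caller on 0113 496 0102 about CASE-20481"))
```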
Topic: Lifecycle Governance (Initiate to Operate)
During Foundation, a programme team wants an AI tool to help draft project deliverables (PID sections, supplier evaluation notes, and stakeholder updates). Drafts will include commercially sensitive information and some personal data from stakeholder registers.
Governance constraints set by the organisation:
- Sensitive or personal data must not be sent to uncontrolled third-party services.
- AI-assisted work must be auditable (prompt/output capture, retention, access control).
- An accountable human must review and approve AI-assisted content before external release.
Which tool-selection approach should the project NOT use?
Best answer: C
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: Tool selection in AIPGF Foundation must satisfy constraints and enable assurance, not just improve speed. Where deliverables contain sensitive or personal data, governance requires controlled services, audit trails, and clear HITL approvals before release. A consumer chatbot used informally is the clearest anti-pattern because it undermines confidentiality and evidencing of AI-assisted outputs.
In the AIPGF Foundation stage, selecting AI tools is a governance decision: the tool must fit the work while meeting constraints for confidentiality, auditability, and accountable approval. Because AI outputs can be non-deterministic, assurance depends on being able to evidence how outputs were produced (prompt/output capture, retention, access control) and to apply Human-in-the-Loop review before anything is issued externally.
A public consumer chatbot used without organisational controls is typically an uncontrolled third-party service: it can create data leakage and makes it hard to retain reliable records for assurance. By contrast, contracted enterprise services, self-hosted deployments, and workflows that include redaction plus an AI Assistance Plan and explicit human approvals are all compatible with the stated constraints.
The informal consumer-chatbot approach breaches the constraint against sending sensitive data to uncontrolled third parties and provides weak auditability for assurance.
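To show what “evidence how outputs were produced” can look like in practice, here is a minimal audit-record sketch: it captures tool, version, prompt, a hash of the output, and the HITL approver before external release. None of these field names come from AIPGF itself; they are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(tool: str, version: str, prompt: str, output: str,
                 approver: Optional[str]) -> dict:
    """Capture the evidence needed to show how an AI output was produced."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "tool_version": version,
        "prompt": prompt,                               # retained for assurance
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "hitl_approver": approver,  # None = not yet approved for external release
    }

record = audit_record("enterprise-genai", "2.4",
                      "Draft the PID quality section from these notes...",
                      "Generated draft text...", approver=None)
print(json.dumps(record, indent=2))
```

A consumer chatbot used informally produces none of this trail, which is exactly why it fails the assurance test.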
Topic: Lifecycle Governance (Initiate to Operate)
A project used a generative AI assistant to draft meeting minutes and first-pass requirements. In the Evaluation stage, the team found:
- Weak or inconsistent prompts that produced avoidable rework.
- HITL review applied unevenly across deliverables.
- Variable data-handling practice between team members.
- Gaps in team training for AI-assisted drafting and review.
The issues have been logged and validated. To drive improvement for future projects, what is the BEST next step?
Best answer: B
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In AIPGF Stage 3 (Evaluation), the goal is to turn evidence from outcomes and incidents into concrete improvement actions. Here, the findings point to better prompts, better training, and stronger, more consistent HITL and data-handling controls. The best next step is to update the reusable governance assets so the next project’s Foundation stage starts with improved guidance and guardrails.
Stage 3 (Evaluation) is where you capture what happened, why it happened, and what to change so future AI-assisted work is safer and more reliable. Because the issues are already logged and validated, the next step is to convert them into specific, reusable improvements that will be adopted at the start of the next lifecycle.
Typical Evaluation-to-Foundation improvements include:
- Prompt guidance and reusable prompt patterns for common deliverables.
- Training in AI-assisted drafting and review.
- Stronger, more consistent HITL and data-handling controls.
- An updated AI Assistance Plan template (scope, decision rights, HITL checkpoints).

Expanding rollout or closing without embedding these improvements either skips prerequisites or fails to realise organisational learning.
Evaluation learning should be converted into reusable governance assets (prompt guidance, training, and controls) for future Foundation planning.
Topic: Lifecycle Governance (Initiate to Operate)
A programme team uses a generative AI assistant to draft a business case and stakeholder updates. During assurance, no one can show which tool version was used, prompts are not recorded, and there is no evidence of HITL review before content was sent externally. The sponsor pauses approvals due to stakeholder distrust, and the expected time-saving benefits are not being realised.
Which is the most likely underlying cause?
Exhibit: Stage 1 records found
AI Assistance Plan: Not created
Clarification questions log: None
Assumptions log: None
Decision log (AI use): None
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: The symptoms point to a governance setup failure in Stage 1 (Foundation), not a one-off execution problem. If the AI Assistance Plan and supporting clarification/assumptions/decision logs are missing, there is no agreed basis for approved tools, prompt/usage recording, or HITL approval evidence—driving distrust, stalled sign-off, and benefit shortfalls.
In AIPGF Stage 1 (Foundation), AI-assisted work needs explicit, agreed governance before AI is used to create project deliverables. An AI Assistance Plan and supporting logs (clarification questions, assumptions, and key decisions about AI use) provide the baseline for which tools may be used, what data may be entered, what must be recorded (prompts, versions, outputs), and where HITL review and sign-off are required. In the scenario, every breakdown (unapproved tool use, undocumented prompts, missing HITL evidence, and eroding stakeholder trust) is consistent with those Stage 1 artefacts never being created and approved, leaving the team to improvise controls and making assurance impossible. The root cause is missing Foundation deliverables, not simply poor AI output quality or stakeholder behaviour.
Without an AI Assistance Plan and supporting logs, tool choice, HITL sign-off, and traceability controls remain ad hoc and un-auditable.
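A simple way to picture this root cause is as a failed completeness check at the Foundation gate. The artefact names below come from the exhibit; the gating function itself is an illustrative assumption, not an AIPGF-defined mechanism.

```python
# Required Stage 1 artefacts, as listed in the exhibit.
REQUIRED_STAGE1_ARTEFACTS = [
    "AI Assistance Plan",
    "Clarification questions log",
    "Assumptions log",
    "Decision log (AI use)",
]

def gate_ai_use(records: dict[str, bool]) -> list[str]:
    """Return missing artefacts; AI use should not start while any remain."""
    return [a for a in REQUIRED_STAGE1_ARTEFACTS if not records.get(a, False)]

# The exhibit's records: nothing was created, so AI-assisted drafting
# should have been blocked at the Foundation gate.
missing = gate_ai_use({a: False for a in REQUIRED_STAGE1_ARTEFACTS})
print(missing)
```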
Topic: Lifecycle Governance (Initiate to Operate)
You are initiating an AI-assisted reporting capability for a programme office. The AI will draft weekly status updates and highlight emerging risks by summarising several internal project data sources of varying sensitivity.
A pilot is due in 2 weeks. There is no approved data-processing agreement for external AI tools. An internal AI environment is available but needs a curated knowledge base.
Which approach best balances effective AI use with required governance over data availability, quality, and rights during the Foundation stage?
Best answer: A
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In Foundation, governance must confirm what data is available, fit for purpose, and lawful/contractually permitted for the intended AI use. The best-balanced approach classifies and inventories data, validates rights and sensitivity constraints, and then curates/redacts inputs so the pilot can run safely in an approved internal environment with Human-in-the-Loop review.
This scenario is primarily a data rights and suitability problem, not a model-quality problem. In the Foundation stage you should establish whether each intended source can be used for the stated AI purpose (rights, confidentiality, privacy), and whether the data quality is sufficient (completeness, consistency, timeliness) before enabling access.
A proportionate, effective approach is to:
- Inventory and classify the intended data sources (sensitivity, ownership, rights).
- Validate that rights and confidentiality constraints permit each source’s intended AI use.
- Curate and redact inputs into the internal environment’s knowledge base.
- Run the pilot in the approved internal environment with HITL review of outputs.
This achieves a usable pilot within constraints while maintaining confidentiality and auditability; simply moving faster by uploading sensitive data externally undermines required governance controls.
It verifies data rights and sensitivity, improves input quality via curation/redaction, and enables timely value using an internal environment with appropriate controls.
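The sketch below illustrates the data-gating step described above: inventory the sources, then admit only those with confirmed rights and acceptable sensitivity into the pilot. The source names, classifications, and admission rule are illustrative assumptions.

```python
# Hypothetical source inventory for the pilot knowledge base.
SOURCES = [
    {"name": "weekly status logs", "classification": "internal",     "rights": True},
    {"name": "risk register",      "classification": "internal",     "rights": True},
    {"name": "supplier contracts", "classification": "confidential", "rights": False},
]

def pilot_inputs(sources: list[dict]) -> list[str]:
    """Admit only sources with confirmed rights and acceptable sensitivity."""
    return [
        s["name"] for s in sources
        if s["rights"] and s["classification"] != "confidential"
    ]

print(pilot_inputs(SOURCES))  # supplier contracts are excluded from the pilot
```

Admitted sources would then be curated and redacted before loading into the internal environment.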
Topic: Lifecycle Governance (Initiate to Operate)
During Activation, a project team uses an AI assistant to draft the monthly portfolio status report. The report has already been sent to the steering committee, who use it to approve funding releases. A reviewer then finds two problems: the AI included a table with supplier rate-card figures marked “confidential,” and it misreported a critical milestone as complete.
Which action is MOST appropriate to take next under AIPGF?
Best answer: D
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: This is an AI-related incident involving sensitive information exposure and a high-impact decision artefact, so the priority is containment and formal escalation. Withdrawing the report and suspending the AI assistant reduces further harm while the team records evidence, coordinates communication, and produces a corrected, human-approved version. This aligns with AIPGF’s need for clear accountability, auditability, and HITL control over decision-support outputs.
In AIPGF Activation, when an AI-assisted deliverable causes harm (e.g., confidentiality breach and incorrect status used for funding decisions), treat it as an incident, not just a quality defect. The first move is containment: stop further distribution and prevent repeat generation (pause the tool/access or switch to a manual path). Next, ensure auditability by recording what happened (inputs/outputs, timestamps, affected stakeholders), then escalate to the accountable governance roles (e.g., project manager/sponsor plus InfoSec or commercial confidentiality owner) so communication and corrective actions are authorised. Finally, correct and reissue the report under HITL and appropriate approvals, and update relevant controls (AI Assistance Plan, risk/issue logs) before resuming use. The key takeaway is “contain, escalate, correct, then restore” with evidence and decision rights preserved.
This contains the breach, preserves auditability, triggers escalation, and enables rollback/correction with accountable approval.
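One way to keep the “contain, escalate, correct, then restore” sequence auditable is to treat it as an ordered, evidenced checklist. The steps below restate the explanation above; encoding them as code is purely illustrative.

```python
# Incident-response sequence from the explanation above, in order.
CONTAINMENT_STEPS = [
    "withdraw the distributed report and notify recipients",
    "suspend the AI assistant / revoke access pending review",
    "record evidence: inputs, outputs, timestamps, affected stakeholders",
    "escalate to accountable roles (PM/sponsor, InfoSec, commercial owner)",
    "reissue a corrected report under HITL approval",
    "update the AI Assistance Plan and risk/issue logs before resuming use",
]

def run_incident_response(steps: list[str]) -> None:
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}: {step}")  # in practice, each step is evidenced and logged

run_incident_response(CONTAINMENT_STEPS)
```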
Topic: Lifecycle Governance (Initiate to Operate)
A programme has used a generative AI assistant throughout delivery to draft status reports, summarise workshops, and propose risk responses. The final deliverables have been accepted, and the sponsor asks the programme manager to “close the programme this week.”
You are starting AIPGF Stage 3 (Evaluation) activities and realise the evaluation approach was not clearly documented during earlier stages.
Which clarifying question should you ask first to ensure Stage 3 (Evaluation) is properly completed and aligned to project closure?
Best answer: B
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: AIPGF Stage 3 (Evaluation) is about confirming whether the initiative achieved its intended outcomes, whether AI-assisted deliverables were acceptable against agreed criteria, and what lessons should be captured for future use. In closure terms, it maps to acceptance/sign-off evidence, benefits/controls confirmation, and retrospective/knowledge capture before handing over or shutting down AI assistance.
Stage 3 (Evaluation) provides assurance at the end of the AI-assisted work: did the project/programme achieve the intended outcomes, were AI-assisted outputs fit for purpose, and what should be learned and carried forward. This aligns directly to closure activities such as confirming acceptance criteria, compiling closure evidence (including any AI-specific assurance), updating benefits tracking/risks for post-closure ownership, and capturing lessons learned.
If the evaluation approach was not clearly set earlier, the first step is to clarify the success criteria and what evidence is required for sign-off, explicitly covering AI-assisted outputs and how learning will be recorded. Without that, “closing this week” risks closing without demonstrating outcomes or capturing learning, undermining transparency and future governance.
Questions about future tools, broader data access, or skipping learning activities do not establish whether closure is justified.
Stage 3 (Evaluation) confirms outcomes against agreed criteria and captures learning before formal closure and handover.
Topic: Lifecycle Governance (Initiate to Operate)
A programme used an AI assistant to recommend which change requests (CRs) to prioritise each month. At Stage 3 (Evaluation), the closure report notes concerns that CRs from smaller regions were consistently ranked lower.
Exhibit: Evaluation evidence (excerpt)
Inputs used by AI: CR text, cost estimate, benefit estimate, requester region
Output: Priority score 1–100 with short rationale
Controls used: Project manager approval of monthly priority list
Issue raised: “Region appears to influence ranking; rationale is generic”
Which action best balances ethical, efficient, and effective AI use with required governance controls?
Best answer: C
What this tests: Lifecycle Governance (Initiate to Operate)
Explanation: In Evaluation, the right optimisation is to keep benefits of AI assistance while closing assurance gaps. A targeted bias and traceability review tests whether the decision process is fair, and improves transparency by strengthening the audit trail and explanations. Updating the AI Assistance Plan and HITL rules makes accountability explicit without unnecessarily discarding the tool.
In AIPGF Evaluation, you confirm whether AI-assisted decisions were trustworthy and learn what governance needs tightening before reuse or scale-out. Here, the evidence suggests a potential fairness issue (region influencing outcomes) and weak transparency (generic rationales), so the evaluation should add assurance rather than simply accept results.
A balanced response is to:
- Run a targeted bias test on whether requester region influences priority scores.
- Strengthen traceability: require specific, decision-level rationales and retain an audit trail for each ranking.
- Update the AI Assistance Plan and HITL rules so accountability for priority decisions is explicit.
- Keep human approval of the monthly priority list while these checks bed in.
This preserves efficiency while strengthening Human-Centricity, Transparency, and Accountability.
It adds proportionate fairness testing and auditability (decision trace) while keeping human approval and improving transparency for future use.
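A proportionate fairness probe could start as simply as comparing mean priority scores across requester regions, as sketched below. The scores and the 10-point disparity threshold are invented for illustration; a real review would use the full CR history and an agreed statistical test.

```python
from statistics import mean

# Invented monthly priority scores grouped by requester region.
cr_scores = {
    "region_a_large": [82, 78, 90, 74],
    "region_b_large": [80, 85, 77, 88],
    "region_c_small": [55, 61, 49, 58],   # the "consistently lower" pattern
}

def disparity_by_group(scores: dict, threshold: float = 10.0):
    """Flag when mean scores differ across groups by more than the threshold."""
    means = {group: mean(vals) for group, vals in scores.items()}
    spread = max(means.values()) - min(means.values())
    return means, spread, spread > threshold

means, spread, flagged = disparity_by_group(cr_scores)
if flagged:
    print(f"Mean priority spread {spread:.1f} exceeds threshold; "
          f"escalate for bias review and require specific rationales.")
```

A flagged result does not prove bias, but it is exactly the kind of evidence that justifies the deeper traceability and HITL changes described above.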
Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.