AIPGF Foundation: Lifecycle Governance (Initiate to Operate)

Try 10 focused AIPGF Foundation questions on Lifecycle Governance (Initiate to Operate), with answers and explanations, then continue with PM Mastery.

Topic snapshot

Field              Detail
Exam route         AIPGF Foundation
Topic area         Lifecycle Governance (Initiate to Operate)
Blueprint weight   13%
Page purpose       Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Lifecycle Governance (Initiate to Operate) for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Lifecycle Governance (Initiate to Operate)

You are in Stage 1: Foundation for a client programme to redesign a customer-service process. The team wants to use AI to draft early deliverables (requirements, stakeholder messages, and the initial RAID log).

Constraints:

  • Only the organisation’s approved internal AI sandbox may be used (no public tools).
  • Source content is classified Confidential–Client and must not leave the tenant.
  • The client requires HITL: a named human owner must approve any AI-assisted content before it is shared.
  • A Foundation governance gate next week requires evidence of planned AI use and controls to build stakeholder trust.

What is the best next action?

  • A. Ask SMEs to review AI outputs, without documenting AI use cases or assumptions
  • B. Start drafting deliverables in a public generative AI tool using redacted text
  • C. Proceed to Activation with a limited pilot, then document controls if issues arise
  • D. Create an AI Assistance Plan and supporting clarification/assumptions logs for approval

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In Foundation, the priority is to define and evidence how AI will be used safely and accountably before producing AI-assisted deliverables. An AI Assistance Plan, supported by clarification questions and assumptions, captures approved tools, data-handling constraints, and HITL decision rights. This is the clearest artefact set to satisfy the upcoming governance gate and stakeholder trust needs.

Stage 1 (Foundation) is where you make AI use explicit, governable, and auditable before the team relies on AI for project outputs. Given the approved-tool and Confidential–Client constraints, plus the client’s HITL requirement and an imminent governance gate, the next action is to produce the Stage 1 deliverables that define intended AI use and controls and then seek approval.

The AI Assistance Plan should capture, at minimum:

  • intended AI use cases and excluded uses
  • approved tool(s) and prohibited tools
  • data classification/handling rules and prompt/content boundaries
  • HITL roles: who reviews, who approves, and what gets logged

This creates the evidence needed for the gate and prevents uncontrolled AI use.
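In practice, the plan can begin as a small structured record that the governance gate reviews. Below is a minimal sketch in Python; every field name (approved_tools, hitl_approver, and so on) is an illustrative assumption, not an AIPGF-defined schema.

from dataclasses import dataclass, field

@dataclass
class AIAssistancePlan:
    """Minimal Stage 1 record; field names are illustrative, not AIPGF-prescribed."""
    use_cases: list[str]          # intended AI uses
    excluded_uses: list[str]      # explicitly out of scope
    approved_tools: list[str]     # e.g. the internal sandbox only
    prohibited_tools: list[str]   # e.g. public web chatbots
    data_rules: str               # classification and prompt/content boundaries
    hitl_reviewer: str            # named human who reviews AI-assisted content
    hitl_approver: str            # named human who approves before sharing
    logged_items: list[str] = field(
        default_factory=lambda: ["prompts", "outputs", "approvals"])

plan = AIAssistancePlan(
    use_cases=["Draft requirements", "Draft stakeholder messages", "Draft initial RAID log"],
    excluded_uses=["Client-facing release without named approval"],
    approved_tools=["internal-ai-sandbox"],
    prohibited_tools=["public generative AI services"],
    data_rules="Confidential-Client content must not leave the tenant",
    hitl_reviewer="Business analyst",
    hitl_approver="Project manager",
)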

Stage 1 requires documenting intended AI use, constraints, and HITL controls as evidence for the Foundation gate before using AI on project deliverables.


Question 2

Topic: Lifecycle Governance (Initiate to Operate)

During the Activation stage of a public-sector programme, the team uses a generative AI assistant to draft weekly status reports and requirements summaries. In Sprint 3, the tool hallucinated a “mandatory” compliance requirement; a business analyst caught it before it reached the Steering Group. The sponsor still wants the speed benefits but says stakeholder trust is at risk if AI issues are hidden.

Which action best balances keeping stakeholders informed about AI benefits and red flags with proportionate governance controls?

  • A. Add a short recurring AI-use update to delivery governance (what AI helped, value realised, known limits, incidents/near-misses, and what controls/HITL checks changed) and record material AI-related decisions in the decision log
  • B. Avoid discussing AI use unless stakeholders ask, to prevent unnecessary concern
  • C. Publish the full prompt library and all raw AI outputs to stakeholders to maximise transparency
  • D. Require Steering Group approval before any AI-assisted internal draft can be circulated within the team

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Activation governance needs ongoing, plain-language transparency about where AI is used, what value it delivers, and what has gone wrong (or nearly went wrong). A lightweight, recurring update plus clear HITL checks and an auditable decision trail keeps stakeholders informed without overloading them or slowing delivery unnecessarily.

Keeping stakeholders informed “throughout delivery” means making AI use visible as an operational practice, not a one-off disclosure. In Activation, the balanced approach is to build a small, regular governance touchpoint (e.g., in sprint reviews/Steering packs) that reports AI-enabled benefits and highlights red flags such as hallucinations, data handling issues, or control breaches, alongside what the team changed in response.

This should be supported by proportionate controls:

  • Maintain/update the AI Assistance Plan for current use cases
  • Apply HITL review for stakeholder-facing outputs
  • Record material AI-related decisions, exceptions, and incidents in auditable logs

This increases trust and enables timely intervention without creating unnecessary administrative drag or exposing sensitive operational details.
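One lightweight way to keep those decisions auditable is an append-only log. The sketch below is a minimal illustration in Python, assuming a hypothetical JSONL file and record fields; a real programme would use whatever decision-log tooling its governance already mandates.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # hypothetical location

def log_ai_event(summary: str, decided_by: str, kind: str = "decision") -> None:
    """Append one auditable record; the fields are illustrative, not an AIPGF schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "kind": kind,              # "decision", "exception", or "incident"
        "summary": summary,
        "decided_by": decided_by,  # the accountable human, per HITL
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_event(
    "Sprint 3 hallucinated compliance requirement caught in review; "
    "source-check step added to status-report HITL",
    decided_by="Project manager",
    kind="incident",
)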

It maintains Transparency and trust by routinely sharing benefits and red flags while keeping assurance proportionate through HITL controls and auditable decisions.


Question 3

Topic: Lifecycle Governance (Initiate to Operate)

You are in the Foundation stage of a programme to redesign a public-sector contact centre. The team wants to use a generative AI tool to draft user stories from past complaint transcripts.

Exhibit: AI Assistance Plan (draft excerpt)

Intended AI use: Summarise complaint transcripts into themes and draft user stories
Tool: Public, web-based GenAI service (no enterprise agreement)
Inputs: 6,000 complaint transcripts with names, phone numbers, and case IDs
Data note: Contract with service provider states “customer data must not be uploaded to third-party services”
Outputs: User stories and acceptance criteria for backlog

Which action is the most appropriate AIPGF control at this point?

  • A. Ask the supplier to provide a generic assurance statement about their model security
  • B. Complete a data rights/classification check and obtain data owner approval or change to an approved, secure data environment before any upload
  • C. Proceed and rely on HITL review of the generated user stories
  • D. Defer the decision until the Evaluation stage and measure output accuracy first

Best answer: B

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: The decisive factor is that the intended inputs contain sensitive personal data and the contract explicitly prohibits uploading customer data to third parties. In the Foundation stage, AIPGF expects a proportionate data availability/quality/rights assessment and documented approval decision before any AI processing occurs. If the rights or controls are not available, the AI use must be redesigned (e.g., approved environment or different data).

In AIPGF Foundation, you must establish whether the data you intend to use is available, of sufficient quality for the purpose, and—critically—whether you have the rights and approvals to process it with the chosen AI tool. Here, the inputs contain identifiable personal data and the contract prohibits uploading customer data to third-party services; using a public web-based GenAI tool would breach the stated constraint and undermine confidentiality and trust.

The appropriate control is to:

  • classify the data and confirm usage rights/constraints with the data owner
  • decide whether an approved secure environment/tooling is required
  • document the decision and required controls in the AI Assistance Plan (and related logs)

HITL review improves output quality, but it does not resolve prohibited data transfer or missing data-use rights.
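The block-or-re-scope decision can even be expressed as a simple gate. A minimal sketch follows, assuming hypothetical classification labels and a data-owner approval flag; real classification schemes and contract terms will differ.

# Hypothetical labels; real schemes and contract clauses will differ.
PROHIBITED_ON_THIRD_PARTY = {"personal-data", "confidential-client"}

def may_process(classification: str, tool_is_third_party: bool,
                owner_approved: bool) -> bool:
    """Block AI processing that breaches data rules or lacks owner approval."""
    if tool_is_third_party and classification in PROHIBITED_ON_THIRD_PARTY:
        return False  # the contract bars customer data on third-party services
    return owner_approved

# The Question 3 scenario: identifiable transcripts sent to a public GenAI tool
print(may_process("personal-data", tool_is_third_party=True, owner_approved=True))   # False
# Re-scoped: the same data inside an approved secure environment
print(may_process("personal-data", tool_is_third_party=False, owner_approved=True))  # True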

Foundation governance must confirm data rights and confidentiality constraints before using AI with sensitive inputs, and block or re-scope use if rights or controls are not in place.


Question 4

Topic: Lifecycle Governance (Initiate to Operate)

During Foundation, a programme team wants an AI tool to help draft project deliverables (PID sections, supplier evaluation notes, and stakeholder updates). Drafts will include commercially sensitive information and some personal data from stakeholder registers.

Governance constraints set by the organisation:

  • No confidential/personal data may be sent to uncontrolled third-party services
  • Prompts/outputs must be retained for auditability and assurance
  • All AI-generated content must be reviewed and approved by a human before release (HITL)

Which tool-selection approach should the project NOT use?

  • A. Use a tool that supports redaction plus a documented AI Assistance Plan and human approval workflow
  • B. Use a self-hosted/on-prem model with access controls and centralised prompt/output logging
  • C. Adopt a public, consumer chatbot and let staff paste content, relying on manual copy-editing later
  • D. Use an enterprise AI service under contract with logging, no training on prompts, and defined retention

Best answer: C

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: Tool selection in AIPGF Foundation must satisfy constraints and enable assurance, not just improve speed. Where deliverables contain sensitive or personal data, governance requires controlled services, audit trails, and clear HITL approvals before release. A consumer chatbot used informally is the clearest anti-pattern because it undermines confidentiality and evidencing of AI-assisted outputs.

In the AIPGF Foundation stage, selecting AI tools is a governance decision: the tool must fit the work while meeting constraints for confidentiality, auditability, and accountable approval. Because AI outputs can be non-deterministic, assurance depends on being able to evidence how outputs were produced (prompt/output capture, retention, access control) and to apply Human-in-the-Loop review before anything is issued externally.

A public consumer chatbot used without organisational controls is typically an uncontrolled third-party service: it can create data leakage and makes it hard to retain reliable records for assurance. By contrast, contracted enterprise services, self-hosted deployments, and workflows that include redaction plus an AI Assistance Plan and explicit human approvals are all compatible with the stated constraints.
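In code terms, the auditability requirement amounts to capturing the prompt, output, tool version, and approval status at the point of use. A minimal sketch follows, where generate is a stand-in for whatever approved model the organisation contracts, not a real vendor API.

import json
from datetime import datetime, timezone

def generate(prompt: str) -> str:
    """Stand-in for the organisation's approved model; not a real API."""
    return f"[draft based on: {prompt[:40]}]"

def generate_with_audit(prompt: str, user: str, tool_version: str,
                        audit_log: list[dict]) -> str:
    """Call the model and retain the evidence assurance will later ask for."""
    output = generate(prompt)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool_version": tool_version,  # which model/version produced the draft
        "prompt": prompt,
        "output": output,
        "approved_by": None,           # completed at HITL sign-off, before release
    })
    return output

log: list[dict] = []
draft = generate_with_audit("Summarise supplier evaluation notes",
                            user="analyst-1", tool_version="enterprise-llm-2.1",
                            audit_log=log)
print(json.dumps(log[0], indent=2))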

Option C breaches the constraint against sending sensitive data to uncontrolled third parties and provides weak auditability for assurance.


Question 5

Topic: Lifecycle Governance (Initiate to Operate)

A project used a generative AI assistant to draft meeting minutes and first-pass requirements. In the Evaluation stage, the team found:

  • 18% of requirements needed rework due to inconsistent AI outputs
  • Two near-misses where sensitive text was pasted into the tool
  • HITL review was applied inconsistently across workstreams

The issues have been logged and validated. To drive improvement for future projects, what is the BEST next step?

  • A. Move directly to Activation and operationalise an expanded tool rollout
  • B. Update the AI Assistance Plan template, prompts, training, and control checklist
  • C. Decommission the AI tool and close without further follow-up
  • D. Approve wider AI use now to recoup the rework cost

Best answer: B

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In AIPGF Stage 3 (Evaluation), the goal is to turn evidence from outcomes and incidents into concrete improvement actions. Here, the findings point to better prompts, better training, and stronger, more consistent HITL and data-handling controls. The best next step is to update the reusable governance assets so the next project’s Foundation stage starts with improved guidance and guardrails.

Stage 3 (Evaluation) is where you capture what happened, why it happened, and what to change so future AI-assisted work is safer and more reliable. Because the issues are already logged and validated, the next step is to convert them into specific, reusable improvements that will be adopted at the start of the next lifecycle.

Typical Evaluation-to-Foundation improvements include:

  • Updating the AI Assistance Plan template (scope, decision rights, HITL checkpoints)
  • Publishing a prompt playbook/prompt library and quality criteria for outputs
  • Refreshing training for safe data handling and verification expectations
  • Strengthening control checklists (reviews, approvals, evidence/traceability)

Expanding rollout or closing without embedding these improvements either skips prerequisites or fails to realise organisational learning.
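A prompt playbook entry is one concrete shape such a reusable asset can take. The sketch below is illustrative only; the fields and wording are assumptions, not an AIPGF artefact definition.

# Illustrative prompt-library entry; fields are assumptions, not an AIPGF artefact.
REQUIREMENTS_DRAFT_PROMPT = {
    "id": "req-draft-v2",
    "prompt": ("Draft one requirement from the meeting notes below. "
               "Cite the source line for every claim and write UNKNOWN "
               "rather than inventing details."),
    "quality_criteria": [
        "Every statement traces to a source note",
        "No invented stakeholders, systems, or dates",
        "Includes a testable acceptance criterion",
    ],
    "hitl": "BA verifies against the notes before the item enters the backlog",
}
print(REQUIREMENTS_DRAFT_PROMPT["quality_criteria"])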

Evaluation learning should be converted into reusable governance assets (prompt guidance, training, and controls) for future Foundation planning.


Question 6

Topic: Lifecycle Governance (Initiate to Operate)

A programme team uses a generative AI assistant to draft a business case and stakeholder updates. During assurance, no one can show which tool version was used, prompts are not recorded, and there is no evidence of HITL review before content was sent externally. The sponsor pauses approvals due to stakeholder distrust, and the expected time-saving benefits are not being realised.

Which is the most likely underlying cause?

Exhibit: Stage 1 records found

AI Assistance Plan: Not created
Clarification questions log: None
Assumptions log: None
Decision log (AI use): None

  • A. The AI outputs were low quality due to hallucinations
  • B. Too many stakeholders are requesting rework and extra reviews
  • C. The sponsor is resistant to adopting AI-assisted ways of working
  • D. Stage 1 deliverables for AI use were not produced and baselined

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: The symptoms point to a governance setup failure in Stage 1 (Foundation), not a one-off execution problem. If the AI Assistance Plan and supporting clarification/assumptions/decision logs are missing, there is no agreed basis for approved tools, prompt/usage recording, or HITL approval evidence—driving distrust, stalled sign-off, and benefit shortfalls.

In AIPGF Stage 1 (Foundation), AI-assisted work needs explicit, agreed governance before AI is used to create project deliverables. An AI Assistance Plan and supporting logs (clarification questions, assumptions, and key decisions about AI use) provide the baseline for: which tools may be used, what data may be entered, what must be recorded (prompts, versions, outputs), and where HITL review/sign-off is required. In the scenario, every breakdown (unapproved tool use, undocumented prompts, missing HITL evidence, and eroding stakeholder trust) is consistent with those Stage 1 artefacts not being created and approved, leaving the team to improvise controls and making assurance impossible. The root cause is missing Foundation deliverables, not simply poor AI output quality or stakeholder behaviour.
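The gate itself can be automated as a simple completeness check over the Stage 1 records named in the exhibit. A minimal sketch; the check logic is an illustrative assumption, not an AIPGF tool.

# Artefact names follow the Question 6 exhibit; the check itself is illustrative.
REQUIRED_STAGE1_RECORDS = [
    "AI Assistance Plan",
    "Clarification questions log",
    "Assumptions log",
    "Decision log (AI use)",
]

def foundation_gate_gaps(records_found: dict[str, bool]) -> list[str]:
    """Return missing artefacts; an empty list means the gate evidence exists."""
    return [name for name in REQUIRED_STAGE1_RECORDS if not records_found.get(name)]

# The exhibit: none of the Stage 1 records were created
print(foundation_gate_gaps({name: False for name in REQUIRED_STAGE1_RECORDS}))
# -> all four artefacts are reported missing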

Without an AI Assistance Plan and supporting logs, tool choice, HITL sign-off, and traceability controls remain ad hoc and un-auditable.


Question 7

Topic: Lifecycle Governance (Initiate to Operate)

You are initiating an AI-assisted reporting capability for a programme office. The AI will draft weekly status updates and highlight emerging risks by summarising:

  • last year’s lessons learned (includes supplier IP clauses)
  • the current risk register (contains names and contact details)
  • internal financial forecasts (commercially sensitive)

A pilot is due in 2 weeks. There is no approved data-processing agreement for external AI tools. An internal AI environment is available but needs a curated knowledge base.

Which approach best balances effective AI use with required governance over data availability, quality, and rights during the Foundation stage?

  • A. Classify and inventory sources, confirm usage rights, and create a redacted/curated corpus for the internal AI; restrict the pilot to approved non-sensitive content with HITL review
  • B. Use only synthetic or fabricated programme data for the pilot to avoid confidentiality issues, then swap in real data later
  • C. Use an external generative AI tool with full document uploads to maximise output quality, then add a disclaimer that results are drafts
  • D. Proceed using only the current risk register as-is to meet the pilot date, and address privacy/IP and data quality improvements after the pilot

Best answer: A

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In Foundation, governance must confirm what data is available, fit for purpose, and lawful/contractually permitted for the intended AI use. The best-balanced approach classifies and inventories data, validates rights and sensitivity constraints, and then curates/redacts inputs so the pilot can run safely in an approved internal environment with Human-in-the-Loop review.

This scenario is primarily a data rights and suitability problem, not a model-quality problem. In the Foundation stage you should establish whether each intended source can be used for the stated AI purpose (rights, confidentiality, privacy), and whether the data quality is sufficient (completeness, consistency, timeliness) before enabling access.

A proportionate, effective approach is to:

  • inventory/classify the sources and confirm usage rights (including supplier IP and personal data)
  • define allowed vs prohibited content for the pilot
  • curate the knowledge base (redaction/minimisation and quality checks) for the internal AI
  • apply HITL approval of AI-drafted outputs before distribution

This achieves a usable pilot within constraints while maintaining confidentiality and auditability; simply moving faster by uploading sensitive data externally undermines required governance controls.
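Minimisation can begin with crude pattern-based redaction before human curation. The sketch below is deliberately naive: the phone, email, and case-ID patterns are simplified assumptions, and names like "J. Smith" survive, which is exactly why named-entity checks and human review are still needed.

import re

# Deliberately naive patterns; a real pipeline needs NER and human review.
PATTERNS = [
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bCASE-\d+\b"), "[CASE-ID]"),  # hypothetical case-ID format
]

def redact(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact J. Smith on +44 7700 900123 re CASE-10442, j.smith@example.org"))
# -> Contact J. Smith on [PHONE] re [CASE-ID], [EMAIL]
#    (the personal name is untouched: pattern matching alone is not enough)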

It verifies data rights and sensitivity, improves input quality via curation/redaction, and enables timely value using an internal environment with appropriate controls.


Question 8

Topic: Lifecycle Governance (Initiate to Operate)

During Activation, a project team uses an AI assistant to draft the monthly portfolio status report. The report has already been sent to the steering committee, who use it to approve funding releases. A reviewer then finds two problems: the AI included a table with supplier rate-card figures marked “confidential,” and it misreported a critical milestone as complete.

Which action is MOST appropriate to take next under AIPGF?

  • A. Refine the prompt and resend the corrected report quickly
  • B. Add an “AI-generated” disclaimer and keep using the assistant
  • C. Wait for the next reporting cycle and monitor for recurrence
  • D. Withdraw the report, suspend AI use, log and escalate incident

Best answer: D

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: This is an AI-related incident involving sensitive information exposure and a high-impact decision artefact, so the priority is containment and formal escalation. Withdrawing the report and suspending the AI assistant reduces further harm while the team records evidence, coordinates communication, and produces a corrected, human-approved version. This aligns with AIPGF’s need for clear accountability, auditability, and HITL control over decision-support outputs.

In AIPGF Activation, when an AI-assisted deliverable causes harm (e.g., confidentiality breach and incorrect status used for funding decisions), treat it as an incident, not just a quality defect. The first move is containment: stop further distribution and prevent repeat generation (pause the tool/access or switch to a manual path). Next, ensure auditability by recording what happened (inputs/outputs, timestamps, affected stakeholders), then escalate to the accountable governance roles (e.g., project manager/sponsor plus InfoSec or commercial confidentiality owner) so communication and corrective actions are authorised. Finally, correct and reissue the report under HITL and appropriate approvals, and update relevant controls (AI Assistance Plan, risk/issue logs) before resuming use. The key takeaway is “contain, escalate, correct, then restore” with evidence and decision rights preserved.
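Because the ordering matters, it can help to see "contain, escalate, correct, restore" laid out as explicit steps. In the sketch below every action only records a trail entry; the real distribution, access-control, and notification systems are hypothetical stand-ins outside the example.

# Each step only records a trail entry here; the real distribution, access,
# and notification systems are hypothetical stand-ins.
def handle_ai_incident(report_id: str, trail: list[str]) -> None:
    # 1. Contain: stop further distribution and repeat generation
    trail.append(f"withdrew {report_id} from circulation")
    trail.append("suspended AI assistant access pending review")
    # 2. Evidence: preserve auditability
    trail.append("captured prompts, outputs, timestamps, and recipients")
    # 3. Escalate: accountable roles authorise communication and correction
    trail.append("escalated to sponsor and information-security owner")
    # 4. Correct and restore: reissue under HITL, update controls, then resume
    trail.append("reissued corrected report with named human approval")
    trail.append("updated AI Assistance Plan and risk/issue logs before resuming use")

trail: list[str] = []
handle_ai_incident("portfolio-status-report", trail)
print("\n".join(trail))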

This contains the breach, preserves auditability, triggers escalation, and enables rollback/correction with accountable approval.


Question 9

Topic: Lifecycle Governance (Initiate to Operate)

A programme has used a generative AI assistant throughout delivery to draft status reports, summarise workshops, and propose risk responses. The final deliverables have been accepted, and the sponsor asks the programme manager to “close the programme this week.”

You are starting AIPGF Stage 3 (Evaluation) activities and realise the evaluation approach was not clearly documented during earlier stages.

Which clarifying question should you ask first to ensure Stage 3 (Evaluation) is properly completed and aligned to project closure?

  • A. Which AI model provider will be used on the next programme?
  • B. What success criteria and evidence are required for final sign-off, including how AI-assisted outputs will be evaluated and lessons captured?
  • C. Can the AI assistant be given full access to the organisation’s archives to improve future prompts?
  • D. Can the team skip the retrospective to meet the sponsor’s closure date?

Best answer: B

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: AIPGF Stage 3 (Evaluation) is about confirming whether the initiative achieved its intended outcomes, that AI-assisted deliverables were acceptable against agreed criteria, and capturing lessons for future use. In closure terms, it maps to acceptance/sign-off evidence, benefits/controls confirmation, and retrospective/knowledge capture before handing over or shutting down AI assistance.

Stage 3 (Evaluation) provides assurance at the end of the AI-assisted work: did the project/programme achieve the intended outcomes, were AI-assisted outputs fit for purpose, and what should be learned and carried forward. This aligns directly to closure activities such as confirming acceptance criteria, compiling closure evidence (including any AI-specific assurance), updating benefits tracking/risks for post-closure ownership, and capturing lessons learned.

If the evaluation approach was not clearly set earlier, the first step is to clarify the success criteria and what evidence is required for sign-off, explicitly covering AI-assisted outputs and how learning will be recorded. Without that, “closing this week” risks closing without demonstrating outcomes or capturing learning, undermining transparency and future governance.

Questions about future tools, broader data access, or skipping learning activities do not establish whether closure is justified.

Stage 3 (Evaluation) confirms outcomes against agreed criteria and captures learning before formal closure and handover.


Question 10

Topic: Lifecycle Governance (Initiate to Operate)

A programme used an AI assistant to recommend which change requests (CRs) to prioritise each month. At Stage 3 (Evaluation), the closure report notes concerns that CRs from smaller regions were consistently ranked lower.

Exhibit: Evaluation evidence (excerpt)

Inputs used by AI: CR text, cost estimate, benefit estimate, requester region
Output: Priority score 1–100 with short rationale
Controls used: Project manager approval of monthly priority list
Issue raised: “Region appears to influence ranking; rationale is generic”

Which action best balances ethical, efficient, and effective AI use with required governance controls?

  • A. Rely on vendor assurance; keep the process unchanged
  • B. Stop using AI for prioritisation to remove all bias risk
  • C. Run a bias/traceability review, document decisions, and update HITL
  • D. Remove region from inputs and fully automate ranking next cycle

Best answer: C

What this tests: Lifecycle Governance (Initiate to Operate)

Explanation: In Evaluation, the right optimisation is to keep the benefits of AI assistance while closing assurance gaps. A targeted bias and traceability review tests whether the decision process is fair and improves transparency by strengthening the audit trail and explanations. Updating the AI Assistance Plan and HITL rules makes accountability explicit without unnecessarily discarding the tool.

In AIPGF Evaluation, you confirm whether AI-assisted decisions were trustworthy and learn what governance needs tightening before reuse or scale-out. Here, the evidence suggests a potential fairness issue (region influencing outcomes) and weak transparency (generic rationales), so the evaluation should add assurance rather than simply accept results.

A balanced response is to:

  • Review a sample of AI recommendations versus final decisions and outcomes
  • Test for disparate impact across regions and check whether region is justified
  • Improve traceability (decision log, input provenance, prompt/version records)
  • Update the AI Assistance Plan with clearer HITL criteria and explanation needs

This preserves efficiency while strengthening Human-Centricity, Transparency, and Accountability.
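A first-pass fairness check can be as simple as comparing score distributions by region. A minimal sketch follows, with hypothetical scored change requests; a real review would also test statistical significance and control for genuine cost/benefit differences before concluding anything.

from collections import defaultdict
from statistics import mean

# Hypothetical sample of AI-scored change requests: (requester region, score 1-100)
scored_crs = [
    ("North", 81), ("North", 74), ("North", 78),
    ("South", 52), ("South", 49), ("South", 58),
]

by_region: dict[str, list[int]] = defaultdict(list)
for region, score in scored_crs:
    by_region[region].append(score)

means = {region: mean(scores) for region, scores in by_region.items()}
gap = max(means.values()) - min(means.values())
print(means)               # North averages ~77.7 vs South 53
print(f"gap = {gap:.1f}")  # a large gap flags the region effect for human review;
                           # it is not proof of bias until cost/benefit differences are ruled out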

It adds proportionate fairness testing and auditability (decision trace) while keeping human approval and improving transparency for future use.

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026