AIPGF Foundation: Principles for Responsible and Trustworthy AI

Try 10 focused AIPGF Foundation questions on Principles for Responsible and Trustworthy AI, with answers and explanations, then continue with PM Mastery.

Topic snapshot

Exam route: AIPGF Foundation
Topic area: Principles for Responsible and Trustworthy AI
Blueprint weight: 13%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Principles for Responsible and Trustworthy AI for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Principles for Responsible and Trustworthy AI

You are the PM for a low-risk internal programme using an approved internal GenAI assistant to speed up reporting. Assurance asks you to “apply the full AI governance pack” used on a recent high-impact AI product.

Exhibit: AI Assistance Plan (excerpt)

Use: Draft weekly status report + action log
Data: Internal project data; no personal data; confidential
Tool: Approved internal GenAI; no external retention/training
Decision impact: Low (errors could misstate dates/actions)
HITL: PM reviews/edits before sending externally
Proposed controls: data classification check; spot-check vs sources
Assurance request: independent model validation + monthly audit

What is the best next action to balance governance rigour with delivery efficiency, based on the exhibit?

  • A. Tailor controls to the stated low-impact use and document the rationale
  • B. Pause AI use until the enterprise AI policy is updated
  • C. Remove the proposed controls because the internal tool is approved
  • D. Adopt the full AI governance pack before any further use

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Under the Adaptability principle, governance should be proportional to the AI use case, data sensitivity, and impact of errors. The exhibit indicates low decision impact, no personal data, and an approved internal tool with HITL review, so a lightweight, documented control set is appropriate. The key is agreeing and recording the tailoring decision rather than defaulting to the heaviest available process.

Adaptability (tailoring) means selecting governance controls that are “fit for purpose” for the specific AI-assisted activity, so assurance effort does not block value delivery. In the exhibit, the AI is used to draft routine communications, uses confidential but non-personal internal data, and has clear HITL review before external sharing; the impact of errors is low.

A proportionate next step is to:

  • confirm the risk level and intended scope of AI assistance
  • keep lightweight controls that address the main risks (confidentiality and accuracy), such as data classification checks and source spot-checking
  • document the tailoring decision (what is in/out, who approves, and why the heavier controls are not warranted)

This preserves auditability and trust while avoiding heavyweight validation intended for high-impact AI products.
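
As a purely illustrative sketch (not an AIPGF artifact), the tailoring decision can be recorded as structured data so the rationale stays auditable. Every field name below is a hypothetical assumption:

# Hypothetical record of a tailoring decision; the schema is invented for
# illustration, not defined by AIPGF.
tailoring_decision = {
    "use_case": "Draft weekly status report + action log",
    "decision_impact": "low",
    "data": "internal, confidential, no personal data",
    "controls_kept": [
        "data classification check",
        "spot-check vs sources",
        "HITL review before external sharing",
    ],
    "controls_declined": {
        "independent model validation": "disproportionate to low decision impact",
        "monthly audit": "spot-checks cover this risk level",
    },
    "approver": "assurance lead",
}

Recording what was declined, and why, is what makes the lighter control set defensible at later review.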

The exhibit supports a proportionate, documented control set (e.g., HITL review and source spot-checking) rather than heavyweight assurance designed for high-impact AI.


Question 2

Topic: Principles for Responsible and Trustworthy AI

A PMO proposes using a generative AI assistant to draft project status reports and update the RAID log from meeting notes. The tool may be changed during delivery as new features become available.

Stakeholders require assurance that (1) humans remain accountable for decisions, (2) AI use is visible and explainable to recipients, and (3) the approach can be adjusted safely as performance and risks change.

Which artifact/evidence would best validate readiness against these expectations?

  • A. AIPG-CMM assessment score for the organisation’s AI maturity
  • B. Decision log recording the choice of AI tool and vendor
  • C. Approved AI Assistance Plan with HITL, disclosure, and change triggers
  • D. Benefits Tracker showing hours saved from AI-drafted reports

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: An AI Assistance Plan is the most direct, auditable evidence that the proposed usage is governed in line with Human-Centricity, Transparency, and Adaptability. It can specify HITL approval points and accountability, what disclosures/explanations accompany AI-assisted outputs, and how the team will monitor outcomes and manage changes to prompts, data, or tooling.

To evaluate an AI usage approach with the three principles together, you need evidence that the project has (1) defined human decision rights and review checkpoints (Human-Centricity), (2) made AI involvement and limitations visible to intended audiences (Transparency), and (3) built in monitoring and controlled change so the approach can evolve safely as the tool, data, or context changes (Adaptability).

An approved AI Assistance Plan is purpose-built to capture these controls in one place, typically including:

  • HITL steps and accountable approvers for AI-assisted outputs
  • Disclosure/labeling guidance and traceability of AI contributions
  • Monitoring, incident handling, and triggers for revising usage (e.g., quality drops, new risks, tool updates)

Other artifacts can support assurance, but they do not, by themselves, validate all three principles in this scenario.
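
To make that concrete, here is a minimal sketch of how such a plan could be represented and checked automatically. The keys and the check are illustrative assumptions, not an official plan schema:

# Illustrative only: the plan as data, with a check that each of the three
# principles is backed by at least one concrete control.
ai_assistance_plan = {
    "hitl": {"approver": "Programme Manager", "checkpoint": "before publication"},
    "disclosure": {"label_ai_outputs": True, "state_limitations": True},
    "adaptability": {
        "monitoring": "weekly quality sample",
        "change_triggers": ["quality drop", "new risk", "tool update"],
    },
}

def evidences_three_principles(plan):
    """True when Human-Centricity, Transparency, and Adaptability each
    have at least one recorded control."""
    return all(plan.get(key) for key in ("hitl", "disclosure", "adaptability"))

assert evidences_three_principles(ai_assistance_plan)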

It provides documented controls that jointly evidence human accountability, transparency to stakeholders, and adaptation/monitoring arrangements.


Question 3

Topic: Principles for Responsible and Trustworthy AI

A programme team is piloting an AI assistant to draft change requests and recommend approval decisions for a new citizen-facing service. Some changes could materially affect personal data processing and programme funding.

The AI Assistance Plan states: “High-impact approvals require a named accountable human approver (HITL). AI may recommend, but must not be the final decision-maker.”

Which action is INCORRECT under this human-centricity requirement?

  • A. Log the human decision rationale alongside the AI output
  • B. Allow auto-approval for high-impact changes when confidence is high
  • C. Route AI recommendations to the accountable approver for sign-off
  • D. Require HITL review for changes affecting privacy or funding

Best answer: B

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Human-centricity in AIPGF means accountable humans retain decision rights for high-impact approvals, even when AI provides recommendations. In this scenario, privacy- and funding-affecting changes are explicitly high impact, so automation must stop short of final approval and keep a human in the loop with an auditable record.

This scenario tests when Human-Centricity overrides automation: when an outcome is high impact, the accountable human must make (and be able to justify) the decision. The AI Assistance Plan explicitly requires HITL for high-impact approvals, so AI can support speed and consistency by drafting and recommending, but it cannot become the final approver.

Practical governance controls that align with this include:

  • Workflow gating so approvals are routed to a named human approver
  • Mandatory HITL review triggers for privacy, safety, or material financial impacts
  • Decision logging that captures the AI output used and the human rationale

The key takeaway is that “high confidence” or similar AI scoring does not transfer accountability or decision rights away from humans for high-impact approvals.
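
A minimal sketch of that gating rule, assuming a hypothetical workflow function and invented thresholds:

def route_approval(impact, ai_confidence):
    """Illustrative workflow gating: AI may recommend, but high-impact
    changes always route to the named accountable approver, regardless
    of any confidence score."""
    if impact == "high":  # e.g., privacy- or funding-affecting changes
        return "accountable_human_approver"
    if ai_confidence >= 0.9:  # invented threshold for low-impact items
        return "light_touch_human_review"
    return "standard_human_review"

# High confidence never buys auto-approval for high-impact changes.
assert route_approval("high", ai_confidence=0.99) == "accountable_human_approver"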

High-impact approvals must remain with an accountable human; confidence scores do not replace decision rights.


Question 4

Topic: Principles for Responsible and Trustworthy AI

During Foundation, a project team wants to use a generative AI tool to draft monthly client progress reports.

Constraints:

  • Reports contain Confidential client data.
  • Policy allows Confidential data only in an approved internal AI tool.
  • A named Human-in-the-Loop (HITL) must review AI text before release.
  • The project cannot pass the Activation gate without an agreed AI governance approach and accountable approver.

Stakeholder trust is low due to a recent AI-related incident. What is the BEST next action?

  • A. Pause all AI use until an enterprise-wide AI governance framework is redesigned and rolled out
  • B. Allow the approved internal tool, but treat AI drafts as normal documents with no named HITL to reduce delays
  • C. Proceed with the fastest public LLM for one reporting cycle to rebuild stakeholder confidence with a quick win
  • D. Update the AI Assistance Plan with a risk-based tailoring decision, define HITL review and evidence, restrict use to the approved tool, and obtain data-owner approval for the Activation gate

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Adaptability means tailoring governance proportionately to context (risk, data sensitivity, stakeholder trust, and intended use). Here, Confidential data and low trust require scaling governance up in Foundation so the Activation gate has clear decision rights, HITL accountability, and evidence of controls. The best next action is to document and approve that tailored approach before using AI for client-facing outputs.

The AIPGF principle of Adaptability requires governance to scale with the project’s context rather than applying a one-size-fits-all approach. In Foundation, that means deciding (and documenting) what controls, roles, and evidence are needed to use AI safely and credibly.

With Confidential client data and low stakeholder trust, governance should be scaled up before Activation by:

  • documenting intended AI use and boundaries in an AI Assistance Plan
  • assigning an accountable approver (e.g., data owner) and a named HITL reviewer
  • selecting only approved tools for the data classification and defining required evidence (logging, review records)

A fast pilot that violates tool/data rules or removes HITL undermines assurance; a blanket pause until enterprise redesign is disproportionate to the immediate project decision.
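
One hypothetical way to express the Activation-gate check implied above is a blocker scan over the plan; the keys and rules are assumptions made for illustration:

def activation_gate_blockers(plan):
    """Illustrative readiness check: returns reasons the gate cannot pass."""
    blockers = []
    if not plan.get("accountable_approver"):
        blockers.append("no accountable approver (e.g., data owner) named")
    if not plan.get("hitl_reviewer"):
        blockers.append("no named HITL reviewer")
    if plan.get("data_classification") == "Confidential" and not plan.get("approved_internal_tool"):
        blockers.append("Confidential data requires the approved internal tool")
    return blockers

plan = {"data_classification": "Confidential", "approved_internal_tool": True,
        "accountable_approver": "data owner", "hitl_reviewer": "named reviewer"}
assert activation_gate_blockers(plan) == []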

This applies Adaptability by scaling governance to high-sensitivity data and low trust, and produces the agreed controls and approvals needed to pass the Activation gate.


Question 5

Topic: Principles for Responsible and Trustworthy AI

Midway through a programme, the team expands AI use from meeting-note summarisation to drafting sections of the business case and procurement documents. The programme manager updates the AI Assistance Plan, re-assesses risks and decision rights for the new tasks, and adjusts HITL review/approval steps before the outputs are used.

Which AIPGF principle is primarily being applied?

  • A. Adaptability
  • B. Activation stage intent
  • C. Human-Centricity
  • D. Transparency

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: This situation is about changing AI usage during delivery and then proportionally updating governance so the new AI-assisted outputs remain controlled and assured. Updating the AI Assistance Plan, revisiting risks/decision rights, and tightening HITL approvals are examples of tailoring governance to the new AI scope. That is the essence of the Adaptability principle in AIPGF.

The Adaptability principle requires governance to be tailored and re-tailored as AI usage evolves (new tools, new tasks, expanded scope, or higher-impact deliverables). In the scenario, AI moves into business-case and procurement content, increasing the consequences of errors, bias, confidentiality leakage, and inappropriate reliance. Good AIPGF practice is therefore to adjust governance, not to “freeze” the original controls.

Typical Adaptability actions include:

  • Update the AI Assistance Plan to reflect the new use cases and constraints
  • Reassess risks and add/adjust controls proportionate to impact
  • Reconfirm accountable approvers and strengthen HITL review where needed
  • Record decisions/changes so assurance and auditability remain intact

The key point is proportional re-governance triggered by a change in AI usage, rather than focusing mainly on disclosure or user experience design.
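
As an invented illustration, the re-governance trigger can be expressed as a simple scope comparison:

def needs_regovernance(current_uses, approved_uses):
    """Illustrative trigger: flag the AI Assistance Plan for re-review when
    actual AI use expands beyond the last approved scope."""
    return not set(current_uses) <= set(approved_uses)

approved = {"meeting-note summarisation"}
current = approved | {"business-case drafting", "procurement drafting"}
assert needs_regovernance(current, approved)  # expansion -> update plan, reassess risks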

It tailors governance controls when AI use changes in scope, tools, or tasks during delivery.


Question 6

Topic: Principles for Responsible and Trustworthy AI

A programme team is preparing to use a generative AI assistant to draft weekly status reports and risk updates. You review the following excerpt from the AI Assistance Plan.

AI Assistance Plan (excerpt)
Use cases: Draft status report; propose RAID updates
Inputs: Workstream emails, meeting notes
HITL review: "Optional when confidence score is high"
Approval: "PMO analyst publishes to Steering Pack"
Decision log: "Auto-captured from AI outputs"
Quality check: "Spot-check monthly"

Which interpretation or next action best addresses a sign of overreliance on AI while applying the Human-Centricity principle?

  • A. Increase the AI confidence-score threshold to reduce errors
  • B. Restrict the AI to summarising only, and stop proposing RAID updates
  • C. Make HITL review mandatory with defined approvers before publication
  • D. Keep the process but add a disclaimer that content is AI-generated

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The exhibit shows multiple overreliance indicators: optional human review, AI outputs feeding the decision log, and publishing without accountable human approval. Under Human-Centricity, AI may assist, but humans must retain decision-making and be clearly accountable. Making HITL review mandatory with defined approvers restores human judgment at the point where decisions and communications are finalised.

Overreliance is evident when AI outputs become the default “source of truth” and humans stop actively evaluating, challenging, and owning decisions. In the excerpt, review is optional, the decision log is auto-derived from AI text, and a PMO analyst can publish to the steering pack—together this weakens accountable human judgment.

A Human-Centricity control is to harden HITL and decision rights at the point of commitment and communication:

  • Require human review for each AI-assisted status/RAID update
  • Specify accountable approver(s) (e.g., PM/workstream leads)
  • Ensure the decision log is confirmed/edited by a human, not auto-final

Confidence scores and disclaimers can be supplementary, but they do not replace active human accountability.
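
A minimal sketch of hardening that publication step, assuming hypothetical field names and workflow:

class MissingSignoffError(Exception):
    """Raised when an AI-assisted draft lacks accountable human sign-off."""

def publish_to_steering_pack(draft):
    """Illustrative gate: publication is refused unless a named approver has
    reviewed and signed off; a confidence score alone is never sufficient."""
    if not draft.get("approved_by"):
        raise MissingSignoffError("mandatory HITL review before publication")
    # ...publish the approved pack...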

Human judgment is being bypassed by optional review and delegated publication, so enforce clear human review and sign-off decision rights.


Question 7

Topic: Principles for Responsible and Trustworthy AI

In AIPGF, which principle most directly supports resolving a conflict between delivery speed and governance by tailoring scope, controls, and approval steps to the project’s AI risk and context?

  • A. Adaptability
  • B. Transparency
  • C. Human-Centricity
  • D. Human-in-the-Loop (HITL)

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Adaptability is about adjusting governance to fit the AI use case, maturity, and risk so that assurance remains effective without creating unnecessary delay. It is the principle that legitimises changing scope, controls, and approval intensity to resolve speed-versus-governance trade-offs in AI-assisted work.

The AIPGF principle of Adaptability focuses on applying governance and assurance proportionately to the specific AI context (use case, uncertainty, data sensitivity, stakeholder impact, and organisational maturity). When delivery speed conflicts with governance, Adaptability supports changing what is governed and how—such as narrowing AI use to lower-risk tasks, selecting lighter-weight controls for low-risk outputs, or escalating approvals only for higher-impact AI-assisted deliverables. This preserves trustworthy outcomes while avoiding a “one-size-fits-all” process that can either over-control (slowing delivery) or under-control (increasing risk). The key takeaway is that Adaptability enables risk-based tailoring of controls and decision rights, whereas other concepts focus on different governance needs.
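
As an invented illustration of that proportionality, control intensity can scale with an assessed risk tier:

def select_controls(risk_tier):
    """Hypothetical proportionate control sets; the tiers and controls are
    examples, not an AIPGF-prescribed mapping."""
    controls = ["data classification check", "HITL review"]
    if risk_tier in ("medium", "high"):
        controls.append("named approver sign-off")
    if risk_tier == "high":
        controls += ["independent validation", "periodic audit"]
    return controls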

Adaptability is the AIPGF principle that promotes proportionate, context-sensitive governance and assurance to balance speed with risk.


Question 8

Topic: Principles for Responsible and Trustworthy AI

During Activation, a programme office starts using a generative AI assistant to draft weekly status updates and risk narratives for an executive steering committee. To save time, the Programme Manager configures the workflow so AI-generated text is copied into the pack without a HITL review or sign-off, and no corrections are logged.

What is the most likely near-term impact of this omission?

  • A. The programme will be unable to retrain the AI model, leading to performance decline over subsequent releases
  • B. Benefits realisation will improve because removing review shortens cycle time and increases adoption maturity
  • C. The organisation will automatically achieve stronger auditability because AI outputs are time-stamped by default
  • D. A misleading or biased statement reaches executives, causing immediate loss of confidence and urgent rework

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Human-Centricity requires human oversight of AI-assisted deliverables to catch errors, bias, or unsafe recommendations before they affect decisions. Removing HITL review makes it likely that flawed content is shared with senior stakeholders. The earliest consequence is reduced trust and rapid rework to correct and reassure decision-makers.

This scenario tests Human-Centricity: AI-assisted outputs that influence decisions should have an appropriate Human-in-the-Loop (HITL) control to detect and correct hallucinations, biased wording, or unsafe recommendations before release. In the near term, an executive pack containing unreviewed AI-generated risk narratives can quickly propagate inaccurate or insensitive statements, triggering immediate stakeholder concern and loss of confidence in the programme office’s assurance.

Practical HITL controls include:

  • Assign a human approver for AI-assisted content before circulation
  • Use a checklist for factual accuracy, tone, bias, and safety
  • Record material edits/decisions (evidence of due diligence)

Speed gains from skipping review do not compensate for the immediate trust impact when an error is noticed, especially in governance communications.
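
A sketch of capturing that review evidence, using an invented checklist and record format:

REVIEW_CHECKLIST = ("factual accuracy", "tone", "bias", "safety")

def record_review(content_id, reviewer, results):
    """Illustrative evidence record: an incomplete checklist blocks circulation."""
    missing = [item for item in REVIEW_CHECKLIST if item not in results]
    if missing:
        raise ValueError("review incomplete: " + ", ".join(missing))
    return {"content_id": content_id, "reviewer": reviewer, "results": dict(results)}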

Without HITL review, AI errors can be published directly, quickly undermining trust and forcing rapid correction.


Question 9

Topic: Principles for Responsible and Trustworthy AI

A project team is using a generative AI tool to propose risk responses and change-impact recommendations for a hospital systems rollout. The AI Assistance Plan requires HITL review because unsafe or biased recommendations could affect patient care.

Which action should the project manager NOT take when setting up human review to detect and correct AI errors or unsafe recommendations?

  • A. Record human corrections and update the AI use guidance accordingly
  • B. Require a domain SME to verify outputs against source evidence
  • C. Use a second reviewer for high-impact recommendations before adoption
  • D. Approve AI recommendations using only the tool’s confidence indicators

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Human-Centricity in AIPGF means AI-assisted outputs remain accountable to humans, especially where recommendations could be unsafe. HITL review must validate the AI’s reasoning and content against trusted sources and context, not proxy signals like confidence. Strong governance also captures learnings so recurring errors are reduced over time.

This scenario is about applying the Human-Centricity principle through effective HITL controls. When AI generates risk responses or change-impact recommendations, humans must detect hallucinations, bias, and unsafe suggestions by reviewing the content against evidence and operational/safety criteria. Tool-provided confidence or similar indicators are not a substitute for accountable human judgement.

Practical HITL setup typically includes:

  • Assigning qualified reviewers with decision rights for AI-assisted recommendations
  • Verifying outputs against source materials and project/safety constraints
  • Using stronger review (e.g., second review) for higher-impact items
  • Logging corrections to improve prompts, guidance, and assurance evidence

The key takeaway is that governance relies on accountable human approval and traceable evidence, not automated self-assurance signals from the AI.
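
One hypothetical way to encode that review intensity:

def required_reviewers(impact):
    """Illustrative rule: review strength scales with impact; tool confidence
    indicators never reduce the human review requirement."""
    reviewers = ["domain SME (verify against source evidence)"]
    if impact == "high":
        reviewers.append("second reviewer")
    return reviewers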

Confidence cues are not assurance; a qualified human must review content against evidence and safety criteria before approval.


Question 10

Topic: Principles for Responsible and Trustworthy AI

A project team is piloting a generative AI assistant to draft risk register entries and stakeholder updates. In the past two weeks, the AI has produced several incorrect risk ratings and occasionally biased phrasing, yet drafts were sent to stakeholders without any recorded human sign-off. When challenged, the team cannot reproduce how outputs were created because prompts and versions were not captured, and some staff used personal (unapproved) AI accounts to “save time.” The sponsor pauses rollout due to loss of trust and limited realised benefit.

What is the most likely underlying cause of this governance breakdown?

  • A. Prompts and outputs were not documented and auditable
  • B. AI non-determinism makes errors unavoidable in project deliverables
  • C. AI Assistance Plan omitted HITL review and approval requirements
  • D. Stakeholders are resistant to adopting AI-generated deliverables

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The pattern of unsafe outputs being issued, missing sign-off, and weak traceability indicates a failure to design and enforce Human-Centricity through Human-in-the-Loop controls. An AI Assistance Plan should define where human review is mandatory, who approves, and what evidence (e.g., prompt/output records) is retained. With those controls, reviewers can detect and correct errors, bias, or unsafe recommendations before release.

Under the Human-Centricity principle, AI-assisted project outputs should remain under meaningful human control, with clear accountability for what is issued externally. The symptoms (unchecked errors/bias reaching stakeholders, missing sign-off, inability to reproduce outputs, and workarounds with unapproved accounts) most strongly point to an upstream governance design gap: HITL was not explicitly required and operationalised.

A proportionate HITL setup typically includes:

  • Defined review checkpoints for AI-assisted artifacts
  • Named accountable approver(s) and decision rights
  • Minimum quality/safety checks (accuracy, bias, suitability)
  • Traceability (prompt/version/output capture) to support challenge and assurance

The key takeaway is that traceability and stakeholder trust depend on deliberate HITL design, not on ad hoc user behaviour.
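
A minimal sketch of that traceability capture, with hypothetical field names:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AITraceRecord:
    """Illustrative record: enough to reproduce and challenge an AI-assisted
    output later; the fields are assumptions, not a prescribed format."""
    prompt: str
    model_version: str
    output: str
    approved_by: Optional[str] = None  # stays None until HITL sign-off
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))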

Without defined HITL checkpoints and accountable sign-off, errors and bias can pass into deliverables unchecked.

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026