Try 10 focused AIPGF Foundation questions on Principles for Responsible and Trustworthy AI, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Foundation |
| Topic area | Principles for Responsible and Trustworthy AI |
| Blueprint weight | 13% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Principles for Responsible and Trustworthy AI for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Principles for Responsible and Trustworthy AI
You are the PM for a low-risk internal programme using an approved internal GenAI assistant to speed up reporting. Assurance asks you to “apply the full AI governance pack” used on a recent high-impact AI product.
Exhibit: AI Assistance Plan (excerpt)
Use: Draft weekly status report + action log
Data: Internal project data; no personal data; confidential
Tool: Approved internal GenAI; no external retention/training
Decision impact: Low (errors could misstate dates/actions)
HITL: PM reviews/edits before sending externally
Proposed controls: data classification check; spot-check vs sources
Assurance request: independent model validation + monthly audit
What is the best next action to balance governance rigor with delivery efficiency, based on the exhibit?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Under the Adaptability principle, governance should be proportional to the AI use case, data sensitivity, and impact of errors. The exhibit indicates low decision impact, no personal data, and an approved internal tool with HITL review, so a lightweight, documented control set is appropriate. The key is agreeing and recording the tailoring decision rather than defaulting to the heaviest available process.
Adaptability (tailoring) means selecting governance controls that are “fit for purpose” for the specific AI-assisted activity, so assurance effort does not block value delivery. In the exhibit, the AI is used to draft routine communications, uses confidential but non-personal internal data, and has clear HITL review before external sharing; the impact of errors is low.
A proportionate next step is to:
- Agree and record the tailoring decision with assurance
- Keep the lightweight controls already proposed (data classification check, spot-checks against sources)
- Retain HITL review before anything is sent externally
This preserves auditability and trust while avoiding heavyweight validation intended for high-impact AI products.
The exhibit supports a proportionate, documented control set (e.g., HITL review and source spot-checking) rather than heavyweight assurance designed for high-impact AI.
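The proportionality logic in this explanation can be sketched as a simple tailoring check. This is an illustrative sketch only: the function name, inputs, and control labels are assumptions drawn from the exhibit, not AIPGF definitions.

```python
# Hypothetical sketch of proportionate (tailored) governance selection.
# Inputs and control names are illustrative, based on the exhibit above.

def select_controls(decision_impact: str, personal_data: bool, approved_tool: bool) -> list:
    """Pick a control set proportional to the AI use case's risk profile."""
    # Baseline controls that apply regardless of risk tier.
    controls = ["data classification check", "HITL review before external sharing"]
    if decision_impact == "low" and not personal_data and approved_tool:
        # Lightweight, documented control set; record the tailoring decision.
        controls.append("spot-check outputs against sources")
        controls.append("record tailoring decision with assurance")
    else:
        # Scale up for higher impact or sensitive data.
        controls.append("independent model validation")
        controls.append("periodic audit")
    return controls
```

Applied to the exhibit (low impact, no personal data, approved tool), this yields the lightweight set plus a recorded tailoring decision, rather than defaulting to validation and audit.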
Topic: Principles for Responsible and Trustworthy AI
A PMO proposes using a generative AI assistant to draft project status reports and update the RAID log from meeting notes. The tool may be changed during delivery as new features become available.
Stakeholders require assurance that (1) humans remain accountable for decisions, (2) AI use is visible and explainable to recipients, and (3) the approach can be adjusted safely as performance and risks change.
Which artifact/evidence would best validate readiness against these expectations?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: An AI Assistance Plan is the most direct, auditable evidence that the proposed usage is governed in line with Human-Centricity, Transparency, and Adaptability. It can specify HITL approval points and accountability, what disclosures/explanations accompany AI-assisted outputs, and how the team will monitor outcomes and manage changes to prompts, data, or tooling.
To evaluate an AI usage approach with the three principles together, you need evidence that the project has (1) defined human decision rights and review checkpoints (Human-Centricity), (2) made AI involvement and limitations visible to intended audiences (Transparency), and (3) built in monitoring and controlled change so the approach can evolve safely as the tool, data, or context changes (Adaptability).
An approved AI Assistance Plan is purpose-built to capture these controls in one place, typically including:
- Named HITL approval points and accountable decision-makers
- Disclosures and explanations that accompany AI-assisted outputs
- Monitoring arrangements and controlled change to prompts, data, or tooling
Other artifacts can support assurance, but they do not, by themselves, validate all three principles in this scenario.
It provides documented controls that jointly evidence human accountability, transparency to stakeholders, and adaptation/monitoring arrangements.
Topic: Principles for Responsible and Trustworthy AI
A programme team is piloting an AI assistant to draft change requests and recommend approval decisions for a new citizen-facing service. Some changes could materially affect personal data processing and programme funding.
The AI Assistance Plan states: “High-impact approvals require a named accountable human approver (HITL). AI may recommend, but must not be the final decision-maker.”
Which action is INCORRECT under this human-centricity requirement?
Best answer: B
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Human-centricity in AIPGF means accountable humans retain decision rights for high-impact approvals, even when AI provides recommendations. In this scenario, privacy- and funding-affecting changes are explicitly high impact, so automation must stop short of final approval and keep a human in the loop with an auditable record.
This scenario tests when Human-Centricity overrides automation: when an outcome is high impact, the accountable human must make (and be able to justify) the decision. The AI Assistance Plan explicitly requires HITL for high-impact approvals, so AI can support speed and consistency by drafting and recommending, but it cannot become the final approver.
Practical governance controls that align with this include:
- A named accountable human approver for every high-impact change
- An auditable record of the human decision and its rationale
- AI limited to drafting and recommending, never to final approval
The key takeaway is that “high confidence” or similar AI scoring does not transfer accountability or decision rights away from humans for high-impact approvals.
High-impact approvals must remain with an accountable human; confidence scores do not replace decision rights.
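The decision-rights rule above can be sketched as a small gate. This is a hedged illustration: the class, field names, and threshold logic are assumptions, chosen only to show that an AI confidence score never substitutes for a named human approver on high-impact items.

```python
# Hypothetical sketch of the HITL decision-rights rule for high-impact approvals.
# All names are illustrative assumptions, not part of any AIPGF specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    impact: str                          # "high" or "low"
    ai_recommendation: str               # advisory only, e.g. "approve"
    ai_confidence: float                 # does NOT transfer decision rights
    human_approver: Optional[str] = None  # named accountable approver

def can_finalise(cr: ChangeRequest) -> bool:
    """High-impact approvals require a named accountable human,
    regardless of how confident the AI's recommendation is."""
    if cr.impact == "high":
        return cr.human_approver is not None
    return True
```

Note that `ai_confidence` is never consulted in the gate: even a 0.99 score cannot finalise a high-impact approval without a named human.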
Topic: Principles for Responsible and Trustworthy AI
During Foundation, a project team wants to use a generative AI tool to draft monthly client progress reports.
Constraints: client data is classified Confidential, and stakeholder trust is low due to a recent AI-related incident.
What is the BEST next action?
Best answer: D
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Adaptability means tailoring governance proportionately to context (risk, data sensitivity, stakeholder trust, and intended use). Here, Confidential data and low trust require scaling governance up in Foundation so the Activation gate has clear decision rights, HITL accountability, and evidence of controls. The best next action is to document and approve that tailored approach before using AI for client-facing outputs.
The AIPGF principle of Adaptability requires governance to scale with the project’s context rather than applying a one-size-fits-all approach. In Foundation, that means deciding (and documenting) what controls, roles, and evidence are needed to use AI safely and credibly.
With Confidential client data and low stakeholder trust, governance should be scaled up before Activation by:
- Documenting and approving a tailored AI Assistance Plan
- Defining clear decision rights and HITL accountability for client-facing outputs
- Gathering evidence of controls to present at the Activation gate
A fast pilot that violates tool/data rules or removes HITL undermines assurance, while a blanket pause until an enterprise-wide redesign is complete is disproportionate to the immediate project decision.
This applies Adaptability by scaling governance to high-sensitivity data and low trust, and produces the agreed controls and approvals needed to pass the Activation gate.
Topic: Principles for Responsible and Trustworthy AI
Midway through a programme, the team expands AI use from meeting-note summarisation to drafting sections of the business case and procurement documents. The programme manager updates the AI Assistance Plan, re-assesses risks and decision rights for the new tasks, and adjusts HITL review/approval steps before the outputs are used.
Which AIPGF principle is primarily being applied?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: This situation is about changing AI usage during delivery and then proportionally updating governance so the new AI-assisted outputs remain controlled and assured. Updating the AI Assistance Plan, revisiting risks/decision rights, and tightening HITL approvals are examples of tailoring governance to the new AI scope. That is the essence of the Adaptability principle in AIPGF.
The Adaptability principle requires governance to be tailored and re-tailored as AI usage evolves (new tools, new tasks, expanded scope, or higher-impact deliverables). In the scenario, AI moves into business-case and procurement content, increasing the consequences of errors, bias, confidentiality leakage, and inappropriate reliance. Good AIPGF practice is therefore to adjust governance, not to “freeze” the original controls.
Typical Adaptability actions include:
- Updating the AI Assistance Plan to cover the new tasks
- Re-assessing risks and decision rights for the higher-impact deliverables
- Tightening HITL review and approval steps before outputs are used
The key point is proportional re-governance triggered by a change in AI usage, rather than focusing mainly on disclosure or user experience design.
It tailors governance controls when AI use changes in scope, tools, or tasks during delivery.
Topic: Principles for Responsible and Trustworthy AI
A programme team is preparing to use a generative AI assistant to draft weekly status reports and risk updates. You review the following excerpt from the AI Assistance Plan.
AI Assistance Plan (excerpt)
Use cases: Draft status report; propose RAID updates
Inputs: Workstream emails, meeting notes
HITL review: "Optional when confidence score is high"
Approval: "PMO analyst publishes to Steering Pack"
Decision log: "Auto-captured from AI outputs"
Quality check: "Spot-check monthly"
Which interpretation or next action best addresses a sign of overreliance on AI while applying the Human-Centricity principle?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: The exhibit shows multiple overreliance indicators: optional human review, AI outputs feeding the decision log, and publishing without accountable human approval. Under Human-Centricity, AI may assist, but humans must retain decision-making and be clearly accountable. Making HITL review mandatory with defined approvers restores human judgment at the point where decisions and communications are finalized.
Overreliance is evident when AI outputs become the default “source of truth” and humans stop actively evaluating, challenging, and owning decisions. In the excerpt, review is optional, the decision log is auto-derived from AI text, and a PMO analyst can publish to the steering pack—together this weakens accountable human judgment.
A Human-Centricity control is to harden HITL and decision rights at the point of commitment and communication:
- Make HITL review mandatory, with named approvers
- Require accountable human sign-off before anything is published to the steering pack
- Base the decision log on human-approved entries, not raw AI output
Confidence scores and disclaimers can be supplementary, but they do not replace active human accountability.
Human judgment is being bypassed by optional review and delegated publication, so enforce clear human review and sign-off decision rights.
Topic: Principles for Responsible and Trustworthy AI
In AIPGF, which principle most directly supports resolving a conflict between delivery speed and governance by tailoring scope, controls, and approval steps to the project’s AI risk and context?
Best answer: A
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Adaptability is about adjusting governance to fit the AI use case, maturity, and risk so that assurance remains effective without creating unnecessary delay. It is the principle that legitimises changing scope, controls, and approval intensity to resolve speed-versus-governance trade-offs in AI-assisted work.
The AIPGF principle of Adaptability focuses on applying governance and assurance proportionately to the specific AI context (use case, uncertainty, data sensitivity, stakeholder impact, and organisational maturity). When delivery speed conflicts with governance, Adaptability supports changing what is governed and how—such as narrowing AI use to lower-risk tasks, selecting lighter-weight controls for low-risk outputs, or escalating approvals only for higher-impact AI-assisted deliverables. This preserves trustworthy outcomes while avoiding a “one-size-fits-all” process that can either over-control (slowing delivery) or under-control (increasing risk). The key takeaway is that Adaptability enables risk-based tailoring of controls and decision rights, whereas other concepts focus on different governance needs.
Adaptability is the AIPGF principle that promotes proportionate, context-sensitive governance and assurance to balance speed with risk.
Topic: Principles for Responsible and Trustworthy AI
During Activation, a programme office starts using a generative AI assistant to draft weekly status updates and risk narratives for an executive steering committee. To save time, the Programme Manager configures the workflow so AI-generated text is copied into the pack without a HITL review or sign-off, and no corrections are logged.
What is the most likely near-term impact of this omission?
Best answer: D
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Human-Centricity requires human oversight of AI-assisted deliverables to catch errors, bias, or unsafe recommendations before they affect decisions. Removing HITL review makes it likely that flawed content is shared with senior stakeholders. The earliest consequence is reduced trust and rapid rework to correct and reassure decision-makers.
This scenario tests Human-Centricity: AI-assisted outputs that influence decisions should have an appropriate Human-in-the-Loop (HITL) control to detect and correct hallucinations, biased wording, or unsafe recommendations before release. In the near term, an executive pack containing unreviewed AI-generated risk narratives can quickly propagate inaccurate or insensitive statements, triggering immediate stakeholder concern and loss of confidence in the programme office’s assurance.
Practical HITL controls include:
- Mandatory review and sign-off of AI-drafted narratives before they enter the pack
- Logging corrections so recurring errors become visible
- A named owner accountable for what the steering committee receives
Speed gains from skipping review do not compensate for the immediate trust impact when an error is noticed, especially in governance communications.
Without HITL review, AI errors can be published directly, quickly undermining trust and forcing rapid correction.
Topic: Principles for Responsible and Trustworthy AI
A project team is using a generative AI tool to propose risk responses and change-impact recommendations for a hospital systems rollout. The AI Assistance Plan requires HITL review because unsafe or biased recommendations could affect patient care.
Which action should the project manager NOT take when setting up human review to detect and correct AI errors or unsafe recommendations?
Best answer: D
What this tests: Principles for Responsible and Trustworthy AI
Explanation: Human-Centricity in AIPGF means AI-assisted outputs remain accountable to humans, especially where recommendations could be unsafe. HITL review must validate the AI’s reasoning and content against trusted sources and context, not proxy signals like confidence. Strong governance also captures learnings so recurring errors are reduced over time.
This scenario is about applying the Human-Centricity principle through effective HITL controls. When AI generates risk responses or change-impact recommendations, humans must detect hallucinations, bias, and unsafe suggestions by reviewing the content against evidence and operational/safety criteria. Tool-provided confidence or similar indicators are not a substitute for accountable human judgement.
Practical HITL setup typically includes:
- Qualified reviewers who validate AI content against trusted sources and safety criteria
- Accountable human approval before recommendations are acted on
- Capturing learnings so recurring errors are reduced over time
The key takeaway is that governance relies on accountable human approval and traceable evidence, not automated self-assurance signals from the AI.
Confidence cues are not assurance; a qualified human must review content against evidence and safety criteria before approval.
Topic: Principles for Responsible and Trustworthy AI
A project team is piloting a generative AI assistant to draft risk register entries and stakeholder updates. In the past two weeks, the AI has produced several incorrect risk ratings and occasionally biased phrasing, yet drafts were sent to stakeholders without any recorded human sign-off. When challenged, the team cannot reproduce how outputs were created because prompts and versions were not captured, and some staff used personal (unapproved) AI accounts to “save time.” The sponsor pauses rollout due to loss of trust and limited realised benefit.
What is the most likely underlying cause of this governance breakdown?
Best answer: C
What this tests: Principles for Responsible and Trustworthy AI
Explanation: The pattern of unsafe outputs being issued, missing sign-off, and weak traceability indicates a failure to design and enforce Human-Centricity through Human-in-the-Loop controls. An AI Assistance Plan should define where human review is mandatory, who approves, and what evidence (e.g., prompt/output records) is retained. With those controls, reviewers can detect and correct errors, bias, or unsafe recommendations before release.
Under the Human-Centricity principle, AI-assisted project outputs should remain under meaningful human control, with clear accountability for what is issued externally. The symptoms (unchecked errors/bias reaching stakeholders, missing sign-off, inability to reproduce outputs, and workarounds with unapproved accounts) most strongly point to an upstream governance design gap: HITL was not explicitly required and operationalised.
A proportionate HITL setup typically includes:
- Mandatory review checkpoints with human sign-off before release
- Named approvers whose decisions are recorded
- Retained evidence (prompts, versions, outputs) so results can be reproduced
- Use of approved tools and accounts only
The key takeaway is that traceability and stakeholder trust depend on deliberate HITL design, not on ad hoc user behaviour.
Without defined HITL checkpoints and accountable sign-off, errors and bias can pass into deliverables unchecked.
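The traceability gap described in this scenario (no captured prompts or versions, no recorded sign-off) can be illustrated with a minimal evidence-log sketch. The function and field names below are hypothetical assumptions, shown only to make concrete what "reproducible, signed-off record" means.

```python
# Hypothetical sketch: capturing the evidence trail (prompt, tool, output,
# reviewer sign-off) that lets a team reproduce how an AI-assisted output
# was created. Field names are illustrative assumptions.

import datetime

def log_ai_output(log: list, prompt: str, tool: str, output: str,
                  reviewer: str, approved: bool) -> dict:
    """Append a reproducible, signed-off record for each AI-assisted deliverable."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,      # captured so outputs can be reproduced
        "tool": tool,          # approved tools only; no personal accounts
        "output": output,
        "reviewer": reviewer,  # named accountable human
        "approved": approved,  # explicit sign-off before release
    }
    log.append(entry)
    return entry
```

With a record like this, the team in the scenario could have answered the sponsor's challenge: which prompt produced the output, which tool was used, and who approved release.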
Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.