AIPGF Practitioner: Principles for Responsible and Trustworthy AI

Try 10 focused AIPGF Practitioner questions on Principles for Responsible and Trustworthy AI, with answers and explanations, then continue with PM Mastery.


Topic snapshot

Exam route: AIPGF Practitioner
Topic area: Principles for Responsible and Trustworthy AI
Blueprint weight: 13%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate the Principles for Responsible and Trustworthy AI topic for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

First attempt
  What to do: Answer without checking the explanation first.
  What to record: The fact, rule, calculation, or judgment point that controlled your answer.

Review
  What to do: Read the explanation even when you were correct.
  What to record: Why the best answer is stronger than the closest distractor.

Repair
  What to do: Repeat only missed or uncertain items after a short break.
  What to record: The pattern behind misses, not the answer letter.

Transfer
  What to do: Return to mixed practice once the topic feels stable.
  What to record: Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Principles for Responsible and Trustworthy AI

A project team is delivering a public-facing customer support knowledge base. To meet the AIPGF Transparency principle, the sponsor asks for documentation that lets non-technical stakeholders see where GenAI was used, what it produced, and how humans assured the outputs.

Which approach should the project team NOT use?

  • A. Hide GenAI use to avoid stakeholder scrutiny
  • B. Maintain an AI usage register linking AI to deliverables
  • C. Annotate AI-assisted content with provenance and human review
  • D. Retain prompts/outputs, versions, and approval evidence for audit

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Transparency in AIPGF requires clear, stakeholder-friendly visibility of AI involvement: where it was used, what it generated, and what human checks were applied. Practices that create traceable links between AI use and specific outputs support assurance and accountability. Any attempt to conceal AI use prevents informed oversight and undermines trust.

The learning objective is to document AI usage so stakeholders can understand where AI was used and what was produced. Under the Transparency principle, AI-assisted work should be discoverable and explainable at the level of project deliverables (not just technical teams). This typically means maintaining records that connect each AI use case to the output, the source inputs (as appropriate), and the human-in-the-loop review/approval that made the output acceptable for release.

Practical evidence usually includes:

  • An AI usage register or AI Assistance Plan mapped to deliverables
  • Provenance notes on AI-assisted artifacts (what AI did vs. what humans did)
  • Retained prompts/outputs, versions, and approvals to support auditability
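
For illustration, one register entry might be recorded as in the Python sketch below. The field names are assumptions made for this example, not a schema the AIPGF prescribes.

    # Illustrative AI usage register entry (field names are assumptions,
    # not an AIPGF-mandated schema).
    register_entry = {
        "use_case": "Draft knowledge-base article on fee disputes",
        "deliverable": "KB-1042",
        "tool": "GenAI drafting assistant",
        "what_ai_did": "Produced first draft from approved source docs",
        "what_humans_did": "Fact-checked against policy; edited tone",
        "reviewer": "J. Smith",
        "approved_on": "2026-04-02",
        "evidence": ["prompt-log-7781", "draft-v1", "approved-v3"],
    }

An entry like this connects a specific AI use to a specific deliverable, the human review applied, and the retained evidence, which is exactly the traceability the Transparency principle asks for.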

The key takeaway is that transparency is achieved through explicit documentation and traceability, not by downplaying or hiding AI involvement.

Concealing AI involvement breaks transparency because stakeholders cannot understand where AI was used or what it produced.


Question 2

Topic: Principles for Responsible and Trustworthy AI

A bank’s complaints team uses GenAI to draft responses to disputed fees. The project is in early live use and a spike in escalations has been reported.

Exhibit: AI Assistance Plan + decision log (excerpt)

AI use: Draft customer response letter
Decision owner: Complaints manager
Required human check: Confirm facts vs case file; confirm tone
Approval evidence: Reviewer name + timestamp in log
Last 20 letters: 18 sent with Reviewer="N/A"; avg time 45 sec
Incidents: 3 customers claim incorrect fee dates cited

Based on the exhibit, what is the best next governance action to address overreliance on AI while maintaining human judgment?

  • A. Tune the prompt to make outputs more consistent and faster to send
  • B. Accept the risk because only drafts are generated and issues can be handled as complaints arrive
  • C. Reinstate a HITL approval step with recorded reviewer sign-off before sending
  • D. Add a disclaimer that letters were AI-assisted and continue current workflow

Best answer: C

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The exhibit indicates process drift: letters are being sent with no reviewer recorded despite a required human check, and incidents show factual errors escaping. Under Human-Centricity, governance should ensure meaningful human oversight at the decision point, with clear accountability and audit evidence. Enforcing HITL sign-off before release is the most direct control to prevent overreliance.

A key sign of overreliance is when “AI-assisted” work becomes effectively automated in practice (e.g., reviewer marked “N/A”) and human judgment is no longer applied where it was intended. Here, the plan explicitly requires a human check against the case file and sets an audit requirement (reviewer name + timestamp), yet most items bypass that control and errors are reaching customers.

The Human-Centricity response is to re-establish meaningful HITL at the point of customer impact:

  • Stop/contain the bypass (no-send without review)
  • Make decision rights explicit (who must approve)
  • Require evidence in the log to support assurance and audit
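
A minimal sketch of such a gate, assuming a letter record carries reviewer fields like those in the exhibit (names and structure are hypothetical):

    # Hypothetical send gate: a letter cannot leave without a named
    # reviewer sign-off recorded as audit evidence.
    def release_letter(letter, audit_log):
        reviewer = letter.get("reviewer")
        if not reviewer or reviewer == "N/A":
            raise PermissionError("No-send: HITL review sign-off is missing")
        audit_log.append({                     # evidence the plan requires
            "letter_id": letter["id"],
            "reviewer": reviewer,
            "approved_at": letter["approved_at"],
        })
        return "released"

    # Example: release_letter({"id": "L-101", "reviewer": "N/A"}, [])
    # raises PermissionError instead of sending.

The point of hard-wiring the check is that the control cannot drift to Reviewer="N/A" again: the bypass seen in the exhibit becomes impossible rather than merely discouraged.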

Disclaimers or prompt tweaks don’t restore human accountability for correctness and appropriateness before release.

The log shows the required human check is being bypassed, so enforcing and evidencing human approval directly restores human judgment.


Question 3

Topic: Principles for Responsible and Trustworthy AI

A public-facing GenAI assistant is being built to draft customer email replies for a regulated utility. In Activation, Marketing pushes for launch next week. The AIPGF lead proposes keeping the go/no-go gate, completing an AI Assistance Plan and decision log, and using HITL review, but de-scoping to a 50-agent internal pilot with auto-send disabled. The sponsor approves this plan to meet the date.

What is the most likely near-term impact of this decision?

  • A. Limited early value with preserved auditability and reduced risk exposure
  • B. Maximum benefit realization immediately through end-to-end email automation
  • C. Lower auditability because documentation can be deferred until after launch
  • D. Significant regulatory penalties triggered by public transparency failures

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: This is a speed-versus-governance tradeoff resolved by reducing scope instead of removing controls. An internal pilot with HITL and disabled auto-send limits harm while still realizing some benefits on schedule. Completing the AI Assistance Plan and decision log maintains transparency and auditability for assurance activities.

In AIPGF, when time pressure conflicts with responsible governance, a common adaptation is to reduce operational exposure (scope) while retaining essential controls and approvals. Here, moving from a public launch to an internal, HITL pilot lowers immediate risk (fewer users, no automated sending) while still allowing early value delivery. Keeping the go/no-go gate and producing key artifacts supports Transparency and auditability, making it easier to justify decisions and demonstrate appropriate oversight.

Practical pattern:

  • De-scope initial release to lower-risk functionality/users
  • Retain key controls (HITL, logging, approvals)
  • Timebox learning and schedule the next Evaluation checkpoint
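
Under those assumptions, the approved plan could be summarized as a release configuration like the hypothetical sketch below: scope shrinks, controls stay on.

    # Hypothetical pilot configuration: scope is reduced, controls stay on.
    pilot_config = {
        "audience": "internal_agents",   # not public-facing yet
        "max_users": 50,
        "auto_send_enabled": False,      # HITL review before any email leaves
        "hitl_review_required": True,
        "decision_log_enabled": True,
        "next_evaluation_checkpoint": "2 weeks after pilot start",
    }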

The near-term consequence is constrained benefits, not a governance failure.

De-scoping to an internal HITL pilot keeps core controls and evidence while delivering some value quickly with contained risk.


Question 4

Topic: Principles for Responsible and Trustworthy AI

You are in the Activation stage for a GenAI assistant that drafts adverse-action letters for a retail bank (risk tier: High). Policy requires HITL: a human must be accountable for the final wording sent to customers, and internal audit will review evidence in 4 weeks. In a 1-week pilot, agents increasingly copy/paste AI output with minimal editing, and two letters contained unsupported reasons. You cannot change the model or obtain training data; only workflow, access controls, and logging can be changed. What is the BEST next governance action?

  • A. Implement mandatory human approval with a review checklist and decision log
  • B. Stop the pilot until the vendor provides full model transparency
  • C. Allow auto-send when the AI confidence score is high enough
  • D. Continue the pilot but remind agents they remain accountable

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The key risk signal is overreliance: humans are deferring to AI outputs in a high-risk, customer-impacting process. The best next action is to enforce Human-Centricity by hard-wiring HITL approval into the workflow and capturing auditable evidence of human review and rationale before sending letters.

Overreliance shows up when people defer to AI output (copy/paste, reduced scrutiny) and errors slip into consequential decisions. In the Activation stage, the most effective control is to make human judgment explicit, accountable, and auditable at the point of use—especially when you cannot change the underlying model.

Implement a workflow gate that requires:

  • a named human approver for each letter
  • a short review checklist focused on factual support and customer impact
  • a decision log linking prompt/output, edits made, and approval rationale
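
A hedged sketch of what one decision-log entry behind that gate might capture (structure and field names are illustrative, not a mandated format):

    # Illustrative decision-log entry for one adverse-action letter.
    checklist = {
        "facts_supported_by_case_file": True,
        "reasons_match_the_decision": True,
        "tone_and_required_wording_ok": True,
    }
    decision_record = {
        "letter_id": "AA-2093",
        "prompt_ref": "prompt-log-5512",       # links prompt and output
        "ai_output_ref": "draft-v1",
        "edits_made": "Corrected fee date; removed unsupported reason",
        "checklist": checklist,
        "approver": "M. Rivera",               # named accountable human
        "rationale": "All reasons verified against the case file",
        "approved_at": "2026-03-30T14:12Z",
    }
    # The workflow gate refuses to send unless every checklist item is
    # True and an approver is named.
    assert all(checklist.values()) and decision_record["approver"]

Records like this give internal audit exactly what it will look for in 4 weeks: who approved each letter, what was checked, and what changed between AI draft and sent version.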

This aligns with Human-Centricity (humans retain agency and accountability) and meets the auditability constraint while allowing controlled continuation under time pressure.

It restores human judgment with enforceable HITL decision rights and auditable evidence before any customer-impacting release.


Question 5

Topic: Principles for Responsible and Trustworthy AI

In the AIPGF, what term best describes the expectation that an AI-assisted recommendation is delivered with understandable rationale and supporting evidence (not just the output text), so a human reviewer can assess and challenge it?

  • A. Auditability
  • B. Adaptability
  • C. Data provenance
  • D. Explainability

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: This expectation is captured by explainability within the Transparency principle. It focuses on providing human-understandable reasons and supporting evidence for an AI-assisted recommendation so reviewers can evaluate quality, appropriateness, and limitations. This is more than simply presenting an answer or a score.

Under the AIPGF Transparency principle, explainability is the capability to communicate why an AI-assisted recommendation was produced, using rationale and supporting evidence that a human can understand and interrogate. In practice, this means the recommendation is accompanied by key drivers/assumptions, relevant source references, and any material limitations so decision-makers can judge whether to accept, adjust, or reject it. This differs from mechanisms that mainly support after-the-fact inspection (logs) or the origin of data (lineage). The governance intent is to reduce “black box” decision-making and enable effective Human-in-the-Loop oversight.
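
In data terms, the difference is whether the recommendation travels with its reasoning. A hypothetical payload might look like:

    # Hypothetical AI-assisted recommendation with explainability fields,
    # versus a bare output a reviewer cannot interrogate.
    recommendation = {
        "output": "Approve vendor B for the support contract",
        "key_drivers": ["lowest 3-year cost", "meets SLA history threshold"],
        "assumptions": ["volumes stay within 10% of 2025 baseline"],
        "sources": ["vendor-scorecard-v4", "sla-report-2025Q4"],
        "limitations": ["no data on vendor B's new EU operation"],
    }
    bare_output = "Approve vendor B"  # answer only; nothing to challenge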

Explainability requires AI-assisted recommendations to include understandable reasons and evidence so humans can validate or contest them.


Question 6

Topic: Principles for Responsible and Trustworthy AI

A project team is rolling out a GenAI assistant to help service agents draft responses to customers. To ensure responsible use, the team designs the workflow so agents can override or ignore AI suggestions, must review outputs before sending, and can easily escalate unusual cases to a human specialist. Which AIPGF principle is this practice most directly demonstrating?

  • A. Automation-first efficiency
  • B. Human-Centricity
  • C. Transparency
  • D. Adaptability

Best answer: B

What this tests: Principles for Responsible and Trustworthy AI

Explanation: This workflow is designed around human control of AI assistance: people can challenge, override, and take responsibility for what is ultimately sent to customers. That is the essence of Human-Centricity in AIPGF—AI supports human decision-making rather than replacing it, especially where customer impact exists.

Human-Centricity in AIPGF means designing and governing AI assistance around people’s needs, agency, safety, and accountability. In practice, this shows up as keeping meaningful human control over AI-assisted outputs (HITL), ensuring users can understand and challenge suggestions, and providing escalation paths for edge cases or potential harm. In the scenario, requiring human review before sending, enabling override/ignore, and supporting escalation are all concrete controls that make the AI assistant subordinate to human judgment and responsibility. Transparency and Adaptability may also be relevant in a well-governed project, but they do not best describe the core intent of keeping humans in control of AI-assisted interactions.

It prioritizes human agency and well-being by keeping people in control of AI-assisted decisions through HITL review and override.


Question 7

Topic: Principles for Responsible and Trustworthy AI

A project team plans to use GenAI to screen job applicants by generating candidate summaries and a recommended ranking for hiring managers. To meet a 6‑week deadline, they propose: no disclosure to applicants, managers can accept rankings without review, and no post-launch monitoring because the model will be “good enough.”

Which evaluation response best applies the AIPGF principles of Human-Centricity, Transparency, and Adaptability together?

  • A. Approve if managers complete a one-time AI ethics training
  • B. Keep the design but schedule an annual model refresh cycle
  • C. Add applicant disclosure only; keep automated ranking for speed
  • D. Require HITL decisions, disclosure, and monitored review triggers

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: The best evaluation is the one that simultaneously protects people impacted by the AI, makes AI use understandable to stakeholders, and plans for change over time. In this scenario, that means keeping hiring decisions with humans (HITL), disclosing AI use and limitations to applicants, and establishing monitoring with clear review/rollback triggers.

Using AIPGF principles together means checking the proposal for all three dimensions, not treating them as optional add-ons. Here, the proposed “accept rankings without review” undermines Human-Centricity because it shifts decision responsibility away from accountable humans in a high-impact context. “No disclosure” undermines Transparency because affected stakeholders cannot understand or challenge AI involvement. “No monitoring” conflicts with Adaptability because performance, bias, and drift can emerge after deployment.

A principle-aligned evaluation would require:

  • Clear HITL decision rights and escalation for contested cases
  • Appropriate disclosure and traceable rationale for AI-assisted outputs
  • Ongoing monitoring with defined thresholds, review cadence, and rollback/remediation paths
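
A minimal sketch of monitored review triggers, with hypothetical metric names and thresholds:

    # Hypothetical monitoring rules: breaching any threshold triggers a
    # human review, and severe breaches trigger rollback.
    monitoring_rules = [
        {"metric": "override_rate", "threshold": 0.30, "action": "review"},
        {"metric": "outcome_gap_between_groups", "threshold": 0.05, "action": "review"},
        {"metric": "complaint_rate", "threshold": 0.02, "action": "rollback"},
    ]

    def evaluate(metrics):
        return [r["action"] for r in monitoring_rules
                if metrics.get(r["metric"], 0) > r["threshold"]]

    # e.g. evaluate({"override_rate": 0.4}) -> ["review"]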

A single annual refresh is too slow and reactive for adaptive governance in a sensitive use case.

It preserves human accountability, makes AI use and limits visible, and adapts controls through ongoing monitoring and review.


Question 8

Topic: Principles for Responsible and Trustworthy AI

Halfway through an HR transformation project, the team initially used a GenAI tool only to draft internal meeting notes under a documented AI Assistance Plan and HITL review.

Now they add a new GenAI plugin that drafts responses to employee queries and auto-populates the case-management system; supervisors are encouraged (but not required) to review before sending. The project manager decides not to update the AI Assistance Plan, risk log, or decision rights because “it’s the same project and we need speed.”

What is the most likely near-term impact of this governance decision?

  • A. Immediate regulatory fines due to using a new GenAI plugin
  • B. Faster delivery will reliably increase benefits realization within weeks
  • C. Model drift will accumulate, reducing answer accuracy over quarters
  • D. Lower auditability and unclear accountability for AI-assisted responses

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Under the Adaptability principle, governance must be re-tailored when AI usage changes in tool, task, or exposure. Moving from internal drafting to outward-facing case responses changes risk and accountability needs. Not updating the AI Assistance Plan, logs, and decision rights most immediately weakens traceability and ownership for AI-assisted outputs.

Adaptability means governance evolves as AI use evolves. In the scenario, the AI use shifts in a way that increases exposure: a new tool is introduced, outputs move closer to “being sent” (operational use), and HITL review becomes optional. If the team does not update governance artifacts and decision rights, the organization quickly loses the ability to show who approved the new use, what controls were required, and how outputs were reviewed.

Near-term, this typically shows up as an auditability and accountability gap:

  • No updated AI Assistance Plan describing the new use and controls
  • No refreshed risk/issue entries reflecting the changed exposure
  • No clear decision rights for who can approve/override AI outputs

The key takeaway is that expanding AI scope without re-tailoring governance immediately reduces trust and defensibility, even before longer-term performance issues appear.

Expanding AI use to customer-like communications without updating controls and decision rights quickly creates gaps in evidence and ownership for what was sent and why.


Question 9

Topic: Principles for Responsible and Trustworthy AI

A retail bank is deploying a GenAI assistant that drafts credit-limit change recommendations for relationship managers. The use case is classified internally as “high-impact” because it can materially affect customer outcomes and is subject to strict model risk management. To meet a launch date, the project manager keeps the same light governance used for low-risk pilots (no independent review, minimal evidence pack, and no tightened go/no-go criteria).

What is the most likely near-term impact of this decision?

  • A. The go/no-go gate is likely to pause launch due to insufficient assurance evidence and lack of independent review
  • B. Value realization will accelerate because reduced governance overhead increases adoption and shortens cycle time
  • C. Regulators will impose financial penalties once discriminatory outcomes are proven after months of live operation
  • D. Customer trust will gradually decline over the next year as users become dissatisfied with recommendation quality

Best answer: A

What this tests: Principles for Responsible and Trustworthy AI

Explanation: In an adaptable governance approach, high-impact or regulated use cases require increased rigor: more evidence, stricter gates, and independent review. Keeping “pilot-level” controls creates an immediate assurance gap. The most likely near-term consequence is a blocked or delayed approval because decision-makers cannot demonstrate auditability and compliance readiness for a high-impact deployment.

This tests the Adaptability principle: governance must be tailored to impact and regulatory exposure. When a use case is classified as high-impact, the organization typically expects additional artifacts (e.g., updated AI Assistance Plan, decision and risk logs, testing/validation results) and an independent review before go-live. If the project deliberately keeps lightweight controls, the first impact is usually not a future fine or slow trust erosion; it is an immediate inability to pass the release gate or satisfy an internal risk/audit checkpoint because required assurance evidence is missing. The key takeaway is that tailoring up governance creates near-term friction by design, trading speed for demonstrable control in high-impact contexts.
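
The gate failure can be pictured as a simple evidence check keyed to risk tier; the artifact names below are assumptions for illustration.

    # Hypothetical go/no-go check: required evidence scales with risk tier.
    REQUIRED_EVIDENCE = {
        "low": {"ai_assistance_plan"},
        "high": {"ai_assistance_plan", "decision_log", "risk_log",
                 "validation_results", "independent_review"},
    }

    def gate_decision(tier, evidence_pack):
        missing = REQUIRED_EVIDENCE[tier] - set(evidence_pack)
        return ("go", set()) if not missing else ("pause", missing)

    # A pilot-level pack against a high-impact tier pauses the launch:
    # gate_decision("high", {"ai_assistance_plan"})
    # -> ("pause", {"decision_log", "risk_log", "validation_results",
    #               "independent_review"})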

High-impact contexts require stricter gates and evidence, so missing independent review and artifacts most immediately blocks approval and auditability.


Question 10

Topic: Principles for Responsible and Trustworthy AI

A retail bank is piloting an AI-assisted loan approval workflow. The model’s recommendation can approve/decline customers and may materially affect access to credit (a high-impact decision). Business leaders want “straight-through processing” to cut turnaround time, and Internal Audit requires clear decision accountability and an appeal route.

Which approach should the project team AVOID to meet AIPGF human-centricity expectations?

  • A. Provide customers an appeal path with human reassessment
  • B. Log human overrides with rationale and decision accountability
  • C. Require human approval for declines and borderline approvals
  • D. Auto-decline applications below a threshold without review

Best answer: D

What this tests: Principles for Responsible and Trustworthy AI

Explanation: Human-centricity requires meaningful human involvement when AI outputs drive consequential outcomes like credit approval or denial. In this scenario, the governance expectation is that people retain decision rights for high-impact approvals/declines, with traceable accountability and a way for affected individuals to challenge outcomes. Removing human review for declines is the clearest anti-pattern under these constraints.

In AIPGF, Human-Centricity means prioritizing human agency, accountability, and contestability when AI affects people’s rights or opportunities. Credit decisions are high-impact, so governance should ensure a Human-in-the-Loop (or at least human-on-the-loop with clear escalation) for adverse or borderline outcomes, plus auditable decision records.

Practically, this means:

  • Keep a human decision-maker accountable for approvals/declines that materially affect customers.
  • Maintain decision logs and override rationales to support assurance and auditability.
  • Provide an appeal/recourse mechanism with human reassessment.
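
As a sketch, the routing rule that keeps humans in the loop for adverse outcomes might look like this (thresholds and labels are hypothetical):

    # Hypothetical routing: adverse or borderline outcomes always go to a
    # human decision-maker; nothing is auto-declined without review.
    def route(ai_recommendation, score, borderline=(0.45, 0.55)):
        if ai_recommendation == "decline":
            return "human_review"            # never auto-decline
        if borderline[0] <= score <= borderline[1]:
            return "human_review"            # borderline approvals reviewed
        return "auto_approve_logged"         # clear approvals, with audit log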

The anti-pattern is using automation to issue adverse outcomes without human review, because it removes meaningful human agency and makes errors harder to detect and remediate.

Fully automating high-impact declines without human oversight or recourse violates human-centricity for consequential decisions.

Continue with full practice

Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Free review resource

Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026