Try 10 focused AIPGF Practitioner questions on Roles, Responsibilities, and Accountabilities, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | AIPGF Practitioner |
| Topic area | Roles, Responsibilities, and Accountabilities |
| Blueprint weight | 13% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Roles, Responsibilities, and Accountabilities for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Roles, Responsibilities, and Accountabilities
A bank is preparing an Activation gate for a GenAI assistant that drafts customer letters. A recent internal review found that the same team members who build prompts also sign off “ready for release,” and the audit team wants evidence that oversight/assurance decision rights are clearly separated from delivery responsibilities for AI-assisted outputs.
Which artifact best validates this control is in place and operating as intended?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: To validate separation between oversight/assurance and delivery roles, you need evidence of defined accountabilities and decision rights for AI-assisted work products. A signed RACI is project-specific and shows who does the work versus who independently assures and approves at the gate, supporting auditability of the control.
This situation is testing a governance control: separation of duties between delivery (building prompts, configuring workflows, producing drafts) and oversight/assurance (independent review, challenge, and go/no-go approval for AI-assisted outputs). The strongest validation is evidence that explicitly assigns responsibility and approval authority and shows it was agreed and adopted for the project.
A RACI (or equivalent accountability matrix) provides that evidence because it documents who is Responsible for delivery tasks, who is Accountable for decisions, and which assurance stakeholders must be Consulted or Informed. A sign-off demonstrates the control is implemented for the release gate. A benefits tracker, maturity assessment, or risk register may be a useful governance input, but none of them directly evidences decision-right separation for this project.
A signed RACI evidences clear, project-specific separation of delivery responsibilities from independent oversight/assurance approvals.
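The separation-of-duties control above can be sketched as a simple data check. This is an illustrative sketch only: the RACI structure, names, and task labels are assumptions, not an AIPGF-defined schema.

```python
# Hypothetical sketch: a project RACI held as data, with a check that no one
# who delivers (builds prompts, configures workflows) also holds gate
# approval. Names and task labels are illustrative assumptions.

raci = {
    "build_prompts":      {"R": {"alice"}, "A": {"bob"}},
    "configure_workflow": {"R": {"alice"}, "A": {"bob"}},
    "gate_approval":      {"R": {"carol"}, "A": {"carol"}},
}

DELIVERY_TASKS = {"build_prompts", "configure_workflow"}
ASSURANCE_TASKS = {"gate_approval"}

def separation_violations(raci):
    """Return people who both deliver and approve at the gate."""
    delivery = set()
    for task in DELIVERY_TASKS:
        delivery |= raci[task]["R"] | raci[task]["A"]
    assurance = set()
    for task in ASSURANCE_TASKS:
        assurance |= raci[task]["R"] | raci[task]["A"]
    return delivery & assurance

print(sorted(separation_violations(raci)))  # empty: carol is independent
```

If the delivery lead also appeared against `gate_approval`, the check would flag them, which is exactly the audit finding in the scenario.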
Topic: Roles, Responsibilities, and Accountabilities
A public-sector team is in the Activation stage piloting a GenAI assistant that drafts responses for a benefits contact center. The project is rated high risk (Tier 3) due to potential harm from incorrect guidance, and internal audit requires clear traceability of AI-assisted decisions and changes. Operations leaders want fast iteration, but the union and service owners want transparency about how human review (HITL) will work.
Which communication approach best fits the risk tier and stakeholder needs while staying efficient?
Best answer: D
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: For a Tier 3 pilot, communications should be more frequent and assurance-focused, with clear visibility of risks, HITL effectiveness, model changes, and decision traceability. Different stakeholders need different levels of detail: operational leaders need frequent metrics, while governance and workforce stakeholders need regular, structured oversight. Time-bound incident communication supports transparency and fast containment without creating unnecessary reporting overhead.
The core concept is risk-tiered stakeholder communications: higher-risk AI projects require more frequent, structured governance communications and stronger transparency to affected stakeholders, while still tailoring detail to avoid overload. In this Tier 3 Activation pilot, internal audit needs evidence of traceability (decisions and changes), and the union/service owner needs confidence in human oversight and how staff/users are impacted.
A balanced plan typically:
- gives operations leaders frequent, metric-focused updates to support fast iteration;
- provides internal audit with regular, structured evidence of decision and change traceability;
- keeps the union and service owners informed about how HITL review operates and how staff are affected;
- commits to time-bound incident communications for transparency and fast containment.
The key takeaway is to match content and frequency to both risk tier and stakeholder decision needs, not to broadcast everything to everyone or to hide assurance signals.
It tailors content and frequency by stakeholder role while meeting Tier 3 transparency, assurance, and timely-incident expectations without flooding everyone with operational detail.
Topic: Roles, Responsibilities, and Accountabilities
A retail bank wants to deploy a GenAI assistant to draft responses for customer service agents. The draft AI Assistance Plan is complete, and the product owner wants to start an Activation-stage pilot next week.
The pilot would use the last 12 months of chat transcripts, which contain customer PII and occasional health-related disclosures. The GenAI tool is a third-party cloud service, and the bank’s risk team has classified the use case as high impact.
What is the best next step in the AIPGF flow?
Best answer: D
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: Because the pilot uses PII (including potentially sensitive disclosures) and a third-party cloud GenAI tool, specialist roles must be involved before moving from Foundation into Activation. Privacy/data protection, security, and legal/compliance need to validate lawful use, controls, and supplier terms, and to update the AI Assistance Plan and approve it through governance before any data exposure.
In AIPGF, specialist involvement is triggered by context: high-impact classification, use of personal (especially sensitive) data, and third-party processing all raise governance and assurance requirements. Before entering Activation (tool configuration and pilot execution), you need specialist input to confirm data protection impacts, security controls, and contractual/compliance constraints so the AI Assistance Plan is operational and approvable.
A practical sequence is:
1. Engage privacy/data protection to confirm lawful use and complete any required data protection impact assessment.
2. Have security review the third-party cloud tool's controls (data handling, retention, access).
3. Have legal/compliance validate supplier terms and regulatory constraints.
4. Update the AI Assistance Plan with the agreed controls and obtain governance approval before exposing any transcripts.
Internal audit can be engaged to shape evidence expectations, but a full audit is typically not the immediate next step before basic specialist risk controls are defined and agreed.
High-impact use of sensitive personal data with a third-party tool requires specialist assessments and approvals before Activation.
Topic: Roles, Responsibilities, and Accountabilities
You are the project manager for an AI-assisted programme in the Activation stage. A GenAI tool will draft credit-decision narratives for relationship managers to review before submitting decisions. Constraints: the use case is high-risk, HITL approval is mandatory, an internal audit in 6 weeks requires clear accountability and audit trails, and the tool cannot retain prompts/outputs beyond the user session. Go-live is in 2 weeks.
What is the BEST next governance action?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: Before a high-risk, HITL-required GenAI capability goes live, AIPGF expects clear decision rights and accountabilities for both AI usage and the human decision. Creating and approving a RACI/decision-rights map ensures named owners for the AI-assisted deliverable, the final credit decision, evidence capture, and escalation—supporting auditability despite tool retention limits.
The core governance need here is role clarity: who is accountable for using AI, who is accountable for the final business decision under HITL, and who provides oversight/assurance. In Activation, the best next action is to formalise and approve decision rights and responsibilities (typically as a RACI) so the project can operate with clear approvals, escalation paths, and auditable evidence—especially important in a high-risk use case with an imminent audit and limited tool retention.
A fit-for-purpose RACI/decision-rights map should explicitly assign:
- who is Responsible and Accountable for the AI-assisted draft narratives;
- who holds HITL accountability for the final credit decision;
- who captures and retains evidence, given the tool does not retain prompts/outputs beyond the session;
- who provides independent oversight/assurance, and how escalation works.
This enables compliant operation without relying on the tool to store prompts/outputs, because evidence capture and oversight responsibilities are assigned up front.
This establishes accountable owners (including HITL decision accountability and independent assurance) and enables auditable governance before go-live.
Topic: Roles, Responsibilities, and Accountabilities
In the APMG AI Project Governance Framework (AIPGF), which term best describes the artifact used to align stakeholders on where AI will (and will not) be used, its limitations, and the human responsibility boundaries for AI-assisted outputs and decisions?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The AI Assistance Plan is the AIPGF artifact specifically intended to make AI usage explicit and governable for stakeholders. It clarifies intended uses and limitations, plus who is accountable for reviewing, approving, and owning outcomes when AI is involved. This directly manages expectations and responsibility boundaries.
In AIPGF, aligning stakeholders on AI capability, limitations, and responsibility boundaries requires an explicit, agreed artifact that defines how AI will be used in the project and how humans retain accountability. The AI Assistance Plan serves this purpose by making the AI “contract” visible and testable: what tasks AI supports, what it must not be used for, required human review (including HITL where appropriate), escalation/approval points, and how AI-assisted outputs are labeled and logged for transparency and auditability.
When expectations shift (e.g., pressure to automate decisions), the AI Assistance Plan is the baseline used to evaluate whether the proposed change is acceptable, needs additional controls, or must go through formal governance gates. The key takeaway is that this plan sets clear boundaries so stakeholders do not over-trust AI or misassign accountability.
It documents the intended AI use, constraints, human accountabilities, and approval/oversight expectations to set clear stakeholder boundaries.
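The boundary-setting role of the AI Assistance Plan can be illustrated as structured data with a simple "is this use within the plan?" check. The field names and use-case labels below are illustrative assumptions, not an AIPGF-defined schema.

```python
# Hypothetical sketch: key AI Assistance Plan boundaries captured as data,
# with a check that a proposed AI use stays within agreed limits.
# Field names and use-case labels are illustrative assumptions.

ai_assistance_plan = {
    "allowed_uses":    {"draft_customer_letters", "summarize_complaints"},
    "prohibited_uses": {"make_credit_decisions", "send_without_review"},
    "human_review_required": True,
    "output_labeling": "AI-assisted, reviewed by named owner",
}

def is_within_plan(proposed_use):
    """Return True only if the proposed use is explicitly allowed."""
    if proposed_use in ai_assistance_plan["prohibited_uses"]:
        return False
    return proposed_use in ai_assistance_plan["allowed_uses"]
```

Treating anything not explicitly allowed as out of scope mirrors the point in the explanation: when expectations shift, the plan is the baseline against which a proposed change is evaluated.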
Topic: Roles, Responsibilities, and Accountabilities
A bank is piloting a GenAI assistant to draft responses for customer complaints. The project’s governance approach calls for an independent assurance role to review the AI Assistance Plan and sign off on readiness evidence at the Evaluation gate.
To meet a tight launch date, the project sponsor decides that the delivery lead (who built the prompt library and selected training data) will also act as the “assurance approver” and will sign the gate decision based on the team’s own testing notes.
What is the most likely near-term impact of this decision?
Best answer: A
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: Assurance/oversight roles provide independent challenge and defensible evidence for go/no-go decisions. When delivery roles self-approve their own AI readiness, the project creates a conflict of interest and blurs accountability. The immediate consequence is weaker audit trails and reduced stakeholder confidence in the gate outcome.
The core concept is separating delivery responsibilities (building/configuring and producing evidence) from oversight/assurance responsibilities (independently reviewing, challenging, and approving). In the scenario, the same person who designed the GenAI approach also signs the Evaluation gate, based on internal notes. That undermines independence and makes it difficult to demonstrate objective review, clear decision rights, and reliable governance evidence.
Near-term effects typically show up as:
- a conflict of interest in the gate decision, because the producer of the evidence also approves it;
- weaker audit trails, since self-generated testing notes receive no independent challenge;
- blurred accountability if issues emerge after launch;
- reduced stakeholder confidence in the gate outcome.
The schedule-driven shortcut might speed delivery, but the governance impact felt most immediately is a loss of auditability and trust in the assurance process.
Having the delivery team self-approve removes independent oversight, making gate decisions harder to defend and evidence in audit.
Topic: Roles, Responsibilities, and Accountabilities
In the APMG AI Project Governance Framework (AIPGF), what is the term for the project artifact that documents where AI/GenAI will be used, who is accountable for AI-assisted outputs, which AI usage decisions require stakeholder review/approval, and how changes to AI usage will be communicated?
Best answer: A
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: The AIPGF uses an AI Assistance Plan to make AI usage explicit and governable. It clarifies who does what (including HITL accountabilities), what decisions need stakeholder input or approval, and how AI usage decisions and subsequent changes are communicated and evidenced. This directly supports planning stakeholder engagement meetings and communication strategies around AI use.
The core concept is having a single, agreed artifact that turns “we might use AI” into controlled, auditable project practice. In AIPGF, the AI Assistance Plan captures the scope of AI/GenAI assistance, the decision rights for AI usage decisions (including escalation), and the stakeholder engagement and communications needed when AI use is introduced, changed, or withdrawn. It supports Human-Centricity and Transparency by setting clear expectations for impacted stakeholders and by making approvals and accountability explicit, typically alongside HITL arrangements. A general project RACI or a generic decision log can support governance, but they do not, by themselves, define and control AI usage decisions and change communications end-to-end.
It defines intended AI use, decision rights, accountabilities, and the engagement/communications needed to govern AI usage changes.
Topic: Roles, Responsibilities, and Accountabilities
In an AIPGF-governed AI project, which term describes the mechanism used to ensure decisions are made at the appropriate governance level (project, programme, or portfolio) and to define escalation when authority is exceeded?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: A decision rights matrix is used to assign decision authority to the correct governance level (project, programme, or portfolio) and to make escalation explicit. This directly supports checking whether governance decisions are being taken at the right level and prevents decision-making drifting to the wrong forum.
The core concept is decision rights: specifying who has the authority to decide (and at what level) for defined decision types, with clear escalation routes when thresholds are exceeded. In AIPGF, this supports effective governance gates by ensuring that operational decisions stay within the project’s mandate, while cross-project trade-offs, risk appetite changes, or strategic value decisions are elevated to programme/portfolio governance as appropriate. A decision rights matrix typically complements (but is not the same as) role accountability tools by focusing on decision authority and escalation rather than activity ownership. The key takeaway is that “right level” governance is achieved by explicitly mapping decision types to decision-makers and levels.
It allocates decision authority by governance level and defines escalation paths for out-of-scope decisions.
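A decision rights matrix of the kind described above can be sketched as a lookup with an escalation rule. The decision types, levels, and threshold below are illustrative assumptions, not values defined by AIPGF.

```python
# Hypothetical sketch: a decision rights matrix mapping decision types to
# governance levels, with escalation when a decision exceeds the project's
# mandate. Decision types, levels, and the threshold are assumptions.

DECISION_RIGHTS = {
    "tool_configuration":     "project",
    "budget_change":          "project",
    "cross_project_tradeoff": "programme",
    "risk_appetite_change":   "portfolio",
}

ESCALATION = {"project": "programme", "programme": "portfolio"}
BUDGET_MANDATE_LIMIT = 50_000  # illustrative project authority limit

def decision_level(decision_type, amount=0):
    """Return the governance level authorised to take this decision."""
    level = DECISION_RIGHTS.get(decision_type)
    if level is None:
        raise KeyError(f"No decision right defined for {decision_type!r}")
    if decision_type == "budget_change" and amount > BUDGET_MANDATE_LIMIT:
        level = ESCALATION[level]  # authority exceeded, escalate
    return level
```

For example, a routine budget change stays with the project, while one above the mandate limit escalates to programme level, which is the "right level" behaviour the explanation describes.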
Topic: Roles, Responsibilities, and Accountabilities
A healthcare claims team is in the Activation stage building a GenAI assistant to summarize adjuster notes. The AI Assistance Plan lists allowed use cases, but it does not assign who is accountable for approving (1) new AI tools and (2) access to the claims dataset. Under delivery pressure, the project manager tells squad leads to “self-approve” any tool and data access they need.
What is the most likely near-term impact of this omission on governance outcomes?
Best answer: C
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: Failing to define accountable roles for approving AI tools and data access creates an immediate control gap: no consistent decision rights, no reliable evidence of who approved what, and no defensible access pathway. In a healthcare context, that quickly translates into weakened trust and audit readiness, and increased exposure to policy and privacy non-compliance.
The core governance concept is assigning clear accountabilities and decision rights for AI tool approval and data access approval. When squads “self-approve,” approvals are likely to vary by team, bypass required checks (e.g., data owner and security review), and leave incomplete decision records. In the near term, the organization cannot demonstrate controlled access and authorized tool usage, which directly harms auditability and raises compliance and operational risk—especially with sensitive claims data.
Practically, the Activation gate should require:
- named accountable approvers for new AI tools and for access to the claims dataset;
- data owner and security review before any access is granted;
- a decision record evidencing who approved what, when, and on what basis.
The immediate consequence is a weak, non-defensible audit trail, not downstream performance issues like drift or bias.
Without defined approvers and decision rights, approvals become inconsistent and poorly evidenced, undermining audit trails and increasing near-term compliance risk.
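The control gap in this scenario can be sketched as a check that every tool or data-access approval is signed off by an authorised role rather than self-approved. The roles and record fields below are illustrative assumptions, not an AIPGF-defined schema.

```python
# Hypothetical sketch: validating that tool and data-access approvals are
# evidenced by a named, authorised approver. Roles, record fields, and
# names are illustrative assumptions.

AUTHORISED_APPROVERS = {
    "ai_tool":     {"ai_governance_lead"},
    "data_access": {"claims_data_owner", "security_officer"},
}

approvals = [
    {"type": "ai_tool",     "item": "genai-summarizer",
     "approved_by": "ai_governance_lead"},
    {"type": "data_access", "item": "claims_dataset",
     "approved_by": "squad_lead"},  # self-approved by delivery
]

def unauthorised(approvals):
    """Return approval records not signed off by an authorised role."""
    return [a for a in approvals
            if a["approved_by"] not in AUTHORISED_APPROVERS[a["type"]]]

for record in unauthorised(approvals):
    print(f"Control gap: {record['item']} approved by {record['approved_by']}")
```

The flagged record is exactly the kind of non-defensible, self-approved access the explanation warns about.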
Topic: Roles, Responsibilities, and Accountabilities
A bank is preparing a governance gate to release a GenAI tool that drafts credit memo summaries for underwriters. The project is high-risk and will be audited. In the last steering meeting, stakeholders disagreed on who can approve go-live, who provides risk advice, and who provides independent assurance of controls.
Which action should the project team NOT take to resolve this role ambiguity?
Best answer: B
What this tests: Roles, Responsibilities, and Accountabilities
Explanation: Role ambiguity is resolved by explicitly separating who decides, who advises, and who assures, then documenting those decision rights at governance gates. High-risk, audit-facing releases particularly need independent assurance to avoid conflicts of interest. Making the same role both approver and assurer undermines independence and auditability.
The core governance move is to make decision rights explicit and separable: one role is accountable to decide (e.g., go/no-go), relevant SMEs advise into that decision, and an independent function assures evidence and controls are adequate. In a high-risk, audit-facing AI release, independence of assurance is essential because it provides objective challenge and defensible traceability.
Practical ways to remove ambiguity include:
- naming a single accountable approver for the go-live decision;
- clearly identifying risk advisers who input to, but do not make, the decision;
- assigning an independent assurance function that reviews evidence and controls but does not approve the release;
- documenting these decision rights in the gate's terms of reference before the decision meeting.
Any approach that merges approval and assurance into one person erodes independence and weakens the governance gate.
Combining decision and independent assurance creates a conflict of interest and weakens governance at the gate.
Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.