AIPGF Foundation: Roles, Responsibilities, and Accountabilities

Try 10 focused AIPGF Foundation questions on Roles, Responsibilities, and Accountabilities, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Exam route: AIPGF Foundation
Topic area: Roles, Responsibilities, and Accountabilities
Blueprint weight: 13%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Roles, Responsibilities, and Accountabilities for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

Pass: First attempt
What to do: Answer without checking the explanation first.
What to record: The fact, rule, calculation, or judgment point that controlled your answer.

Pass: Review
What to do: Read the explanation even when you were correct.
What to record: Why the best answer is stronger than the closest distractor.

Pass: Repair
What to do: Repeat only missed or uncertain items after a short break.
What to record: The pattern behind misses, not the answer letter.

Pass: Transfer
What to do: Return to mixed practice once the topic feels stable.
What to record: Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Roles, Responsibilities, and Accountabilities

A project team wants to use an external AI vendor to summarise service-desk tickets to help draft release notes. The tickets include personal data and commercially sensitive details. The team proposes to share a redacted dataset, restrict vendor access to only the minimum fields and a time-limited workspace, and obtain a recorded approval before any transfer.

In the AIPGF context, which role responsibility is primarily being applied here?

  • A. Information/data owner approves third-party access on least-privilege basis
  • B. AI assurance lead approves access after completing model testing
  • C. Project manager approves data sharing to meet delivery timelines
  • D. Procurement lead approves access once the contract is signed

Best answer: A

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: Least privilege plus explicit, recorded approvals is a governance control for third-party data sharing. In AIPGF terms, the key accountability is the role that owns the information asset and has the decision right to authorise access and conditions of use. This ensures data minimisation, appropriate access scope, and auditable consent before transfer to a vendor.

When external parties support AI-assisted project work, AIPGF expects clear decision rights for data sharing and access. Applying least privilege means limiting the shared dataset and access permissions to what is strictly necessary (e.g., redaction, minimal fields, time-bound access, segregated workspace). “Clear approvals” means the accountable owner of the information asset authorises the sharing conditions and that the decision is recorded for auditability and ongoing assurance. This is distinct from delivery management, contracting activity, or technical assurance: those roles may contribute controls or evidence, but they do not typically hold the primary approval authority for releasing data to a third party. Key takeaway: data release approvals should sit with the accountable data/information owner, supported by security/privacy and project governance as needed.

Approving external data sharing and ensuring least privilege is a decision right typically held by the accountable information/data owner.


Question 2

Topic: Roles, Responsibilities, and Accountabilities

A programme team is using a generative AI tool to draft business cases and stakeholder briefings. Over the last month, several issues have emerged: no consistent Human-in-the-Loop (HITL) sign-off before documents are shared, prompts and outputs are not logged, one workstream started using an unapproved AI tool, stakeholders report declining trust in the outputs, and the expected time-saving benefits are not being realised.

Which is the most likely underlying cause of this AI governance breakdown?

  • A. Stakeholder engagement communications have been insufficient
  • B. Decision rights and assurance roles for AI use are not clearly defined at governance gates
  • C. The team failed to document prompts and outputs in a consistent format
  • D. The AI tool is technically incapable of producing reliable content for this domain

Best answer: B

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The pattern of issues points to unclear accountability: nobody is explicitly deciding, advising, and assuring AI-assisted outputs at the right governance points. When decision rights for tool use, HITL approval, and required evidence (e.g., prompt/output logs) are ambiguous, teams bypass controls, assurance is missing, and trust and benefits deteriorate.

This scenario shows multiple control failures (unapproved tool use, missing HITL sign-off, no audit trail, and stakeholder distrust) happening at once, which most commonly indicates a governance design problem rather than a single execution lapse. In AIPGF terms, decision rights and assurance responsibilities should be explicit at governance gates so people know who:

  • decides whether an AI tool/use case is permitted
  • advises on safe use (risk, data handling, transparency)
  • assures that evidence exists (logs, reviews, approvals) before release

When this RACI/decision-rights clarity is missing, teams default to local practices, controls become optional, and benefits tracking becomes unreliable because AI use is inconsistent and unmanaged. The key takeaway is to resolve role ambiguity so approvals and assurance evidence are consistently required and enforced.

Without clear RACI/decision rights for tool approval, HITL sign-off, and assurance evidence, teams act inconsistently and controls are not enforced.


Question 3

Topic: Roles, Responsibilities, and Accountabilities

In an AI-assisted programme, a generative AI tool will be used to draft stakeholder communications and sections of the business case. You are creating a RACI for the AIPGF Foundation gate activity: “Agree and sign off the AI Assistance Plan” (including intended uses, data constraints, and the HITL review/approval approach).

Which RACI assignment is MOST appropriate?

Roles: Senior Responsible Owner (SRO), Programme Manager (PgM), AI Lead, Data Protection Officer (DPO), Independent Assurance, Delivery Team.

  • A. A: AI Lead; R: PgM; C: SRO, DPO; I: Assurance, Team
  • B. A: DPO; R: AI Lead; C: SRO, Assurance; I: PgM, Team
  • C. A: SRO; R: PgM; C: AI Lead, DPO, Assurance; I: Team
  • D. A: PgM; R: AI Lead; C: SRO; I: DPO, Assurance, Team

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: In AIPGF, accountability for approving AI-assisted ways of working at the Foundation gate should sit with the business owner who owns outcomes and risk acceptance (typically the SRO). The programme manager is usually responsible for producing and coordinating the AI Assistance Plan, while AI, privacy, and assurance roles provide expert input and challenge.

A responsibility matrix in AIPGF should reflect decision rights at governance gates: who can accept the residual risk and authorise AI use for project deliverables. For the Foundation gate activity of agreeing and signing off the AI Assistance Plan, the accountable role is typically the SRO (or equivalent business owner) because they own benefits, stakeholder trust, and risk acceptance. The programme manager is responsible for drafting, consolidating evidence, and routing the plan through the gate. Subject-matter roles (AI Lead for feasibility/controls, DPO for privacy/data handling, and Independent Assurance for objective challenge) are consulted to strengthen transparency and assurance. The delivery team is informed so they apply the agreed HITL review and usage constraints consistently.

The SRO holds decision rights for approving AI use, while the PgM prepares the plan and specialists/assurance advise.


Question 4

Topic: Roles, Responsibilities, and Accountabilities

You are asked to build a RACI for AI governance activities across the AIPGF stages (Foundation, Activation, Evaluation) for a programme using generative AI to draft stakeholder communications and status reports.

A draft list of activities exists (e.g., approve the AI Assistance Plan; approve AI-assisted deliverables after HITL review; approve monitoring and incident response actions), but the roles are unclear across PMO, Product Owner, Data Protection Officer, and an independent assurance lead.

What should you clarify FIRST to create a valid RACI?

  • A. Who is accountable to approve each AI governance gate outcome
  • B. How many staff need AI training this quarter
  • C. Which AI tool the team prefers to use
  • D. The programme’s overall budget contingency percentage

Best answer: A

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: To build a RACI across AIPGF stages, you first need unambiguous decision authority for the governance outputs and gates (who is Accountable for approvals). Once accountability is known, you can assign who is Responsible for producing evidence/artifacts and who must be Consulted or Informed for each activity.

A RACI is a responsibility matrix, but it only works when decision rights are clear. In AIPGF governance, the most critical clarifier is who is Accountable for approving the stage-gate outcomes and AI-assisted outputs (e.g., the AI Assistance Plan in Foundation, HITL sign-off of AI-assisted deliverables in Activation, and monitoring/incident actions in Evaluation). With the Accountable role defined per activity, you can then allocate:

  • Responsible: who prepares the artifact/evidence
  • Consulted: who provides input (e.g., privacy, security, assurance)
  • Informed: who needs visibility

Without gate-level accountability, the RACI will be internally inconsistent and approvals will be disputed later.
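The consistency rule above can be expressed mechanically: in any valid RACI, each activity has exactly one Accountable role and at least one Responsible role. A minimal sketch of that check follows; the activity and role names are hypothetical illustrations, not AIPGF-prescribed terms.

```python
# Illustrative sketch: check a draft RACI matrix for the two structural
# rules discussed above. Activity/role names are hypothetical examples.

def validate_raci(raci):
    """Return a list of problems in a {activity: {role: code}} matrix.

    Rules checked: exactly one Accountable ("A") and at least one
    Responsible ("R") per activity.
    """
    problems = []
    for activity, assignments in raci.items():
        codes = list(assignments.values())
        if codes.count("A") != 1:
            problems.append(
                f"{activity}: needs exactly one Accountable, found {codes.count('A')}"
            )
        if codes.count("R") < 1:
            problems.append(f"{activity}: needs at least one Responsible")
    return problems

draft = {
    "Approve AI Assistance Plan": {"SRO": "A", "PgM": "R", "DPO": "C", "Team": "I"},
    "Approve AI-assisted deliverable": {"PgM": "R", "DPO": "C"},  # Accountable still unassigned
}

for issue in validate_raci(draft):
    print(issue)  # flags the activity with no Accountable role
```

Running a check like this on a draft surfaces exactly the dispute the explanation warns about: activities where no one holds the decision right.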

A RACI must be anchored on clear decision rights at each stage gate so accountability can be assigned before responsibilities and consultations.


Question 5

Topic: Roles, Responsibilities, and Accountabilities

You are in the Activation stage of a programme introducing an AI assistant to draft weekly status reports and action logs for multiple projects. A stakeholder resistance map has been created to anticipate objections and plan mitigations.

Exhibit: Resistance map (excerpt)

Stakeholder: Compliance lead
Resistance: High
Objection: Unclear data handling and audit trail

Stakeholder: PM community lead
Resistance: Medium
Objection: Fear of inaccurate (hallucinated) reporting

Stakeholder: Works council rep
Resistance: Medium
Objection: Job impact and deskilling concerns

Stakeholder: Sponsor
Resistance: Low
Objection: Wants benefits proven quickly

Which action is NOT an appropriate use of this resistance mapping?

  • A. Exclude the compliance lead from approvals during the pilot
  • B. Run a HITL walkthrough showing review and traceability steps
  • C. Add engagement actions and owners into the communications plan
  • D. Update the AI Assistance Plan with stakeholder-specific controls

Best answer: A

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: Stakeholder resistance mapping is used to surface likely objections and convert them into practical mitigation actions with clear owners. In AIPGF, that typically means strengthening transparency, HITL review, and decision rights—not working around them. Bypassing an accountable approver undermines trust and assurance, especially for data handling and auditability concerns.

Resistance mapping helps you anticipate objections (e.g., privacy, auditability, accuracy, job impact) and plan proportionate mitigations that increase confidence while keeping governance intact. In an AI-assisted reporting context, common mitigations include clarifying decision rights, documenting how the tool is used, and demonstrating HITL checks so stakeholders can see how errors and sensitive data are controlled.

Good resistance-mitigation actions typically:

  • assign engagement activities and owners
  • demonstrate transparency and review/audit steps
  • update governance artifacts (e.g., AI Assistance Plan, decision log) to reflect agreed controls

The anti-pattern is using “resistance” as a reason to bypass accountable approval or assurance roles, which increases risk and damages stakeholder trust.

Resistance mapping should drive targeted engagement and controls, not bypass accountable approvers for AI-assisted outputs.


Question 6

Topic: Roles, Responsibilities, and Accountabilities

A project team is using a generative AI tool to draft requirements and stakeholder communications. During an internal review, the steering group finds that some outputs were produced using unapproved personal accounts, prompts and sources were not recorded, and several documents were issued without any named Human-in-the-Loop (HITL) reviewer signing off. Stakeholders now question whether the deliverables can be trusted, and the expected time-saving benefits have not materialised.

What is the most likely underlying cause of this AI governance breakdown?

  • A. Prompts and sources were not documented in a central repository
  • B. Stakeholders were not engaged early enough about AI use
  • C. AI roles, decision rights, and HITL accountabilities were not defined and agreed
  • D. The AI tool’s output quality is too inconsistent for project use

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The symptoms point to missing ownership and accountability for governing AI-assisted work. In AIPGF, named roles and decision rights (including who provides HITL approval) enable controls such as approved-tool use, traceable prompts/sources, and release gates for AI-assisted deliverables. Without that role clarity, governance activities become optional and benefits and trust deteriorate.

This scenario shows multiple control failures at once (unapproved tool use, no prompt/source traceability, and no HITL sign-off), which most strongly indicates a roles-and-accountabilities gap rather than a single process slip. Under AIPGF, governing AI use requires explicit assignment of who is accountable for AI usage decisions and who must review/approve AI-assisted outputs (HITL), alongside responsibilities for documenting prompts/sources and enforcing approved tools. When decision rights and approvers are not defined and accepted (for example, missing/weak RACI for AI-assisted deliverables and no clear owner for the AI Assistance Plan), teams will use convenient tools, skip documentation, and publish outputs without proper assurance—leading to stakeholder distrust and unrealised benefits. The key fix starts with clarifying roles, responsibilities, and approval gates, then embedding the supporting controls.

Without clear accountable roles and approval rights, tool use, prompt logging, and HITL sign-off are not consistently enforced.


Question 7

Topic: Roles, Responsibilities, and Accountabilities

During Activation, a project team starts using a new generative AI tool to draft requirements and meeting summaries. The project manager enables access by uploading an export from the customer CRM (includes personal data) and does not obtain documented approval from the tool owner/AI governance authority or the data owner (the AI Assistance Plan shows approvals as “TBD”). An internal assurance review is scheduled next week.

What is the most likely near-term impact of this omission?

  • A. Gradual reduction in output quality as the model drifts over time
  • B. Long-term skills erosion as staff become dependent on AI assistance
  • C. Delayed value realisation because benefits tracking cannot be completed
  • D. An immediate assurance/audit finding due to missing authorised approvals

Best answer: D

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: Approving AI tools and approving data access are distinct decision rights that must be explicitly owned and evidenced. Because the team used an unapproved tool and uploaded sensitive CRM data without the data owner’s approval, the project cannot demonstrate compliant use when assurance occurs. The near-term consequence is an auditability gap that damages trust and increases immediate risk exposure.

In AIPGF, the project must clarify who is accountable for (1) approving the AI tool for use in the project context and (2) authorising access and permitted use of specific datasets (especially personal or confidential data). Those approvals should be documented (e.g., in an AI Assistance Plan/decision log) so the project can evidence governance, controls, and accountability.

Here, the omission is not about whether the AI outputs are “good,” but that the project cannot show authorised decision-making for tool selection and data handling. With an assurance review imminent, the most likely near-term impact is an immediate audit/assurance finding (and potential pause or rework) due to missing approvals and weak audit trail, undermining stakeholder trust.

Without documented tool and data-access approvals, the team cannot evidence decision rights and controls, triggering a near-term auditability and trust issue.


Question 8

Topic: Roles, Responsibilities, and Accountabilities

A programme team is using a generative AI assistant to draft stakeholder emails, meeting minutes, and a first cut of the Business Case summary. At the Activation governance gate, the sponsor asks for evidence that there will be no accountability gaps if an AI-assisted output is wrong or misleading.

Which artifact/evidence would BEST validate that AI-assisted outputs are linked to a named human owner and approver?

  • A. A risk register entry stating “AI hallucinations possible” with a generic mitigation
  • B. An approved AI Assistance Plan listing each AI-assisted output with named owner and named approver
  • C. An AIPG-CMM assessment score showing the organisation’s AI maturity level
  • D. A Benefits Tracker showing time saved from using the AI assistant

Best answer: B

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: To prevent accountability gaps, governance must explicitly tie each AI-assisted output to a specific person who owns it and a specific person who approves it at the relevant gate. An approved AI Assistance Plan that names the owner and approver for each AI-assisted deliverable provides clear, auditable decision rights and accountability for AI-assisted content.

The control being validated is accountability: AI-assisted outputs must have a named human owner (responsible for quality and fitness for purpose) and a named approver (accountable decision right to accept/release the output). The most convincing evidence is an artifact that explicitly maps AI-assisted deliverables to those named individuals and is approved at the governance gate.

An AI Assistance Plan is designed to document how AI will be used on the project/programme, including what outputs are AI-assisted and what HITL checks and approvals apply. When it includes named owners and approvers per output and is formally approved, it closes the “who is accountable?” gap that can arise with non-deterministic AI tools.

Measures of value, generic risks, or maturity ratings may be useful, but they do not, by themselves, demonstrate assigned accountability for specific AI-assisted outputs.
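The sponsor's "no accountability gaps" request reduces to a simple completeness check on the plan: every AI-assisted output must map to a named owner and a named approver. A minimal sketch under that assumption (the field names and deliverables are hypothetical, not an AIPGF schema):

```python
# Illustrative sketch: flag AI-assisted outputs in a plan excerpt that lack
# a named owner or approver. Field and deliverable names are hypothetical.

plan_outputs = [
    {"output": "Business Case summary", "owner": "J. Patel", "approver": "SRO"},
    {"output": "Stakeholder emails", "owner": "M. Chen", "approver": None},
    {"output": "Meeting minutes", "owner": None, "approver": "PgM"},
]

def accountability_gaps(outputs):
    """Return the outputs missing a named owner or a named approver."""
    return [o["output"] for o in outputs if not o["owner"] or not o["approver"]]

gaps = accountability_gaps(plan_outputs)
print(gaps)  # outputs that would fail the sponsor's accountability check
```

An empty result is the evidence the gate is looking for; any non-empty result shows precisely which deliverables would have no accountable human if their content were wrong.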

This directly assigns human ownership and approval decision rights for each AI-assisted deliverable, preventing accountability gaps.


Question 9

Topic: Roles, Responsibilities, and Accountabilities

A programme team engages an external vendor’s generative AI tool to draft requirements and a benefits case from internal workshop notes. To “move fast,” a shared service account is created with access to the full project SharePoint library, and analysts are told they can upload any files needed.

Within weeks, the programme reports these symptoms:

  • No documented HITL approval for AI-produced deliverables
  • Prompts and data uploads are not recorded
  • A business sponsor challenges whether confidential material was exposed, reducing trust
  • Some staff used the vendor tool directly outside the agreed onboarding route
  • Expected time savings are not being realised due to rework and escalations

Which is the most likely underlying cause of this AI governance breakdown?

  • A. Stakeholders were insufficiently trained to trust AI outputs
  • B. HITL sign-off was skipped on AI-assisted deliverables
  • C. Third-party data access was not governed with least privilege and clear approvals
  • D. Prompt documentation was not maintained in an agreed repository

Best answer: C

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: The pattern points to weak third-party data-sharing governance: broad vendor access via a shared account and no explicit approval route for what data can be shared. That breaks least-privilege and auditability expectations, which then drives unapproved tool use, missing evidence for HITL controls, and loss of stakeholder trust. Fixing prompts or training alone would not address the underlying access/approval gap.

In AIPGF, engaging external parties requires explicit decision rights and controls for what data leaves the organisation, who can share it, and how access is granted and monitored. Here, a shared service account with full repository access and “upload anything” guidance indicates least-privilege and approval steps were not defined or enforced (typically in the AI Assistance Plan and procurement/vendor onboarding).

When third-party data access is not properly governed, downstream symptoms commonly appear:

  • people bypass the intended onboarding route
  • evidence is missing (uploads, prompts, approvals)
  • HITL controls become inconsistent because ownership is unclear
  • stakeholder trust drops due to uncertainty about confidentiality

The root issue is therefore the missing least-privilege, approved data-sharing model for the vendor relationship, not the individual documentation or review lapses.

Without defined decision rights and approval controls for external sharing, ad hoc broad access led to unapproved use, poor auditability, and stakeholder distrust.


Question 10

Topic: Roles, Responsibilities, and Accountabilities

In the APMG AI Project Governance Framework (AIPGF), what best describes an AI Assistance Plan?

  • A. A library of approved prompts and response templates to standardize team communications
  • B. A governance artifact that defines intended AI use, limits, HITL checks, and who approves AI-assisted outputs
  • C. A risk register focused only on AI threats, with owners and mitigations
  • D. Vendor model documentation describing training data, architecture, and performance metrics

Best answer: B

What this tests: Roles, Responsibilities, and Accountabilities

Explanation: An AI Assistance Plan is used to align stakeholders on how AI will be used in the project, what it can and cannot be relied upon to do, and where human accountability remains. It makes responsibilities and approval decision rights explicit, typically by specifying Human-in-the-Loop checkpoints and required evidence for sign-off.

In AIPGF, aligning stakeholder expectations requires an explicit agreement on AI capability, limitations, and responsibility boundaries so that AI-assisted outputs are trusted and appropriately assured. The AI Assistance Plan is the core artifact for this: it documents how AI will support the project and the governance controls around that support.

It typically clarifies:

  • intended use cases and prohibited uses
  • limitations (e.g., accuracy, bias, hallucination risks)
  • HITL review/approval points for AI-assisted deliverables
  • accountable roles and decision rights for sign-off

This goes beyond tool configuration or vendor documentation by making project-specific accountability and assurance expectations explicit.

It is the primary plan used to set expectations about AI capability/limitations and clarify accountability and approval boundaries for AI-assisted deliverables.

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026