AIPGF Practitioner: Framework Structure and Controls

Try 10 focused AIPGF Practitioner questions on Framework Structure and Controls, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Exam route: AIPGF Practitioner
Topic area: Framework Structure and Controls
Blueprint weight: 13%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Framework Structure and Controls for AIPGF Practitioner. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

First attempt
  What to do: Answer without checking the explanation first.
  What to record: The fact, rule, calculation, or judgment point that controlled your answer.

Review
  What to do: Read the explanation even when you were correct.
  What to record: Why the best answer is stronger than the closest distractor.

Repair
  What to do: Repeat only missed or uncertain items after a short break.
  What to record: The pattern behind misses, not the answer letter.

Transfer
  What to do: Return to mixed practice once the topic feels stable.
  What to record: Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Framework Structure and Controls

A product team is using a GenAI assistant to draft responses for a customer complaints mailbox. To maintain trust, they add an “AI-used” tag on messages drafted with GenAI, retain prompts/outputs in a decision log for audit, and provide customers with a clear route to request human review of any disputed response.

Which AIPGF element is this practice primarily demonstrating?

  • A. Life cycle stage intent: Evaluation
  • B. Principle: Adaptability
  • C. Principle: Human-Centricity
  • D. Principle: Transparency

Best answer: D

What this tests: Framework Structure and Controls

Explanation: The described practice focuses on making AI assistance explicit to stakeholders and creating traceability through records that support assurance. In AIPGF, that aligns most directly to the Transparency principle because it emphasizes disclosure, explainability-by-evidence (logs), and auditability of AI-assisted work.

This scenario is primarily about visibility and traceability of AI assistance: tagging when GenAI was used, retaining prompts/outputs in a decision log, and enabling an auditable route to human review. In AIPGF, those actions most directly express the Transparency principle, which expects teams to be open about AI use and to keep evidence that supports scrutiny, assurance, and accountability.

Typical Transparency evidence includes:

  • Disclosure of AI involvement to relevant stakeholders
  • Traceable records (e.g., AI decision logs, approvals)
  • Clear escalation/appeal paths for contested outcomes

Human-Centricity is also supported by offering human review, but the dominant intent here is making AI use and rationale auditable and understandable.

It makes AI involvement visible and traceable through disclosure, logging, and an auditable review path.


Question 2

Topic: Framework Structure and Controls

A customer-service team is in the Activation stage of deploying a GenAI agent to draft replies. Internal Audit requires strict, replayable evidence of who approved the agent’s new auto-send capability, why it was approved, and what conditions (monitoring and rollback) were set.

Which action best meets this need using an AIPGF governance artifact?

  • A. Create a decision log entry capturing decision, rationale, approvers, and follow-up actions
  • B. Add the auto-send risk to the risk log with mitigation and residual risk
  • C. Update the benefits tracker to reflect faster response times and expected cost savings
  • D. Document the change in sprint notes and capture stakeholder comments in meeting minutes

Best answer: A

What this tests: Framework Structure and Controls

Explanation: Strict auditability requires a durable, structured record of the governance decision and its decision rights. A decision log captures the decision, rationale, approvals, and any conditions or actions (such as monitoring triggers and rollback ownership) so an auditor can reconstruct what was decided and why.

When strict auditability is the main discriminator, the priority is an evidence trail that reconstructs AI-related governance decisions end-to-end (Transparency). The AIPGF decision log is designed for this: it records what decision was taken (e.g., enabling auto-send), who had decision rights and approved it, the rationale and constraints, and the follow-up actions and owners (e.g., monitoring thresholds, HITL exceptions, rollback plan).

A strong entry typically includes:

  • Decision statement and scope (what changed)
  • Rationale and key evidence considered
  • Approvers/sign-offs and decision date
  • Conditions, actions, owners, and due dates
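As an illustration only, the entry fields above could be represented as a simple structured record. The field names and example values here are assumptions for the sketch, not AIPGF-mandated schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionLogEntry:
    """Illustrative decision-log record; field names are assumptions."""
    decision: str            # decision statement and scope (what changed)
    rationale: str           # rationale and key evidence considered
    approvers: list[str]     # sign-offs with decision rights
    decided_on: date         # decision date
    conditions: list[dict]   # follow-up actions with owners and due dates

# Hypothetical entry for the auto-send approval in this scenario
entry = DecisionLogEntry(
    decision="Enable auto-send for the GenAI reply agent (low-risk queues only)",
    rationale="Pilot accuracy met threshold; monitoring and rollback plan agreed",
    approvers=["Service Owner", "Head of Risk"],
    decided_on=date(2026, 5, 1),
    conditions=[{"action": "Monitor complaint rate weekly",
                 "owner": "Ops Lead", "due": "ongoing"}],
)
```

A record like this lets an auditor replay who approved what, why, and under which conditions, which is exactly the evidence the question's scenario demands.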

Other artifacts can reference this decision, but they do not replace the decision log as the primary auditable record.

A decision log provides an auditable record of the AI governance decision, approvals, rationale, and assigned follow-up actions needed for replayability.


Question 3

Topic: Framework Structure and Controls

A retail bank is in the Activation stage for a GenAI “drafting assistant” that proposes customer email text for fee disputes. The bank classifies it as high-risk because outputs could create compliance breaches and may involve customer PII.

Constraints:

  • HITL required: a human case handler must approve every outbound message.
  • Auditability: internal audit will review evidence of tool use, data handling, and approvals.
  • Tool/data limits: only pre-approved enterprise GenAI tools may be used; no copying PII into prompts.
  • Time pressure: pilot must start in 4 weeks.

What is the BEST next governance action?

  • A. Start the pilot and document decisions retrospectively
  • B. Rely on the vendor’s standard security attestation only
  • C. Draft and approve an AI Assistance Plan for the pilot
  • D. Focus on prompt optimization and tone guidelines first

Best answer: C

What this tests: Framework Structure and Controls

Explanation: A high-risk Activation-stage pilot needs a documented, auditable control baseline before use. Creating and getting approval for an AI Assistance Plan is the stage-appropriate action because it explicitly defines objectives and scope, permitted tools, data/PII rules, HITL oversight, and monitoring/logging needed to operate safely under audit and time pressure.

In AIPGF, an AI Assistance Plan is a core governance artifact used to make AI assistance controlled, transparent, and auditable. In the Activation stage—especially for high-risk use and constrained tools/data—the best next step is to define and obtain approval for how the team will use GenAI before the pilot begins. The plan should capture, at minimum:

  • Objectives and in/out of scope use cases
  • Approved tools/environments and prohibited tools
  • Data rules (including PII handling and prompt constraints)
  • Oversight and decision rights (HITL workflow and sign-offs)
  • Monitoring and evidence (logs, quality checks, incident triggers)

This enables consistent execution and provides the evidence internal audit will expect, while supporting Human-Centricity and Transparency through clear accountability and review points.

An AI Assistance Plan establishes objectives, scope, approved tools, data rules, oversight/HITL, and monitoring evidence needed for a high-risk, auditable pilot.


Question 4

Topic: Framework Structure and Controls

A bank is piloting a GenAI assistant to draft replies for customer complaints. Humans will still approve every message before sending, but Internal Audit has set a strict auditability condition for the pilot: the bank must be able to evidence what AI assistance was used, by whom, and why key wording decisions were accepted.

Which AIPGF action/deliverable best addresses this condition and correctly links an AIPGF Principle to a practical governance output?

  • A. Allow rapid prompt changes by decentralising control to each squad
  • B. Tighten data access controls and minimise personal data in prompts
  • C. Implement an AI Assistance Plan and a Decision Log capturing AI use and approvals
  • D. Run customer co-design workshops to validate tone and accessibility needs

Best answer: C

What this tests: Framework Structure and Controls

Explanation: The decisive discriminator is auditability, which is best met by AIPGF Transparency being turned into concrete, reviewable evidence. An AI Assistance Plan sets agreed boundaries and accountabilities for how AI may be used, while a Decision Log provides traceability of material AI-assisted choices and approvals. Together they create the assurance evidence needed to pass a governance gate under strict audit expectations.

This scenario is primarily about the AIPGF Principle of Transparency and how it is expressed through governance work products. When auditability is the constraint, AIPGF expects teams to translate the principle into core behaviours (e.g., documenting and justifying decisions) and activities (capturing traceable evidence during delivery), resulting in deliverables that an independent party can review.

A practical implementation is:

  • Define permitted AI use, constraints, and decision rights in an AI Assistance Plan
  • Record material AI-assisted decisions (who/when/what/why, prompt/model version, human approval) in a Decision Log

Other good practices may still be needed, but they do not directly satisfy the stated audit evidence requirement as effectively as traceable governance deliverables.

This operationalises the Transparency principle via activities that create auditable deliverables (AI Assistance Plan and decision trail) showing who used AI and why outputs were accepted.


Question 5

Topic: Framework Structure and Controls

A project team wants to use a GenAI assistant to help produce customer-facing responses and internal knowledge articles. Internal Audit has said the work must be “traceable,” but no further detail is available yet.

As the AIPGF practitioner, what is the FIRST thing you should clarify to define appropriate traceability controls (prompt logging, source recording, versioning, and decision rationale)?

  • A. What training course schedule the team prefers for prompt engineering
  • B. Which deliverables and decisions will be AI-assisted, and what risk tier/assurance evidence is required for them
  • C. Whether the team can commit to logging every prompt and output for all use cases
  • D. Which GenAI vendor will be used and what its public roadmap is

Best answer: B

What this tests: Framework Structure and Controls

Explanation: Traceability is not “one size fits all”; it must be proportionate to what the AI is influencing and the level of assurance needed. Clarifying which outputs and decisions are AI-assisted, and their risk tier/required evidence, sets the basis for selecting prompt logs, source/provenance capture, versioning, and decision rationale records. Without that, controls may be overbuilt, underbuilt, or mis-scoped.

The core concept is that traceability controls are governance mechanisms that must be defined against a clear scope of AI assistance and the assurance expectations for that scope. In AIPGF terms, you first identify where AI contributes to deliverables and decisions (and who remains accountable), then tailor the evidence needed to demonstrate transparency and auditability.

Practically, clarify:

  • What artifacts are AI-assisted (drafts, final customer responses, approvals)
  • What decisions are influenced by AI and who signs off (HITL decision rights)
  • The risk tier/assurance level expected (what must be reproducible and reviewable)

Once that is known, you can specify proportional controls such as prompt/output logging boundaries, source citation/provenance capture, model/prompt/version identifiers, and decision rationale logs. A “log everything” approach may be infeasible and still not meet the right evidence needs.
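The proportionality idea can be sketched as a simple tier-to-controls mapping. The tier names and control sets below are illustrative assumptions, not AIPGF definitions:

```python
# Illustrative mapping from assessed risk tier to traceability controls.
# Tier names and control lists are assumptions for this sketch only.
TRACEABILITY_BY_TIER = {
    "low": {"prompt_logging", "output_versioning"},
    "medium": {"prompt_logging", "output_versioning", "source_provenance"},
    "high": {"prompt_logging", "output_versioning", "source_provenance",
             "decision_rationale", "model_version_ids", "hitl_signoff"},
}

def required_controls(risk_tier: str) -> set[str]:
    """Return the proportionate control set for a deliverable's risk tier."""
    try:
        return TRACEABILITY_BY_TIER[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")

# A high-risk customer response needs the full evidence set;
# a low-risk internal draft does not.
assert "decision_rationale" in required_controls("high")
assert "decision_rationale" not in required_controls("low")
```

The point of the sketch is the shape of the decision, not the specific lists: controls are derived from the risk tier, which is why the tier must be clarified first.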

Traceability controls must be tailored to the specific AI-assisted outputs/decisions and their risk/assurance needs before choosing what to log, version, and justify.


Question 6

Topic: Framework Structure and Controls

A retail bank is piloting a GenAI assistant that drafts complaint responses for frontline agents. The content will be sent to customers after agent approval (HITL). During a union briefing, employee representatives say they do not trust “AI-written messages” and are concerned agents will be blamed for mistakes they did not write.

The project is about to pass the Activation gate, and the sponsor wants a control/artifact that directly addresses this low stakeholder trust issue by clarifying who is impacted and what they need to know.

Which action best fits this need?

  • A. Renegotiate the vendor contract to add performance SLAs and indemnities for incorrect outputs
  • B. Prioritise privacy risks in the AI risk log and require stronger data minimisation controls
  • C. Define audit evidence requirements and expand decision logging for all AI-assisted outputs
  • D. Create a stakeholder map and insights analysis, then update the engagement plan with tailored transparency messages and accountability expectations

Best answer: D

What this tests: Framework Structure and Controls

Explanation: Low trust is primarily a stakeholder understanding and assurance problem, not a data, audit, or contract problem. The most appropriate governance response is to map who is affected (agents, supervisors, employee reps, customers) and analyse their concerns to define what they need to know. Turning that into a tailored engagement/communications plan increases transparency, sets expectations for HITL accountability, and reduces resistance at Activation.

Stakeholder mapping and stakeholder insights analysis are used to identify impacted stakeholders, understand their concerns, and define what information they need to make informed decisions and maintain trust. In this scenario, the discriminator is low stakeholder trust about AI-authored content and accountability, so the best governance move is to produce (or update) the stakeholder map and capture insights (e.g., fears about blame, workload, quality, and customer perception), then translate them into a concrete engagement and transparency plan.

Practical outputs typically include:

  • Named stakeholder groups and influence/impact assessment
  • Key concerns and “what they need to know” (e.g., when AI is used, limitations, agent accountability, escalation/appeal routes)
  • Agreed messages, channels, timing, and accountable owners

This is more directly responsive to trust and adoption than controls aimed at privacy, auditability, or vendor commercial terms.

It uses stakeholder mapping and insights analysis to identify impacted groups and their information needs, then turns that into targeted, trust-building communications and role clarity.


Question 7

Topic: Framework Structure and Controls

A team is at the Evaluation gate for a GenAI-assisted tool that drafts customer-support email replies. Internal Audit will sample the first month of outputs.

Exhibit: Evaluation gate checklist (excerpt)

Deliverable: AI-assisted email drafts for agents
Accuracy target: TBD
Appropriateness (tone/safety): TBD
Traceability evidence: "Keep prompts in tool" (no decision log)
Compliance: "No PII" (no test evidence)
Gate decision: Pending

Based on the exhibit, what is the best next governance action?

  • A. Approve go-live and refine acceptance criteria after operational monitoring
  • B. Set acceptance criteria only for response accuracy and ignore tone/safety
  • C. Define measurable acceptance criteria across accuracy, appropriateness, traceability, and compliance
  • D. Rely on vendor assurances for compliance and skip internal traceability

Best answer: C

What this tests: Framework Structure and Controls

Explanation: The exhibit shows every required acceptance-criteria dimension is undefined or lacks evidence (e.g., “TBD”, no decision log, and no compliance testing). Under AIPGF assurance, acceptance criteria must be explicit and testable before a gate decision. The appropriate action is to define and obtain approval for criteria that cover accuracy, appropriateness, traceability, and compliance, along with how each will be evidenced for audit sampling.

Acceptance criteria for AI-assisted deliverables are part of assurance evidence: they make “done/acceptable” objective and auditable. In the exhibit, the gate is rightly “Pending” because targets are TBD and the proposed evidence is insufficient (e.g., “keep prompts in tool” without a decision log or reproducible trace, and “no PII” without testing evidence).

Define and agree acceptance criteria that are measurable and evidenced, for example:

  • Accuracy: a threshold and sampling method for agent-verified correctness
  • Appropriateness: tone/safety checks, red-line content, and HITL review rules
  • Traceability: decision log plus reproducible records (prompt/context, model/version, sources)
  • Compliance: documented checks showing PII handling and policy adherence
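The gate logic implied by the exhibit can be sketched as a completeness check: no dimension may remain "TBD" or without named evidence. The data structure and example values are assumptions, with dimension names taken from the exhibit:

```python
# Illustrative gate check. Dimension names follow the exhibit; the
# targets, evidence strings, and data structure are assumptions.
CRITERIA = {
    "accuracy": {"target": "95% agent-verified correctness",
                 "evidence": "weekly sample review"},
    "appropriateness": {"target": "zero red-line tone/safety breaches",
                        "evidence": "HITL review log"},
    "traceability": {"target": "100% of outputs logged",
                     "evidence": "decision log plus prompt/model records"},
    "compliance": {"target": "no PII in prompts",
                   "evidence": "PII scan test results"},
}

def gate_can_close(criteria: dict) -> bool:
    """The gate stays Pending unless every required dimension has a
    measurable target and named evidence (no TBDs, no blanks)."""
    required = {"accuracy", "appropriateness", "traceability", "compliance"}
    return required <= criteria.keys() and all(
        c.get("target") not in (None, "", "TBD") and c.get("evidence")
        for c in criteria.values()
    )
```

Against the exhibit as shown (accuracy and appropriateness "TBD", no decision log, no compliance evidence), a check like this would correctly keep the gate decision at Pending.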

This enables a defensible go/no-go decision and supports Internal Audit sampling.

The gate cannot be closed until explicit, testable acceptance criteria and evidence are agreed for all four required dimensions.


Question 8

Topic: Framework Structure and Controls

A public-sector team is using a GenAI assistant to help draft case notes and recommended next steps for a benefits-eligibility process. The solution will go live next month, and internal audit has stated the primary concern is strict auditability: the organization must be able to show when AI was used, what was approved, and how risks were identified and treated.

Which is the minimum set of governance artifacts that best demonstrates control in this situation?

  • A. AI Assistance Plan, decision log, and risk log
  • B. Benefits register and stakeholder communications plan
  • C. Data inventory and model performance dashboard
  • D. Policy statement on AI ethics and end-user training materials

Best answer: A

What this tests: Framework Structure and Controls

Explanation: Strict auditability requires traceable evidence of how AI is intended to be used, who made/approved key decisions, and how risks were managed. An AI Assistance Plan defines permitted use and controls, a decision log records accountable approvals and changes, and a risk log tracks identification, ownership, and treatment through go-live.

The core concept is selecting a minimum evidence set that proves governance control under audit. With strict auditability as the main discriminator, you need artifacts that create end-to-end traceability across (1) AI usage boundaries and controls, (2) accountable decision-making, and (3) risk management.

A practical minimum set is:

  • AI Assistance Plan to document what AI is used for, constraints (e.g., HITL checks), and required approvals.
  • Decision log to evidence decision rights, approvals, and when/why the approach changed.
  • Risk log to show identified AI risks, owners, mitigations, and residual risk acceptance.

Other artifacts may be useful, but without these three, you cannot reliably demonstrate controlled, auditable use of AI in a high-scrutiny context.

These three artifacts provide auditable evidence of intended AI use/controls, recorded approvals, and tracked risk treatment.


Question 9

Topic: Framework Structure and Controls

A contact-centre team ran a 6-week pilot using GenAI to draft customer email responses. The expected benefits were a 20% reduction in average handling time (AHT) and no increase in rework (emails reopened within 48 hours). Anecdotal feedback is positive, but the operations director wants evidence that will both verify expected value and highlight any benefit shortfalls early before scaling to all teams.

Which artifact/evidence best meets this need?

  • A. AI Assistance Plan describing approved use cases, prompts, and human review steps
  • B. RACI matrix confirming accountability for drafting, review, and escalation
  • C. Updated Benefits Tracker showing baseline, targets, actuals, variance, and corrective actions
  • D. Decision log documenting the pilot approval and the scale-up recommendation

Best answer: C

What this tests: Framework Structure and Controls

Explanation: The best validation is an updated Benefits Tracker that compares baseline performance to target benefits and tracks actuals over time. By making variances explicit and linking them to owners and actions, it provides an early warning mechanism for benefit shortfalls ahead of wider rollout.

In AIPGF governance, benefits need objective, ongoing verification—not just positive sentiment—especially when scaling AI assistance. A Benefits Tracker is purpose-built to demonstrate whether expected value is being realized and to surface shortfalls early through trend and variance tracking.

A fit-for-purpose Benefits Tracker entry would include:

  • Clear benefit measures (e.g., AHT, reopen/rework rate)
  • Baseline, target, and measurement cadence
  • Actuals (weekly/biweekly) with variance and thresholds
  • Named owners and agreed corrective actions when off-track
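The variance check a tracker entry performs is simple arithmetic. A minimal sketch, with illustrative numbers and an assumed sign convention (positive variance means the AHT benefit is off-track):

```python
def benefit_variance(baseline: float, target_reduction: float,
                     actual: float) -> dict:
    """Compare actual AHT against a target derived from the baseline.

    target_reduction is a fraction, e.g. 0.20 for a 20% AHT reduction.
    Positive variance means actual is above target (benefit shortfall).
    """
    target = baseline * (1 - target_reduction)
    variance = actual - target
    return {
        "target": target,
        "variance": variance,
        "off_track": variance > 0,  # trigger corrective action when off-track
    }

# Hypothetical week-3 sample: baseline AHT 10.0 min, 20% reduction target,
# actual 8.5 min. Target is 8.0 min, so the 0.5 min shortfall is flagged
# early, before the scale-up decision.
week3 = benefit_variance(10.0, 0.20, 8.5)
```

This is the mechanism that makes the tracker an early-warning device rather than a retrospective report: each measurement cadence produces a variance that either passes or triggers a corrective action.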

Other governance artifacts can support control and accountability, but they do not validate realized value or provide early detection of benefit underperformance.

A Benefits Tracker provides quantified baselines vs targets with ongoing actuals and variance actions, enabling early detection of benefit shortfalls.


Question 10

Topic: Framework Structure and Controls

A project is at an upcoming governance gate to move a GenAI-enabled service desk assistant from pilot into business-as-usual. The delivery lead says, “We used an LLM across the team to speed up analysis and draft responses,” but cannot describe the exact use, data shared, or who approved it. As the assurance lead, what should you ask for FIRST to assemble evidence that AI assistance is controlled?

  • A. Updated benefits-realisation figures showing time saved by the assistant
  • B. The documented AI use scope and risk-tier decision, with accountable approvals
  • C. A detailed transcript of all prompts and model outputs from the pilot
  • D. The LLM vendor’s latest independent security certification report

Best answer: B

What this tests: Framework Structure and Controls

Explanation: At a governance gate, the first evidence to request is the approved definition of AI assistance in scope and its risk tier, because this establishes what “controlled use” means for this project. Without a recorded scope and classification (and the accountable approvals), you cannot judge whether the right HITL controls, logging, and acceptance evidence are required or missing.

Gate assurance starts with traceable clarity on what AI assistance is being used for and how it is governed. In this scenario, the team cannot yet state the scope (tasks and decision points), data exposure, or approval chain, so the most important first evidence is the documented AI use scope and risk-tier decision with named accountable approvals.

Once that exists, it drives what additional gate evidence is required (for example, HITL decision authority, acceptance criteria for AI-assisted outputs, logging/decision records, and any data-rights constraints). Asking for detailed operational artifacts (like full prompt transcripts) or third-party reports too early can waste time and still leave the project unable to demonstrate that the use was authorised and appropriately controlled.

You first need the approved record of what AI is used for and its risk tier to determine and evidence the required controls at the gate.

Continue with full practice

Use the AIPGF Practitioner Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the AIPGF Practitioner guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026