AIPGF Foundation: Values, Behaviours, and Culture

Try 10 focused AIPGF Foundation questions on Values, Behaviours, and Culture, with answers and explanations, then continue with PM Mastery.


Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Field              Detail
Exam route         AIPGF Foundation
Topic area         Values, Behaviours, and Culture
Blueprint weight   12%
Page purpose       Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Values, Behaviours, and Culture for AIPGF Foundation. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: this topic carries 12% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use this drill as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Values, Behaviours, and Culture

A portfolio PMO has rolled out AI tools to support multiple projects (drafting reports, requirements, and risk logs). In the last three projects, assurance has found the same issues: AI-generated deliverables released without HITL approval, prompts and sources not recorded, one team used an unapproved public tool, and stakeholders report declining trust after seeing conflicting AI outputs. Benefits tracking also shows less time saved than forecast.

In the latest retrospective, team members say, “We fixed these issues locally, but we never see governance guidance updated for the next project.”

Which is the most likely underlying cause of this repeated AI governance breakdown?

  • A. Retrospectives are not being used to update and standardise AI governance practices across projects
  • B. Stakeholders distrust AI because outputs sometimes conflict
  • C. HITL sign-off is being skipped on AI-assisted deliverables
  • D. Teams are failing to document prompts, sources, and assumptions

Best answer: A

What this tests: Values, Behaviours, and Culture

Explanation: The key clue is that the same control failures recur across several projects even after local fixes. That indicates governance is not adapting based on retrospectives and reviews, so lessons are not being translated into updated artefacts, decision rights, and minimum controls for the next teams. Using reviews to refine governance is a practical expression of the framework’s Adaptability principle and its culture of learning.

When the same AI governance symptoms repeat across projects, the root cause is usually not one missed control but a weak learning-and-improvement mechanism. In AIPGF terms, retrospectives and reviews (including post-stage reviews and assurance findings) should feed back into governance so that practices evolve and become consistent across the programme/portfolio.

In this scenario, teams report fixing issues “locally” but not seeing guidance change, which signals a missing cross-project feedback loop, such as:

  • No scheduled review of AI Assistance Plans, decision logs, and assurance findings
  • No owner/accountability for updating standard controls (e.g., HITL sign-off, prompt logging, approved tools)
  • No communication/enablement so later projects reuse the improved approach

The takeaway is that continuous improvement must be institutionalised, not left to individual teams.

The recurring pattern across multiple projects points to a missing feedback loop in which reviews refine portfolio-level governance, rather than to isolated execution errors.
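
To make the diagnosis concrete, here is a minimal sketch in Python (the data and field names are entirely hypothetical; AIPGF prescribes no tooling) of how a PMO might tally assurance findings across projects. Any finding type that recurs on two or more projects flags a systemic gap rather than a local slip:

from collections import defaultdict

# Hypothetical assurance findings from three project reviews.
findings = [
    {"project": "P1", "type": "missing-HITL-signoff"},
    {"project": "P1", "type": "unlogged-prompts"},
    {"project": "P2", "type": "missing-HITL-signoff"},
    {"project": "P2", "type": "unapproved-tool"},
    {"project": "P3", "type": "missing-HITL-signoff"},
    {"project": "P3", "type": "unlogged-prompts"},
]

# Group the projects affected by each finding type.
projects_by_type = defaultdict(set)
for f in findings:
    projects_by_type[f["type"]].add(f["project"])

# A finding type seen on two or more projects suggests the feedback
# loop is broken, not that one team made a one-off mistake.
for finding_type, projects in sorted(projects_by_type.items()):
    if len(projects) >= 2:
        print(f"Systemic: {finding_type} recurs on {sorted(projects)}")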


Question 2

Topic: Values, Behaviours, and Culture

In the APMG AI Project Governance Framework (AIPGF), what does pragmatic sensibility mean?

  • A. Completing every AIPGF artifact for every project to ensure compliance
  • B. Tailoring governance to achieve controlled AI use, trust, and accountability
  • C. Maximising delivery speed by minimising approvals and oversight
  • D. Applying the same governance rigor regardless of AI risk or context

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: Pragmatic sensibility is the framework’s stance against “checkbox governance.” It means applying proportionate governance that produces demonstrable outcomes: AI use is controlled, stakeholders can trust outputs, and decision rights and accountability are clear. The emphasis is on evidence and assurance that fit the project’s AI risk and context, not on completing paperwork for its own sake.

In AIPGF, pragmatic sensibility is about using governance as a means to an end: ensuring AI-assisted work is controlled, trusted, and accountable. It discourages “checkbox” behaviour where teams produce artifacts or follow steps simply to claim compliance.

Practically, this means selecting the minimum set of governance activities and artifacts that create appropriate assurance for the project’s AI usage, such as:

  • clear decision rights and accountable approvers for AI-assisted outputs
  • evidence of controls (e.g., transparency, HITL review where needed)
  • tailoring depth of governance to AI risk, impact, and organisational maturity

The key takeaway is proportionality and outcomes over volume of documentation.

It emphasises outcome-focused, proportionate controls and evidence rather than completing governance activities as a formality.


Question 3

Topic: Values, Behaviours, and Culture

You are in the Activation stage of a programme and used an approved internal AI assistant to draft a steering committee paper summarising schedule slippage, top risks, and the proposed recovery plan.

Constraints:

  • The paper is classified Confidential and must not be shared outside the organisation.
  • AIPGF requires HITL: any AI-assisted content for governance forums must be verified against source records and have the rationale documented before submission.
  • Stakeholder trust is fragile after a recent AI-generated status error.
  • Tomorrow is the governance gate where the steering committee will decide whether to approve the recovery plan.

What is the BEST next action?

  • A. Discard the AI draft and rewrite manually, skipping the documentation to meet the deadline
  • B. Submit the paper with an “AI-generated draft” disclaimer to save time
  • C. Recreate the paper using a public generative AI to cross-check accuracy
  • D. Verify all AI statements against baselined data, document prompts/rationale and checks, then route for HITL approval before submission

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Accountability in AI-assisted delivery means challenging assumptions, verifying outputs against authoritative sources, and documenting the rationale and evidence trail for approvers. With a governance gate imminent and low stakeholder trust, the safest and most compliant action is to complete verification and record what was checked, then obtain the required HITL approval before submission.

In AIPGF, accountability behaviours during Activation focus on ensuring AI-assisted deliverables are trustworthy and auditable before they influence decisions. Given the HITL requirement and the upcoming governance gate, you should treat the AI draft as unverified working material: challenge its assumptions, validate each claim (dates, variances, risks, recovery actions) against baselined plans and logs, and capture an evidence trail.

Practically, this means:

  • Cross-checking the draft against the schedule baseline, status reports, RAID log, and change decisions
  • Documenting prompts, key assumptions, verification steps, and any corrections in the agreed artifact (e.g., decision log / AI Assistance Plan)
  • Routing the final version for the required human review and approval before submission

This both meets the governance control and helps rebuild stakeholder trust through transparency and demonstrated due diligence.

It applies HITL accountability by validating outputs, recording the rationale/evidence, and obtaining the required approval before the governance gate.
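
As a loose illustration of the verify-then-document behaviour (a sketch only; the fields and values are invented, not AIPGF artefacts), a reviewer could diff each claim in the AI draft against the baselined records and keep the result as the evidence trail for the HITL approver:

# Hypothetical: claims extracted from the AI draft vs. baselined records.
ai_draft = {"slip_weeks": 4, "top_risk": "supplier delay", "budget_var_pct": -3}
baseline = {"slip_weeks": 6, "top_risk": "supplier delay", "budget_var_pct": -3}

evidence_trail = []  # what was checked, recorded for the approver
for field, claimed in ai_draft.items():
    actual = baseline[field]
    ok = claimed == actual
    evidence_trail.append((field, "OK" if ok else f"MISMATCH draft={claimed} source={actual}"))

for field, status in evidence_trail:
    print(f"{field}: {status}")
# Every MISMATCH is corrected, and the check itself is logged, before sign-off.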


Question 4

Topic: Values, Behaviours, and Culture

A programme team uses generative AI to draft user stories and produce meeting-minute summaries. The current governance requires a weekly assurance panel to approve each AI-assisted deliverable, causing a growing queue; some team members have started using the tool “off-book” to meet sprint deadlines.

You are asked to recommend whether governance is too heavy or too light and adjust it pragmatically. Which artifact would provide the BEST evidence base to tailor the controls proportionately while maintaining trust and assurance?

  • A. AIPG-CMM assessment of organisational AI governance maturity
  • B. Benefits Tracker showing time saved from AI assistance
  • C. AI Assistance Plan with risk-tiered controls and HITL checkpoints
  • D. RACI chart for who creates and reviews deliverables

Best answer: C

What this tests: Values, Behaviours, and Culture

Explanation: To right-size governance you need evidence that connects each AI-assisted activity to its risk level and the corresponding assurance controls. An AI Assistance Plan captures the intended AI use, data sensitivity, approval approach, and HITL checkpoints, making it possible to justify simplifying reviews where risk is low and strengthening them where risk is higher.

Governance being “too heavy” often shows up as delays, workarounds, and shadow AI use; “too light” shows up as unclear decision rights, unreviewed AI outputs, or unmitigated risks. The most useful evidence for adjusting governance is an artifact that explicitly ties AI use cases to risk and documents the proportionate controls to apply.

An AI Assistance Plan provides that evidence by recording, for each AI-assisted deliverable:

  • what the AI is used for and what data is involved
  • the risk/impact level and required approvals
  • the HITL review points and acceptance criteria
  • monitoring/feedback expectations and review cadence

This creates an auditable rationale to streamline or strengthen controls without losing transparency or accountability.

It links specific AI use cases to risk and documents the tailored control set (including HITL), providing an auditable basis to lighten or tighten governance.
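
Risk-tiered controls can be pictured as a simple lookup, sketched below in Python (the tiers and control names are hypothetical, not taken from AIPGF): low-risk uses keep a lightweight review while higher-risk uses retain the full panel, which is exactly the proportionality an AI Assistance Plan lets you defend.

# Hypothetical risk tiers mapped to proportionate controls.
CONTROLS_BY_TIER = {
    "low":    ["peer review"],                          # e.g., meeting minutes
    "medium": ["peer review", "PM HITL sign-off"],      # e.g., user stories
    "high":   ["peer review", "PM HITL sign-off",
               "assurance panel"],                      # e.g., external deliverables
}

def required_controls(risk_tier: str) -> list[str]:
    # Return the control set proportionate to the deliverable's risk tier.
    return CONTROLS_BY_TIER[risk_tier]

print(required_controls("low"))   # lightweight: no weekly panel queue
print(required_controls("high"))  # full assurance retained where it matters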


Question 5

Topic: Values, Behaviours, and Culture

You are in the Foundation stage of a programme using an AI assistant to help draft workshop notes, requirements summaries, and stakeholder emails.

Constraints:

  • Only the organisation’s approved internal AI tool may be used.
  • Client information is classified Confidential and must not be pasted into AI prompts.
  • A HITL review is mandatory before any AI-assisted output is shared externally.
  • The sponsor has asked for clear norms to protect stakeholder trust before the Activation gate.

The team has a draft AI Assistance Plan, but it does not yet define review practices, confidentiality handling, or prompt hygiene. What is the best next action?

  • A. Run a team working session to define AI norms and update the AI Assistance Plan, then obtain gate approval
  • B. Allow any generative AI tool if no client names are included in the prompt
  • C. Begin using the AI tool now and capture lessons learned after the first deliverables are sent
  • D. Ask the AI tool to draft requirements using full client documents, then have humans lightly edit

Best answer: A

What this tests: Values, Behaviours, and Culture

Explanation: Before Activation, the priority is to set responsible ways of working so AI assistance is predictable, auditable, and trusted. Updating the AI Assistance Plan with team-agreed norms (review workflow, confidentiality rules, and prompt hygiene) operationalises HITL and the approved-tool/data constraints. Getting the plan approved at the gate establishes clear accountability and decision rights before outputs are produced.

In the Foundation stage, AIPGF governance focuses on setting clear, shared norms for how AI will be used so the team can work safely and consistently. Given the constraints (approved tool only, Confidential data restrictions, mandatory HITL, and a sponsor trust concern), the best next action is to formalise and agree the operating rules and evidence in the AI Assistance Plan and take it through the Activation gate.

Team norms to capture typically include:

  • What “HITL review” means in practice (who reviews, what checks, and approval authority)
  • Confidentiality and data-handling rules (no pasting Confidential content; use summaries/synthetic data)
  • Prompt hygiene (avoid sensitive identifiers, use secure templates, record key prompts/assumptions)

This aligns with Human-Centricity (humans remain accountable), Transparency (documented usage and reviews), and Adaptability (norms can be updated as learning emerges) without breaching the stated constraints.

This establishes agreed review, confidentiality, and prompt-hygiene norms with clear approvals before Activation, meeting HITL and tool/data constraints.


Question 6

Topic: Values, Behaviours, and Culture

A project team has piloted an AI assistant to draft weekly status reports and action logs. Uptake is low, and stakeholder feedback includes: “I don’t trust what it produces,” “Who is accountable if it’s wrong?”, “It adds extra checking work,” and “Could it expose confidential data?”

At the next AIPGF Activation gate, the PMO asks for evidence that the team has identified the main sources of resistance to adoption and has planned targeted responses.

Which artifact best validates this readiness?

  • A. Clarification Questions Log capturing stakeholder concerns and agreed responses
  • B. Risk register listing “resistance to AI” with a generic training mitigation
  • C. Decision log documenting the AI tool selection and configuration choices
  • D. Benefits Tracker showing hours saved and forecast adoption rates

Best answer: A

What this tests: Values, Behaviours, and Culture

Explanation: To validate readiness to manage adoption resistance, you need evidence that the team has surfaced and understood stakeholders’ underlying concerns. A Clarification Questions Log makes those resistance drivers explicit (e.g., trust, accountability, added effort, data/IP worries) and records agreed actions to address them, supporting Human-Centricity and Transparency.

A common cause of failed AI adoption is treating resistance as a “communications problem” instead of diagnosing the underlying concerns. In AIPGF terms, readiness to scale AI assistance should be supported by evidence that the team has actively elicited and validated stakeholder concerns, then translated them into targeted responses (e.g., HITL checking expectations, accountable approvers, data handling rules, training).

A Clarification Questions Log is purpose-built to capture:

  • What users and stakeholders are worried about
  • Why they are worried (root cause)
  • What assurance/control or change will address it
  • Who agreed and when

This provides concrete, auditable evidence of the sources of resistance and the plan to reduce them, rather than just reporting outcomes or technical decisions.

It directly evidences the specific trust, accountability, workload, and confidentiality concerns driving resistance and how they will be addressed.
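
Sketched as a record in Python (hypothetical fields; AIPGF does not mandate a schema), one log entry carries exactly the evidence the gate is asking for: the concern, its root cause, the agreed response, and who agreed it when.

from dataclasses import dataclass
from datetime import date

@dataclass
class ClarificationEntry:
    concern: str     # what the stakeholder is worried about
    root_cause: str  # why: the underlying driver of resistance
    response: str    # the targeted control or change agreed
    agreed_by: str   # who agreed the response
    agreed_on: date  # when it was agreed

entry = ClarificationEntry(
    concern="Who is accountable if the AI output is wrong?",
    root_cause="No named approver for AI-assisted reports",
    response="PM named as HITL approver in the AI Assistance Plan",
    agreed_by="Programme sponsor",
    agreed_on=date(2026, 5, 1),  # illustrative date
)
print(entry)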


Question 7

Topic: Values, Behaviours, and Culture

A small internal change project plans to use a third-party generative AI tool to draft meeting minutes and first-pass stakeholder emails. No customer data is involved, but some content may reference internal plans.

The sponsor asks for proportionate evidence, before moving from Foundation to Activation, that the team is ready to use AI safely and consistently (including Human-Centricity and HITL review).

Which artifact/evidence best validates this readiness in context?

  • A. A Clarification Questions Log capturing stakeholder concerns and Q&A from workshops
  • B. A Benefits Tracker showing time saved by using AI on early drafts
  • C. A completed AIPG-CMM assessment for the whole organisation’s AI maturity
  • D. An approved AI Assistance Plan defining permitted data, HITL reviews, and accountable approvers

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Readiness evidence should show that the AI use is governed in a practical, context-specific way before Activation. An approved AI Assistance Plan documents what AI is allowed to do, what data can be used, how HITL review happens, and who can approve AI-assisted outputs. That is the most direct, proportionate validation that controls are defined and owned.

Pragmatic sensibility in AIPGF means using just enough governance to create trust and control for the actual risk and AI use case, not producing impressive but indirect artefacts. In this scenario, the key readiness question is whether the team has agreed boundaries and accountabilities for using a third-party AI tool with internal information.

An AI Assistance Plan is purpose-built to validate this: it can evidence permitted/prohibited inputs, required Human-in-the-Loop checks before sending communications, transparency expectations (e.g., disclosure or labelling), and named approvers/decision rights. This provides auditability and a clear basis for consistent behaviour during Activation, unlike artefacts that measure outcomes or capture discussion without formalised control ownership.

This directly evidences agreed, risk-based controls and decision rights for the specific AI use before Activation.


Question 8

Topic: Values, Behaviours, and Culture

You are reviewing a draft AI Assistance Plan for a programme using generative AI to draft status reports and requirements summaries.

Exhibit: AI Assistance Plan excerpt (draft)

Section: Values
- Core values: Human-Centricity, Transparency, Adaptability, Curiosity
- Purpose: "These values are the mandatory stage-gate checks in Activation."
Action owner: PMO analyst

Which is the BEST next action based on AIPGF Core Values and their purpose?

  • A. Remove the Values section because values apply only to technical AI development teams
  • B. Replace the list with the five Core Values and restate them as behavioural/cultural guides for AI use and decisions
  • C. Keep the purpose statement and move the list to the Principles section
  • D. Keep the list and add Sensibility to complete the Core Values

Best answer: B

What this tests: Values, Behaviours, and Culture

Explanation: The exhibit confuses AIPGF Principles with Core Values and misstates what values are for. The five Core Values are Accountability, Sensibility, Collaboration, Curiosity, and Continuous Improvement, and they exist to shape behaviours and culture so people make better governance decisions when using AI in projects.

In AIPGF, Core Values are not the same as the Principles (Human-Centricity, Transparency, Adaptability). The Core Values—Accountability, Sensibility, Collaboration, Curiosity, and Continuous Improvement—are used to set expectations for how people behave when selecting, using, approving, and learning from AI-assisted outputs across the life cycle.

Given the excerpt, the best next action is to correct both:

  • the list (replace the Principles with the five Core Values), and
  • the purpose statement (values guide culture/behaviours and decision-making; they are not “stage-gate checks”).

This strengthens consistent governance behaviours rather than turning values into a procedural checklist.

AIPGF Core Values are Accountability, Sensibility, Collaboration, Curiosity, and Continuous Improvement, and they guide behaviours and culture rather than acting as stage-gates.


Question 9

Topic: Values, Behaviours, and Culture

During the Activation stage of a business change programme, a workstream starts using a public generative AI tool to draft deliverables (stakeholder updates, requirements summaries, and draft risks). Within a month, assurance finds: no HITL sign-off recorded on AI-assisted outputs, prompts and sources are undocumented, the tool was never approved in the AI Assistance Plan, stakeholders now distrust the outputs after several corrections, and expected time-saving benefits have not materialised due to rework.

Which is the most likely underlying cause of this governance breakdown?

  • A. HITL approvers were too busy to review AI-assisted deliverables
  • B. Stakeholders are generally opposed to AI use in projects
  • C. The AI tool cannot retain prompts or produce an audit trail
  • D. An unresolved value conflict prioritised curiosity and speed over responsible risk management

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: The symptoms point to a culture and behaviour issue: the team acted as if rapid experimentation mattered more than governing risk, transparency, and human accountability. When curiosity/speed overrides agreed Core Values, people skip HITL sign-off, avoid documenting prompts, and use unapproved tools, creating rework and eroding trust. This is a value conflict that was not surfaced and resolved.

In AIPGF terms, this is best diagnosed as a values-and-behaviours failure: an unaddressed tension between “move fast/experiment” and “protect people, data, and trust.” When curiosity or delivery pressure dominates, teams commonly bypass Human-Centricity (HITL accountability) and Transparency (documented prompts/sources and clear approvals), which then drives stakeholder distrust and benefit shortfalls through rework.

A responsible resolution is to make the value trade-off explicit and re-balance behaviours, for example:

  • Reconfirm risk appetite and decision rights for AI-assisted outputs
  • Reinstate HITL sign-off and prompt/source documentation as non-negotiables
  • Update/enforce the AI Assistance Plan (approved tools and permitted uses)

The key takeaway is that repeated control bypasses across multiple areas usually indicate a cultural/value conflict, not a single process bottleneck or tool limitation.

The pattern of bypassed approvals, weak transparency, and rework is best explained by behaviours that reward experimentation over Human-Centric, transparent assurance.


Question 10

Topic: Values, Behaviours, and Culture

You are the project manager for a customer-facing digital programme in the Activation stage. The team wants to trial a generative AI assistant to turn internal meeting notes into a weekly stakeholder highlight report.

Constraints: the notes can include commercially sensitive information and occasional personal data; no AI tool is yet approved for processing such content; the sponsor wants time savings within two weeks. You want to demonstrate “curiosity” (test safely, ask better questions, learn limitations) while maintaining AIPGF governance (Human-Centricity, Transparency, HITL).

Which action best balances ethical, efficient, and effective AI use with required controls?

  • A. Stop all trials until an enterprise-approved AI tool exists
  • B. Use a public chatbot on full notes; rely on quick PM review
  • C. Procure and automate summaries now to maximise time savings
  • D. Run a redacted-data pilot, document limits, keep HITL sign-off

Best answer: D

What this tests: Values, Behaviours, and Culture

Explanation: Curiosity in AIPGF means learning through controlled experimentation, not uncontrolled exposure of sensitive data. A redacted, time-boxed pilot with clear evaluation questions, documented limitations, and HITL approval supports Transparency and Human-Centricity while still delivering evidence quickly. It also creates auditability for a later go/no-go decision on wider use.

The core behaviour is “safe curiosity”: probe what AI can and cannot do while keeping governance proportionate to the risk of the use case. Here, the risk is driven by sensitive and personal data, uncertainty in AI outputs (hallucination/omission), and unclear decision rights for publishing AI-assisted content.

A balanced approach is to run a short pilot that:

  • Uses synthetic/redacted inputs (minimise data exposure)
  • Sets test questions and success criteria (learn limitations deliberately)
  • Records results and constraints in an AI Assistance Plan/decision log (Transparency)
  • Keeps a human accountable for review/approval before sharing externally (HITL, Human-Centricity)

This achieves learning and near-term efficiency without bypassing necessary safeguards or creating unmanaged organisational risk.

It tests capabilities safely using controlled data, captures limitations transparently in an AI Assistance Plan/decision log, and preserves accountable human review before release.
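
A minimal redaction sketch in Python (the patterns are illustrative only; a real pilot would use approved redaction or anonymisation tooling plus human review of the redacted text) shows the spirit of controlled data in, learning out:

import re

# Hypothetical patterns; real pilots need approved tooling and a human
# check of the redacted text before anything reaches the AI tool.
PATTERNS = {
    "EMAIL":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AMOUNT": re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d+)?"),
}

def redact(text: str) -> str:
    # Replace sensitive tokens with placeholders before the pilot.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

notes = "Contact jane.doe@client.com about the £250,000 overrun."
print(redact(notes))  # Contact [EMAIL] about the [AMOUNT] overrun.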

Continue with full practice

Use the AIPGF Foundation Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AIPGF Foundation guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026