AIPGF-F Syllabus — Learning Objectives & Topic Map

Blueprint-aligned AIPGF-F learning objectives with quick links to targeted practice by topic.

Use this syllabus as your AIPGF-F coverage checklist. Practice immediately after each section.

What’s covered

Module 1: Foundations of AI Project Governance (AIPGF) (12%)

AIPGF purpose, scope, and terminology

  • Define the AI Project Governance Framework (AIPGF) and its purpose in governing AI use in projects and programmes.
  • Define an AI-assisted project/programme and distinguish AI assistance in project management from AI used in project deliverables.
  • Explain what ethical, efficient, and effective human-AI collaboration means in a project context.
  • Identify what is in-scope for AIPGF governance (AI tools, prompts, data used, AI-assisted decisions, AI-assisted deliverables).
  • Distinguish project-level AI governance from enterprise AI governance and from general project governance.
  • Recognize primary drivers for adopting AIPGF (risk mitigation, compliance alignment, stakeholder trust, project success).
  • Identify common AI governance failure modes in projects (unreviewed outputs, unclear accountability, unmanaged data exposure, overreliance on AI).

AI-assisted projects and human-AI collaboration

  • Describe common ways AI tools assist project work (e.g., drafting a business case/charter, risk assessment, stakeholder mapping, communications).
  • Recognize common AI tool categories used in projects (LLMs, copilots, AI agents, AI project management tools) and what must be governed.
  • Explain the Human-in-the-Loop (HITL) rule and when human review/approval is mandatory.
  • Differentiate human-in-the-loop vs human-on-the-loop oversight and select oversight appropriate to risk/impact.
  • Identify typical AI output risks relevant to projects (hallucination, bias, confidentiality/IP leakage, unsafe recommendations) and their governance implications.
  • Determine when AI assistance is inappropriate for a task due to confidentiality, safety, or decision-criticality.
  • Recognize AI usage red flags in a project (inconsistent outputs, untraceable sources, stakeholder concern) and decide when to escalate.

Governance gap and standards alignment

  • Explain the governance gap between high-level AI standards/regulations and day-to-day project execution.
  • Describe how AIPGF complements, rather than replaces, ISO/IEC 42001 (AI management systems) in a project environment.
  • Describe how AIPGF complements the NIST AI Risk Management Framework (AI RMF) through practical project-level controls.
  • Recognize where regulations (e.g., EU AI Act) influence project governance expectations (risk classification, documentation, oversight).
  • Identify organizational policies that should be activated in an AI-assisted project (data protection, security, acceptable use, procurement).
  • Map a high-level governance requirement to a practical project action or artifact (e.g., approval gate, decision log, audit trail).
  • Explain how governance improves stakeholder trust through transparency and accountability.

Integration with project methods and scalability

  • Explain why AIPGF is methodology-independent and can be integrated with Agile, PRINCE2, PMBOK, and hybrid approaches.
  • Map AIPGF life cycle stages (Foundation, Activation, Evaluation) to a chosen project life cycle (initiate/plan/deliver/close).
  • Scale governance depth based on project size, complexity, risk, and AI adoption maturity (avoid over-governing or under-governing); a tailoring sketch follows this list.
  • Tailor AI governance activities and artifacts to the project context while preserving HITL and accountability.
  • Identify entry/exit criteria for progressing between AIPGF stages within a project.
  • Determine which AI governance decisions should be made once per project vs revisited iteratively as usage evolves.
  • Differentiate structured governance from bureaucracy and keep governance outcomes-focused (controlled AI use, trust, accountability).
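
To make structured scalability concrete, the sketch below maps a project's risk profile to a governance tier, as referenced in the scaling objective above. The tier names, artifact lists, and selection rules are illustrative assumptions, not AIPGF-defined content; note that HITL and accountability are retained at every tier.

```python
# Hypothetical illustration of scaling governance depth by risk tier.
# Tier names, artifacts, and selection rules are assumptions, not AIPGF-defined.

GOVERNANCE_TIERS = {
    "light": {
        "hitl": "key deliverables only",
        "artifacts": ["one-page AI Assistance Plan", "decision log"],
    },
    "standard": {
        "hitl": "all AI-assisted deliverables",
        "artifacts": ["AI Assistance Plan", "decision log", "risk register"],
    },
    "enhanced": {
        "hitl": "all outputs plus independent review",
        "artifacts": ["AI Assistance Plan", "decision log", "risk register",
                      "audit trail", "assurance report"],
    },
}

def select_tier(high_risk: bool, regulated: bool, complex_project: bool) -> str:
    """Pick a governance tier; HITL and accountability apply at every tier."""
    if high_risk or regulated:
        return "enhanced"
    if complex_project:
        return "standard"
    return "light"
```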

Module 2: AI in Projects and Organizations (Context) (12%)

How AI differs and why it matters

  • Explain how AI differs from traditional deterministic computing and what that implies for governance (probabilistic outputs, uncertainty).
  • Distinguish key AI disciplines relevant to project work (machine learning, natural language processing, generative AI) at a non-technical level.
  • Explain why training data, prompts, and context can materially change AI outputs and risk.
  • Identify common sources of AI uncertainty/error (hallucination, bias, context gaps, model changes) and how governance mitigates them.
  • Recognize limits of AI explainability and how to communicate those limits to stakeholders.
  • Distinguish AI assistance used for ideation/drafting from AI used for decision-making/recommendations and how governance differs.
  • Select an appropriate level of technical detail in governance artifacts for different audiences (team, sponsor, assurance/audit).

AI tools, data, and integration dilemmas

  • Identify decision points for selecting AI tools for a project (capability fit, data handling, access controls, auditability, cost).
  • Evaluate whether a tool can be used with sensitive data based on data residency, confidentiality, and access controls (see the compatibility-check sketch after this list).
  • Recognize risks of shadow AI use in projects and select governance controls to prevent it (approved tool list, training, monitoring).
  • Define data provenance in a project context and explain why it matters for assurance of AI-assisted deliverables.
  • Distinguish between public AI services and enterprise-approved AI environments and their governance implications.
  • Identify governance considerations for prompt libraries and prompt reuse (versioning, approvals, sensitive content).
  • Explain how AI agents/task automation changes governance needs (delegation limits, monitoring, rollback, human approvals).
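
One way to picture the tool-and-data evaluation objective above is as a compatibility check against an approved tool list. In the sketch below, the tool names, profile fields, and classification scale are hypothetical; an organization's own policy and approved toolset would supply the real rules.

```python
# Hypothetical tool/data compatibility check; tool names, profile fields,
# and the classification scale are illustrative assumptions.

APPROVED_TOOLS = {
    "enterprise-copilot": {"residency": "EU", "enterprise_tenant": True, "max_classification": "confidential"},
    "public-chatbot": {"residency": "unknown", "enterprise_tenant": False, "max_classification": "public"},
}

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_use_with_data(tool: str, data_classification: str, required_residency: str = "") -> bool:
    """Return True only if the tool is approved and its controls cover the data."""
    profile = APPROVED_TOOLS.get(tool)
    if profile is None:  # not on the approved list: shadow AI risk, reject
        return False
    if required_residency and profile["residency"] != required_residency:
        return False
    return CLASSIFICATION_RANK[data_classification] <= CLASSIFICATION_RANK[profile["max_classification"]]
```

For example, can_use_with_data("public-chatbot", "confidential") returns False, which is the outcome the shadow-AI controls above are designed to enforce.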

Stakeholders and decision contexts

  • Identify key stakeholders in AI-assisted projects (sponsor, PMO, legal/compliance, security, data owners, end users) and their concerns.
  • Determine decision authority and escalation paths for AI governance decisions in a project.
  • Align AI use with business objectives and define success metrics suitable for AI-assisted work.
  • Recognize organizational AI adoption maturity indicators (exploring/piloting/scaling/embedded) and how maturity affects governance depth.
  • Identify common organizational constraints affecting AI use (policies, procurement, regulatory obligations, risk appetite).
  • Manage stakeholder expectations about AI capabilities and limitations to avoid overpromising.
  • Plan stakeholder communications for AI usage benefits and red flags during delivery.
  • Identify common ethical challenges in AI-assisted project work (bias, manipulation, overreliance, accountability gaps).
  • Identify common legal/compliance challenges (privacy, IP/copyright, records retention, sector regulation) and governance responses.
  • Recognize security threats introduced by AI tools (data leakage, prompt injection, model misuse) and required mitigations.
  • Distinguish acceptable and unacceptable uses of AI in project documentation and decision support.
  • Decide when to halt or redesign an AI-assisted approach due to unacceptable risk or lack of assurance.
  • Describe how governance can manage resistance to AI adoption (training, transparency, involvement, phased rollout).
  • Document and justify AI-related decisions in a way that supports auditability and accountability.

Module 3: Framework Structure and Controls (13%)

AIPGF structure and key elements

  • Describe the three AIPGF life cycle stages (Foundation, Activation, Evaluation) and their intent.
  • Identify key AIPGF elements (Principles, Core Values, Core Behaviours, Aims, Activities, Deliverables) and how they relate.
  • Distinguish between aims, activities, and deliverables within an AIPGF stage.
  • Explain how AIPGF supports ethical, efficient, and effective AI assistance across the project life cycle.
  • Identify a minimum set of governance artifacts needed to demonstrate control (e.g., AI Assistance Plan, decision log, risk log).
  • Explain how AIPGF provides structured scalability for projects of varying size, complexity, and risk.
  • Map AIPGF elements to familiar project governance concepts (business case, stage gates, controls, assurance).

Governance controls and mechanisms

  • Establish governance objectives and scope for AI assistance in a project (what AI is allowed to do and where it is used).
  • Define and maintain an approved AI toolset for a project (selection criteria, approvals, exceptions).
  • Define data handling rules for AI usage in the project (classification, access, retention, sharing).
  • Define human oversight controls (HITL checkpoints, review responsibilities, approval thresholds) for AI-assisted outputs.
  • Define traceability controls for AI assistance (prompt logging, source recording, versioning, decision rationale); an example log record follows this list.
  • Define issue/incident controls for AI usage (reporting, containment, rollback, stakeholder notification).
  • Define training and enablement controls so the team can use AI responsibly (guidelines, competency checks).
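
The traceability control above could be supported by a simple prompt-log record. The sketch below shows one possible schema; the field names are assumptions about what prompt logging, source recording, versioning, and decision rationale might capture, not a mandated AIPGF format.

```python
# A minimal prompt/traceability log record; the schema is an assumption,
# not an AIPGF-mandated format.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptLogEntry:
    prompt_id: str        # stable reference for reuse and audit
    tool: str             # approved tool used, e.g. "enterprise-copilot"
    author: str           # named human who ran the prompt
    purpose: str          # task the output supports, e.g. "risk register draft"
    sources: list[str] = field(default_factory=list)  # inputs given to the tool
    output_version: str = "v0.1"                       # version of the AI-assisted artifact
    reviewed_by: str = ""                              # HITL reviewer/approver
    rationale: str = ""                                # why the output was accepted or amended
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```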

Governance artifacts and templates

  • Create an AI Assistance Plan that states objectives, scope, approved tools, data rules, oversight, and monitoring approach.
  • Use a Clarification Questions Log to reduce ambiguity and prevent AI-generated assumptions in key artifacts.
  • Use stakeholder mapping and stakeholder insights analysis to identify who is impacted by AI use and what they need to know.
  • Use a Benefits Tracker to verify expected value from AI assistance and detect benefit shortfalls early.
  • Use a decision log to record AI-related governance decisions, rationale, approvals, and follow-up actions (see the example record after this list).
  • Use a risk register (including residual risk assessment) to track AI-related risks, controls, and owners.
  • Use a Business Case Review Checklist and Business Case Assumptions Log to validate AI-enabled proposals and guard against weak evidence.
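
As an illustration of the decision log referenced above, here is one possible record structure. The fields are assumptions chosen to cover rationale, approvals, and follow-up actions; the actual template would come from the framework materials or the PMO.

```python
# Hypothetical decision-log record for AI-related governance decisions.

from dataclasses import dataclass, field

@dataclass
class AIDecisionRecord:
    decision_id: str
    summary: str                 # e.g. "Approve enterprise-copilot for drafting the charter"
    rationale: str               # evidence and reasoning behind the decision
    decided_by: str              # accountable, named human decision maker
    approved_by: str             # separate approver where one is required
    date: str                    # ISO date of the decision
    follow_up_actions: list[str] = field(default_factory=list)
    related_risks: list[str] = field(default_factory=list)  # links into the risk register
```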

Assurance and evidence

  • Identify evidence required at governance gates to demonstrate AI assistance is controlled (artifacts, approvals, logs).
  • Select appropriate metrics to monitor AI effectiveness and risk during delivery (quality checks, review outcomes, incidents).
  • Define acceptance criteria for AI-assisted deliverables that include accuracy, appropriateness, traceability, and compliance.
  • Plan audit-ready documentation for AI usage (what to capture, where to store, who can access).
  • Distinguish between activity evidence (work done) and assurance evidence (controls effective) in AI governance.
  • Decide when independent review or assurance is needed based on risk/impact and stakeholder expectations.
  • Communicate assurance findings and required actions to stakeholders in clear project language.

Module 4: Roles, Responsibilities, and Accountabilities (13%)

Key roles and responsibilities

  • Identify core AIPGF roles and responsibilities needed to govern AI usage in a project or programme.
  • Assign AI governance responsibilities to existing project roles (e.g., PM, sponsor, PMO) without creating unnecessary new roles.
  • Explain how one person can fulfil multiple roles in AI governance and project management while avoiding conflicts of interest.
  • Distinguish between oversight/assurance roles and delivery roles for AI assistance.
  • Define responsibilities for approving AI tools and data access in the project.
  • Define responsibilities for reviewing and signing off AI-assisted deliverables (HITL accountability).
  • Identify when to involve specialist roles (legal/compliance, security, data protection, auditors) based on context.

RACI, decision rights, and governance gates

  • Build a RACI or responsibility matrix for AI governance activities across AIPGF stages (an example matrix follows this list).
  • Define decision rights for AI-related approvals (tool approval, data approval, go/no-go) and document them.
  • Use governance gates to control when AI can be introduced, scaled, or operationalised within the project.
  • Establish escalation paths for AI-related issues and define triggers for escalation.
  • Prevent accountability gaps by linking AI-assisted outputs to a named human owner and approver.
  • Resolve role ambiguity by clarifying who decides, who advises, and who assures.
  • Evaluate whether governance decisions are being made at the right level (project/programme/portfolio).
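
A RACI matrix for AI governance activities can be as small as the sketch below. The activities and role names are hypothetical; the point it illustrates is that every activity has exactly one accountable (A) named human, which is how accountability gaps are prevented.

```python
# Illustrative RACI matrix for AI governance activities.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# Activities and role names are hypothetical examples.

AI_GOVERNANCE_RACI = {
    "approve AI toolset":             {"Sponsor": "A", "PM": "R", "Security": "C", "PMO": "I"},
    "approve data use for AI":        {"Data Owner": "A", "PM": "R", "Legal/Compliance": "C", "Team": "I"},
    "review AI-assisted deliverable": {"PM": "A", "Deliverable Owner": "R", "SME": "C", "Sponsor": "I"},
    "escalate AI incident":           {"Sponsor": "A", "PM": "R", "Security": "C", "PMO": "I"},
}

def accountable_for(activity: str) -> str:
    """Return the single named 'A' role, preventing accountability gaps."""
    roles = AI_GOVERNANCE_RACI[activity]
    return next(role for role, code in roles.items() if code == "A")
```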

Stakeholder engagement and communications

  • Identify stakeholders impacted by AI assistance and assess their influence, interest, and risk concerns.
  • Use stakeholder resistance mapping to anticipate objections and plan mitigation actions.
  • Plan stakeholder engagement meetings and communication strategies for AI usage decisions and changes.
  • Align stakeholder expectations about AI capabilities, limitations, and responsibility boundaries.
  • Communicate AI usage benefits and red flags during delivery to maintain trust.
  • Select communication content and frequency appropriate to project risk tier and stakeholder needs.
  • Validate stakeholder understanding of AI governance decisions through feedback and review.

Third parties, vendors, and procurement

  • Assess governance risks when using third-party AI tools or vendors in a project (data handling, IP, security, auditability).
  • Define procurement requirements for AI tools (contractual controls, SLAs, incident response, transparency obligations).
  • Verify that third-party tool usage aligns with organizational policy and regulatory requirements.
  • Manage data sharing and access with external parties using least privilege and clear approvals.
  • Document third-party responsibilities and escalation paths within the project governance structure.
  • Plan contingency actions if a vendor/tool becomes unavailable or non-compliant (exit plan, alternative tool).
  • Ensure auditability and traceability when third parties contribute to AI-assisted deliverables.

Module 5: Principles for Responsible and Trustworthy AI (13%)

Principle: Human-Centricity

  • Define the AIPGF principle of Human-Centricity and its implications for AI assistance in projects.
  • Apply Human-Centricity by ensuring humans remain accountable for decisions and AI-assisted deliverables.
  • Select appropriate HITL checkpoints based on task criticality and risk.
  • Recognize signs of overreliance on AI and implement controls to maintain human judgment.
  • Ensure AI assistance augments team capability rather than replacing necessary expertise.
  • Use human review to detect and correct AI errors, bias, or unsafe recommendations.
  • Decide when to restrict or disable AI assistance to preserve safety, ethics, or compliance.

Principle: Transparency

  • Define the AIPGF principle of Transparency (explainable, auditable AI use) in a project context.
  • Document AI usage so stakeholders can understand where AI was used and what was produced.
  • Ensure AI-assisted recommendations include rationale and evidence, not just output text.
  • Establish traceability for AI-assisted deliverables (versioning, logs, sources, approvals).
  • Communicate limitations and uncertainty of AI outputs to stakeholders to avoid false confidence.
  • Use transparency to support audits, reviews, and post-incident investigations.
  • Identify transparency anti-patterns (undocumented prompts, hidden AI assistance) and correct them.

Principle: Adaptability (tailoring)

  • Define the AIPGF principle of Adaptability and why governance must scale with context.
  • Tailor AIPGF activities and deliverables based on project size, complexity, risk, and AI adoption maturity.
  • Decide what to simplify for low-risk projects while maintaining core controls (HITL, accountability, basic documentation).
  • Increase governance rigor for high-impact or regulated contexts (more evidence, independent review, stricter gates).
  • Adapt governance when AI usage changes during the project (new tools, new tasks, expanded scope).
  • Integrate AIPGF with different delivery approaches (Agile iterations vs stage-gated delivery).
  • Balance governance rigor with delivery efficiency (avoid bureaucracy that blocks value).

Applying principles to decisions and tradeoffs

  • Use the three principles together to evaluate a proposed AI usage approach in a project.
  • Differentiate ethical, efficient, and effective AI use and identify tradeoffs in a scenario.
  • Select actions that improve stakeholder trust without unnecessarily slowing delivery.
  • Resolve conflicts between speed and governance by adjusting scope, controls, or approvals.
  • Identify when transparency requirements override convenience (e.g., auditability for key decisions).
  • Identify when human-centricity requirements override automation (e.g., high-impact approvals).
  • Identify when adaptability requires changing governance approach mid-project (maturity changes, new regulation).

Module 6: Values, Behaviours, and Culture (12%)

AIPGF Core Values

  • List the five AIPGF Core Values (Accountability, Sensibility, Collaboration, Curiosity, Continuous Improvement) and explain their purpose.
  • Apply Accountability by assigning ownership and documenting AI-related decisions and approvals.
  • Apply Sensibility by balancing AI outputs with human judgment and project context.
  • Apply Collaboration by coordinating across project, governance, and specialist roles for AI usage.
  • Apply Curiosity by exploring AI innovations responsibly with controlled experimentation.
  • Apply Continuous Improvement by reviewing AI usage outcomes and updating practices.
  • Recognize when value conflicts arise (e.g., curiosity vs risk) and resolve them responsibly.

Core Behaviours in practice

  • Demonstrate accountability behaviours (challenge assumptions, document rationale, verify outputs) when using AI.
  • Demonstrate sensibility behaviours (sense-check, validate, avoid overconfidence) for AI-assisted deliverables.
  • Demonstrate collaboration behaviours (share learnings, agree standards, align expectations) around AI usage.
  • Demonstrate curiosity behaviours (test safely, ask better questions, learn limitations) without creating uncontrolled risk.
  • Demonstrate continuous improvement behaviours (retrospectives, metrics review, lessons learned, updates).
  • Establish team norms for responsible AI use (review practices, confidentiality, prompt hygiene).
  • Encourage a culture of escalation and learning when AI issues occur (no blame, timely reporting).

Managing resistance and adoption challenges

  • Identify common sources of resistance to AI adoption in project teams and stakeholders.
  • Use transparency and involvement to reduce stakeholder mistrust and fear of AI.
  • Plan training/upskilling and enablement for safe AI usage within the team.
  • Address data issues and tool integration problems that block responsible AI use.
  • Set clear objectives for AI assistance to avoid tool chasing and wasted effort.
  • Manage ethical concerns proactively through governance discussions and documented decisions.
  • Select change management actions that maintain momentum while preserving controls.

Spirit of the framework and pragmatic sensibility

  • Explain what it means to embrace the spirit of the AIPGF rather than rigidly following templates.
  • Apply pragmatic sensibility to decide which governance steps are essential vs optional in context.
  • Avoid checkbox governance by focusing on outcomes: controlled AI use, trust, and accountability.
  • Adapt governance practices as AI tools evolve and project needs change.
  • Use retrospectives and reviews to refine governance practices across projects and programmes.
  • Identify signs governance is too heavy or too light and adjust accordingly.
  • Maintain alignment with organizational policy and external standards while tailoring for project reality.

Module 7: Lifecycle Governance (Initiate to Operate) (13%)

Stage 1: Foundation (initiation and planning)

  • Explain the intent of AIPGF Stage 1 (Foundation) and how it maps to initiation and planning phases.
  • Establish objectives and scope of AI assistance for a project (what AI will and will not do).
  • Select relevant AI tools for the project based on fit, constraints, and governance requirements.
  • Assess data availability, quality, and rights for intended AI usage (including confidential/sensitive data constraints).
  • Enable the team to use AI responsibly (guidelines, training, roles, and HITL checkpoints).
  • Identify and plan responses for AI-related risks during the project (risk register, controls, owners).
  • Produce Stage 1 deliverables such as an AI Assistance Plan and supporting logs (clarification questions, assumptions).
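
The AI Assistance Plan named in the last objective can be thought of as a handful of structured sections. The skeleton below reuses the section names from this syllabus (objectives, scope, approved tools, data rules, oversight, monitoring); the example entries inside each section are assumptions, not prescribed content.

```python
# Hypothetical skeleton of an AI Assistance Plan as structured data.
# Section names mirror the syllabus; example entries are assumptions.

AI_ASSISTANCE_PLAN = {
    "objectives": ["Reduce drafting effort for the business case"],
    "scope": {
        "ai_may_assist_with": ["drafting", "summarising", "option analysis"],
        "ai_must_not_be_used_for": ["final approvals", "processing personal data"],
    },
    "approved_tools": ["enterprise-copilot"],
    "data_rules": {"max_classification": "internal", "retention": "per records policy"},
    "oversight": {
        "hitl_checkpoints": ["before stakeholder distribution", "at stage gates"],
        "approver": "Project Manager",
    },
    "monitoring": {
        "metrics": ["review rejection rate", "AI-related incidents"],
        "review_frequency": "each reporting period",
    },
}
```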

Stage 2: Activation (delivery and control)

  • Explain the intent of AIPGF Stage 2 (Activation) and how it maps to delivery/execution phases.
  • Operationalise the AI Assistance Plan and ensure AI usage follows approved scope and controls.
  • Facilitate ethical, efficient, and effective human-AI collaboration during delivery (HITL in practice).
  • Monitor AI effectiveness and quality of AI-assisted deliverables using agreed metrics and checks.
  • Continue to anticipate and mitigate AI-related risks as project conditions and AI usage evolve.
  • Manage AI-related issues/incidents (containment, correction, communication, escalation, rollback) when they arise.
  • Keep stakeholders informed about AI usage benefits and red flags throughout delivery.

Stage 3: Evaluation (closure and learning)

  • Explain the intent of AIPGF Stage 3 (Evaluation) and how it maps to project closure phases.
  • Evaluate AI impact on outcomes and benefits using evidence such as a Benefits Tracker and stakeholder feedback.
  • Evaluate AI decision-making processes for transparency, fairness, and accountability in the project context.
  • Document and share lessons learned about AI tool selection, usage, and governance effectiveness.
  • Identify improvement actions for future projects (better prompts, better training, better controls).
  • Decide whether to expand, reduce, or stop AI usage based on evaluation findings and risk tolerance.
  • Update organizational guidance (policies, templates, approved tools) based on project evaluation outcomes.

Cross-stage governance and tailoring

  • Explain how the three AIPGF stages form a continuous improvement cycle applied across a portfolio.
  • Tailor stage activities and deliverables to different project contexts while maintaining core controls.
  • Decide when to revisit Stage 1 decisions during Stage 2 due to scope/tool/data changes.
  • Ensure traceability of AI assistance across stages (logs, versioning, approvals, decision records).
  • Coordinate AI governance gates with existing project stage gates and Agile iterations.
  • Balance speed vs governance by adjusting scope, evidence requirements, and review depth.
  • Use stakeholder communication plans to maintain trust across all stages.

Module 8: Assurance, Metrics, and Continuous Improvement (12%)

AIPG-CMM maturity model

  • Define the AI Project Governance Capability Maturity Model (AIPG-CMM) and how it complements the AIPGF.
  • Describe the five AIPG-CMM maturity levels (Ad Hoc, Initialised, Standardised, Enterprised, Optimised); a simple ordered-scale sketch follows this list.
  • Assess what observable behaviours/artifacts indicate each maturity level in a project environment.
  • Explain why maturity models help benchmark AI governance and guide investment and improvement.
  • Identify which AIPGF elements typically improve as maturity increases (roles clarity, documentation, assurance, culture).
  • Distinguish between AI adoption maturity and AI governance maturity in projects.
  • Select next-step improvement actions based on a maturity assessment result.
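
Because the five maturity levels form an ordered scale, gap analysis is straightforward. In the sketch below, only the level names come from the syllabus; the gap calculation and the closing example are illustrative.

```python
# The five AIPG-CMM levels as an ordered scale, with a simple gap check.
# Only the level names come from the syllabus; the rest is illustrative.

AIPG_CMM_LEVELS = ["Ad Hoc", "Initialised", "Standardised", "Enterprised", "Optimised"]

def maturity_gap(current: str, target: str) -> int:
    """Levels between current and target state (negative if already beyond target)."""
    return AIPG_CMM_LEVELS.index(target) - AIPG_CMM_LEVELS.index(current)

# Example: a programme assessed as "Initialised" aiming for "Standardised"
# has a gap of one level, typically closed with defined roles, templates,
# and training rather than by chasing the score itself.
print(maturity_gap("Initialised", "Standardised"))  # 1
```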

Using assessments and improvement roadmaps

  • Use an AIPG-CMM assessment instrument to capture current state across projects/programmes.
  • Interpret assessment results to identify gaps in governance activities, roles, or controls.
  • Prioritize improvement actions to move from one maturity level to the next (policies, roles, audits, training).
  • Define ownership and timelines for governance improvement actions.
  • Track improvement progress using repeat assessments and evidence.
  • Communicate maturity assessment results to leadership and stakeholders in clear terms.
  • Avoid treating maturity scoring as the goal; focus on safer, more trustworthy AI usage outcomes.

Metrics, evidence, and benefits tracking

  • Select governance metrics that indicate AI effectiveness and control (quality checks, review outcomes, incidents).
  • Use a Benefits Tracker to validate value realization from AI assistance and adjust when benefits are not materializing.
  • Distinguish leading indicators (control compliance) from lagging indicators (incidents, rework) in AI governance.
  • Define thresholds and triggers for corrective action based on metrics (see the sketch after this list).
  • Collect evidence that supports auditability (logs, approvals, artifacts) without excessive overhead.
  • Report AI governance performance in ways that support decision-making (risk, trust, compliance).
  • Ensure metrics do not incentivize unsafe behaviour (e.g., speed over assurance).
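
Thresholds and triggers can be expressed as simple rules evaluated against observed metrics, as flagged in the objective above. The metric names, values, and corrective actions below are illustrative assumptions rather than prescribed thresholds.

```python
# Hypothetical metric thresholds and corrective-action triggers.

METRIC_THRESHOLDS = {
    "hitl_review_coverage":   {"minimum": 1.00, "trigger": "pause AI-assisted delivery until reviews catch up"},
    "output_rejection_rate":  {"maximum": 0.25, "trigger": "revisit prompts, training, or tool choice"},
    "ai_incidents_in_period": {"maximum": 0,    "trigger": "invoke incident process and notify stakeholders"},
}

def corrective_actions(observed: dict) -> list:
    """Return the triggered actions for any metric outside its threshold."""
    actions = []
    for metric, rule in METRIC_THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            continue
        if "minimum" in rule and value < rule["minimum"]:
            actions.append(rule["trigger"])
        if "maximum" in rule and value > rule["maximum"]:
            actions.append(rule["trigger"])
    return actions
```

For example, an observed output_rejection_rate of 0.4 would trigger a review of prompts and training rather than pressure for faster delivery, keeping the metric from rewarding speed over assurance.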

Continuous improvement at scale

  • Apply continuous improvement by updating templates, guidance, and training based on lessons learned.
  • Share good practices across projects/programmes to raise baseline AI governance maturity.
  • Integrate AI governance improvements with PMO/portfolio governance processes.
  • Establish feedback loops between projects and enterprise AI governance/policy owners.
  • Adapt governance as AI tools evolve (new features, new risks, new regulation).
  • Plan periodic audits/reviews of AI usage in projects and follow up on findings.
  • Build organizational capability for ethical, efficient, and effective human-AI collaboration at scale.

Tip: Drill one section at a time, then mix topics to force transfer.