PSPO-AI Syllabus — Learning Objectives & Topic Map

Blueprint-aligned PSPO-AI learning objectives with quick links to targeted practice by topic.

Use this syllabus as your PSPO-AI coverage checklist. Practice immediately after each section.

What’s covered

AI Theory and Primer (33%)

AI Fundamentals and Terminology

  • Define artificial intelligence (AI) and distinguish it from automation and rules-based systems.
  • Distinguish machine learning, deep learning, and generative AI at a conceptual level.
  • Identify examples of narrow AI versus general intelligence and set realistic expectations for current AI tools.
  • Explain what a large language model (LLM) is and what it is designed to do.
  • Match common AI terms (model, training data, inference, parameters) to their meanings at a practical level.
  • Differentiate classification, regression, clustering, and generation tasks and identify which an AI system is performing.
  • Identify typical inputs and outputs for text, image, and code generation systems and when each is appropriate.
  • Explain why many AI outputs are probabilistic and how that affects repeatability.
  • Describe what ground truth means when evaluating AI outputs and why it matters.
  • Select an AI capability (summarize, translate, brainstorm, categorize) that fits a described need without overreach.

How Generative AI Works (High Level)

  • Describe at a high level how an LLM predicts the next token to generate text.
  • Explain what agentic AI is at a high level and how it differs from simple chat-based assistance.
  • Explain what a context window is and how it limits what the model can consider.
  • Distinguish training from inference and identify what changes (and what does not) at runtime.
  • Recognize why an LLM may hallucinate and identify conditions that make it more likely (missing context, forced specificity).
  • Explain the idea of embeddings and semantic similarity in simple terms.
  • Identify when retrieval-augmented generation (RAG) is useful versus relying on the model alone.
  • Distinguish fine-tuning from prompting and choose the right approach for a stated goal.
  • Explain temperature/creativity settings in plain language and select a suitable setting for accuracy-focused tasks.
  • Describe why models can be sensitive to phrasing and information order in prompts.

AI Capabilities and Limitations

  • Identify tasks where AI excels (drafting, summarizing, pattern spotting) versus tasks where it is risky without oversight (factual verification, policy interpretation).
  • Recognize common AI failure modes: hallucination, bias, stale knowledge, overconfidence, and non-determinism.
  • Evaluate whether an AI output is trustworthy based on evidence, specificity, and verifiability.
  • Choose an appropriate human review step for AI-generated content based on risk level.
  • Distinguish correlation from causation in AI-generated insights and avoid over-interpreting patterns.
  • Identify when an AI suggestion is too generic to be actionable and choose how to request specificity.
  • Recognize when a request is underspecified for AI and choose what additional context is needed.
  • Explain why AI may reflect biases from training data and how bias can appear in outputs.
  • Identify scenarios where AI may amplify groupthink and choose a mitigation (generate alternatives, bring diverse perspectives).
  • Select a safe fallback when AI is uncertain (ask clarifying questions, provide options, defer to subject matter experts).

AI Impact on Work and Teams

  • Identify opportunities AI creates in knowledge work (speed, consistency, ideation) and trade-offs (quality, trust, ethics).
  • Recognize signs of automation bias (over-trusting AI) and choose an appropriate countermeasure.
  • Explain why transparency about AI use matters for trust and collaboration.
  • Select a task for AI augmentation versus a task that should remain human-led in a Scrum context.
  • Recognize when AI use might reduce learning within the team and choose an approach that preserves skill growth.
  • Identify when AI can reduce cognitive load (summaries, checklists) and when it can create noise.
  • Distinguish productivity gains from value delivery and choose metrics that reflect outcomes, not just speed.
  • Identify potential impacts of AI on team dynamics (confidence gaps, dependency) and choose a mitigation.
  • Explain what human-in-the-loop means and apply it to a scenario with AI-generated recommendations.
  • Identify early adoption risks (tool sprawl, inconsistent prompts, data leakage) and select a mitigation strategy.

Selecting and Using AI Responsibly (Primer)

  • Identify when a prompt includes sensitive information and choose how to redact or avoid sharing it.
  • Recognize when a request asks for private or proprietary data and choose an appropriate response.
  • Choose an approach to validate AI-generated statements (cross-check sources, test against data, peer review).
  • Select an output format to reduce ambiguity (bullet list, table, short paragraph) given a task.
  • Recognize when AI output should be labeled as a draft and when it can be used as final with review.
  • Distinguish using AI for ideation from using AI for decision-making and choose appropriate boundaries.
  • Choose a way to capture assumptions and constraints so AI outputs remain aligned with reality.
  • Recognize when AI is being asked to replace professional judgment and choose a safer framing (suggest options).
  • Identify when to stop iterating with AI due to diminishing returns and shift to human work.
  • Explain why context matters more than clever wording and select the minimal necessary context for a prompt.

AI Security and Ethics (33%)

Responsible AI Principles

  • Define responsible AI principles (accountability, transparency, fairness, safety) and match them to examples.
  • Identify ethical risks in AI use (bias, manipulation, exclusion) in workplace scenarios.
  • Choose an action that increases transparency about AI usage without oversharing sensitive details.
  • Recognize when AI use could create unfair advantage or disadvantage among team members and choose a mitigation.
  • Distinguish explainability from correctness and choose what to communicate about AI decisions.
  • Identify when consent is needed for using data in AI prompts and choose an appropriate approach.
  • Recognize when an AI-generated suggestion could harm psychological safety and choose a corrective action.
  • Choose a method to ensure inclusivity in AI-assisted communications (neutral language, accessibility).
  • Identify conflicts of interest or misuse (for example, generating performance reviews) and choose ethical boundaries.
  • Select appropriate escalation paths when AI use raises ethical concerns (manager, security, policy owner).

Privacy and Confidentiality

  • Identify types of sensitive data (PII, PHI, credentials, customer data) and choose safe handling for AI prompts.
  • Choose a safe way to summarize a confidential document using AI without exposing sensitive content.
  • Recognize when anonymization is insufficient due to re-identification risk and choose a safer approach.
  • Identify when to use synthetic examples instead of real data in AI prompts.
  • Select steps to prevent secrets leakage (API keys, passwords) when using AI tools.
  • Recognize retention risks (chat history, logs) and choose a safe usage pattern.
  • Choose appropriate access control for sharing prompts and AI outputs within a team.
  • Recognize when AI output should not be stored in a system of record without review.
  • Identify privacy implications of sending data to third-party services and choose a compliant action.
  • Choose a policy-aligned response when asked to input customer data into an AI assistant.
  • Distinguish public domain, copyrighted, and proprietary content in the context of AI-assisted creation.
  • Identify risks of plagiarism or license violations in AI-generated text and choose mitigations.
  • Choose when to cite sources or provide attribution for AI-assisted outputs in professional settings.
  • Recognize that AI may reproduce memorized text and choose how to avoid copying verbatim.
  • Identify when AI-generated content could create IP ownership ambiguity and choose a safe policy-friendly approach.
  • Select an approach for using AI to rephrase content without changing meaning while avoiding infringement.
  • Recognize when generating code or templates may implicate licensing and choose a mitigation (review, use approved sources).
  • Choose a safe way to include AI-generated diagrams or frameworks in internal materials (label as draft, verify).
  • Identify when AI-generated content could violate a non-disclosure agreement (NDA) and choose an alternative.
  • Choose how to handle AI-generated suggestions that conflict with organizational standards or legal guidance.

Security Threats and Mitigations

  • Define prompt injection and identify it in a scenario involving untrusted content.
  • Recognize data exfiltration attempts through prompts and choose a safe response.
  • Identify social engineering risks when AI outputs appear authoritative and choose verification steps.
  • Choose safe practices when using AI with external inputs (documents, emails) to reduce injection risk.
  • Recognize when an AI tool is requesting excessive permissions and choose a safer configuration.
  • Identify hallucinated security advice and choose to consult authoritative security guidance.
  • Choose an approach to sanitize and validate AI-generated content before publishing or sending to customers.
  • Recognize potential vulnerabilities introduced by AI-generated code snippets and choose review and testing steps.
  • Identify when AI could be used to generate phishing content and choose preventative training or controls.
  • Choose how to report and respond to suspected AI-related security incidents.

Governance, Risk Management, and Compliance

  • Explain why AI governance is needed and identify typical guardrails (approved tools, data rules, review).
  • Identify when AI use must follow regulatory requirements (privacy, financial, health) and choose to consult policy.
  • Choose a risk-based approach to AI usage where high-impact tasks require stronger review and controls.
  • Recognize when auditability matters for AI-assisted decisions and choose what documentation to keep.
  • Identify when to involve legal, security, or privacy stakeholders in AI adoption decisions.
  • Choose an appropriate human accountability statement for AI-assisted outputs.
  • Recognize when AI use conflicts with organizational ethics and choose an escalation or stop action.
  • Identify metrics to monitor AI usage quality and risk (error rates, leakage incidents, user feedback).
  • Choose governance for prompt libraries (ownership, review cycles, deprecation) that reduces risk.
  • Identify when AI policy should be updated based on new risks or incidents and choose next steps.

AI Product Ownership (34%)

Product Vision, Strategy, and Roadmap with AI

  • Choose how to use AI to synthesize market and stakeholder inputs into a draft product vision while keeping Product Owner accountability.
  • Recognize when an AI-generated strategy is too generic and refine it using explicit constraints, differentiation, and intended outcomes.
  • Use AI to draft hypotheses and assumptions behind a proposed product strategy and plan how to validate them.
  • Choose how to use AI to create outcome-based roadmap options without committing to unverified dates or scope.
  • Evaluate tradeoffs in AI-assisted roadmap prioritization using value, risk, evidence, and dependencies.
  • Use AI to extract stakeholder goals and potential conflicts from inputs while preserving nuance and uncertainty.
  • Choose how to document product decision rationale, assumptions, and uncertainty for transparency.
  • Recognize when AI is being used to justify a predetermined decision and choose a more evidence-driven approach.
  • Select safe inputs and redaction practices when using AI to draft product vision or strategy artifacts.
  • Use AI to draft a concise product narrative (problem, users, value) and plan a human review with stakeholders.

Product Discovery and User Understanding with AI

  • Choose how to use AI to summarize user research notes into themes while avoiding over-generalization.
  • Recognize when an AI summary misrepresents minority or edge-case feedback and correct it using evidence.
  • Use AI to draft personas or jobs-to-be-done from existing evidence and decide how to validate them.
  • Choose how to generate interview questions and discovery prompts with AI while avoiding leading questions and bias.
  • Choose how to analyze support tickets and product feedback with AI while protecting customer privacy.
  • Use AI to generate alternative solution options and avoid fixation on the first plausible answer.
  • Use AI to draft experiment ideas and define what to measure to validate learning.
  • Recognize when AI cannot substitute for direct user research and decide when to do human discovery work.
  • Use AI to draft a competitive analysis outline and validate key claims with credible sources.
  • Choose how to communicate discovery insights and uncertainty to the Scrum Team and stakeholders transparently.

Backlog Management and Refinement with AI

  • Choose AI-assisted approaches to rewrite Product Backlog Items for clarity and testability without changing intent.
  • Choose how to use AI to suggest splitting strategies for large backlog items into smaller value slices and validate them with the Developers.
  • Choose how to use AI to draft acceptance criteria that align to intent and support a shared Definition of Done.
  • Recognize when AI-generated acceptance criteria are unverifiable or too broad and revise them.
  • Select AI-assisted methods to identify duplicates, overlaps, and missing information in backlog items.
  • Choose how to use AI to surface dependencies, assumptions, and risks from backlog text without making up facts.
  • Choose how to use AI to draft lightweight user stories appropriate to the product context and stakeholder needs.
  • Recognize when AI output introduces unnecessary process and choose a leaner backlog-friendly alternative.
  • Choose how to use AI to prepare refinement questions that reduce uncertainty and improve ordering decisions.
  • Choose how to share AI-assisted backlog drafts with Developers to co-create understanding and avoid one-way handoffs.

Ordering, Value, and Stakeholder Alignment with AI

  • Use AI to propose Product Backlog ordering options based on value, risk, and dependencies while keeping the final decision PO-owned.
  • Recognize automation bias in prioritization and apply an evidence check before changing ordering.
  • Choose how to use AI to draft a value argument or business case for a backlog item and validate assumptions.
  • Choose how to translate stakeholder requests into outcomes and actionable backlog items using AI as support.
  • Select how to handle conflicting stakeholder priorities using AI outputs as input, not as authority.
  • Choose how to use AI to draft negotiation and communication options that maintain transparency and trust.
  • Use AI to identify hidden assumptions in stakeholder requests and decide what to clarify first.
  • Choose metrics or evidence that best supports an ordering decision (customer impact, cost of delay, risk).
  • Recognize when AI is optimizing proxy metrics and refocus on outcome-based measures of value.
  • Communicate product decisions and rationale clearly, including AI limitations and uncertainty when applicable.

Release Planning, Measurement, and Continuous Learning with AI

  • Choose how to use AI to draft release notes and stakeholder updates that are accurate and avoid overpromising.
  • Choose how to use AI to propose an outcome measurement plan and validate it with baselines and stakeholders.
  • Use AI to analyze post-release feedback for themes and decide what to do next based on evidence.
  • Recognize when an AI summary hides important caveats and request stronger evidence and nuance.
  • Choose monitoring and guardrail signals for AI-enabled features (quality, safety, bias, drift) and plan response actions.
  • Decide when to run experiments (prototype, A/B test) versus proceed to a broader rollout based on risk and uncertainty.
  • Choose how to use AI to synthesize lessons learned about product outcomes into actionable follow-ups.
  • Choose how to maintain a prompt library or product knowledge base with ownership, review, and deprecation.
  • Identify when to stop using AI in a decision workflow due to risk and switch to a human-led approach.
  • Choose how to collaborate with Developers and stakeholders on operational considerations for AI-enabled product features.

Prompting Fundamentals

  • Identify the goal of a prompt and rewrite a vague prompt into a clear objective.
  • Choose what context to include in a prompt (audience, constraints, definitions) to reduce ambiguity.
  • Select appropriate constraints (length, tone, structure) to get usable AI outputs.
  • Choose an output format (bullets, table, checklist, template) that matches a Scrum task.
  • Recognize when to specify assumptions versus ask the model to ask clarifying questions first.
  • Choose how to provide examples to guide style without over-constraining content.
  • Identify when role or persona prompting is helpful and when it can introduce bias.
  • Recognize prompt scope creep and choose to narrow to one decision or artifact.
  • Choose how to include non-goals in a prompt to avoid unwanted content.
  • Identify signals that a prompt is too leading and rewrite it to be more neutral.

Iterative Prompting and Dialogue

  • Choose an effective follow-up question to improve an initial AI draft.
  • Identify when to request multiple options and trade-offs instead of one answer.
  • Choose how to ask the AI to critique its own output for risks, assumptions, and gaps.
  • Recognize when the model needs missing facts and choose to provide them or gather them.
  • Choose how to ask for a step-by-step plan without turning the output into unnecessary process.
  • Identify when to use decomposition (break a problem into parts) to improve output quality.
  • Choose a technique to reduce hallucinations (ask for uncertainty, require verifiable claims).
  • Recognize when the AI is stuck in repetition and choose a reset strategy (reframe, new constraints).
  • Choose how to capture prompt, context, and output as reusable assets for the team.
  • Identify when to stop iterating and switch to human decision-making.

Prompts for Product Owner Contexts

  • Choose a prompt that drafts a clear Product Goal or outcome statement with explicit constraints and success criteria.
  • Choose a prompt that synthesizes user research notes into themes and a draft persona/jobs-to-be-done summary without over-claiming.
  • Choose a prompt that turns discovery insights into draft Product Backlog Items with a consistent structure.
  • Choose a prompt that identifies assumptions, risks, and clarifying questions for a backlog item without making up facts.
  • Choose a prompt that proposes backlog ordering options and tradeoffs based on value, risk, and dependencies.
  • Choose a prompt that drafts a stakeholder update or release communication that stays truthful and transparent.
  • Choose a prompt that drafts an experiment plan (hypothesis, metric, success criteria) for validating a product decision.
  • Choose a prompt that summarizes customer feedback into themes while preserving caveats and uncertainty.
  • Choose a prompt that prepares a refinement or stakeholder workshop agenda with desired outcomes and timeboxes.
  • Choose a prompt that generates alternative solution options while respecting non-goals, constraints, and responsible AI boundaries.

Validating and Refining AI Outputs

  • Choose a method to fact-check AI outputs against authoritative sources or internal documentation.
  • Recognize when an AI output includes fabricated citations and choose an appropriate response.
  • Choose a way to test AI-generated recommendations with small experiments before broad adoption.
  • Identify when AI-generated content conflicts with Scrum rules and choose to correct it to Scrum Guide intent.
  • Choose how to validate AI-generated acceptance criteria with the Product Owner and Developers.
  • Recognize when AI output lacks traceability to inputs and choose to request evidence or assumptions.
  • Choose how to detect subtle bias in AI-generated language and revise for inclusivity.
  • Identify when to run AI-generated content through security or privacy review before sharing externally.
  • Choose a technique to ensure AI summaries preserve key caveats and constraints.
  • Recognize when an AI output is too verbose for mobile reading and choose how to request a tighter version.

Workflow Integration and AI-Informed Behaviors

  • Choose how to introduce AI tools to a Scrum Team with clear guardrails and an opt-in culture.
  • Identify ways to avoid tool sprawl and standardize on a small set of approved AI workflows.
  • Choose how to build a shared prompt library and keep it updated through retrospectives.
  • Recognize when AI is being used to bypass collaboration and choose to bring conversation back to the team.
  • Choose how to measure whether AI use is improving outcomes (cycle time, quality, stakeholder satisfaction) rather than activity.
  • Identify opportunities for AI to reduce waste in knowledge work while maintaining quality.
  • Choose how to maintain transparency with stakeholders about AI-assisted deliverables.
  • Recognize AI-related challenges (skills gap, trust issues) and choose training and coaching actions.
  • Choose how to use AI to improve outcomes ethically (faster learning, better decisions) without cutting corners.
  • Identify when AI adoption decisions should be revisited based on new risks, policies, or results.

Tip: Drill one section at a time, then mix topics to force transfer.