PSM-AI Syllabus — Learning Objectives & Topic Map

Blueprint-aligned PSM-AI learning objectives with quick links to targeted practice by topic.

Use this syllabus as your PSM-AI coverage checklist. Practice immediately after each section.

What’s covered

AI Theory and Primer (25%)

AI Fundamentals and Terminology

  • Define artificial intelligence (AI) and distinguish it from automation and rules-based systems.
  • Distinguish machine learning, deep learning, and generative AI at a conceptual level.
  • Identify examples of narrow AI versus general intelligence and set realistic expectations for current AI tools.
  • Explain what a large language model (LLM) is and what it is designed to do.
  • Match common AI terms (model, training data, inference, parameters) to their meanings at a practical level.
  • Differentiate classification, regression, clustering, and generation tasks and identify which an AI system is performing.
  • Identify typical inputs and outputs for text, image, and code generation systems and when each is appropriate.
  • Explain why many AI outputs are probabilistic and how that affects repeatability.
  • Describe what ground truth means when evaluating AI outputs and why it matters.
  • Select an AI capability (summarize, translate, brainstorm, categorize) that fits a described need without overreach.

How Generative AI Works (High Level)

  • Describe at a high level how an LLM predicts the next token to generate text.
  • Explain what agentic AI is at a high level and how it differs from simple chat-based assistance.
  • Explain what a context window is and how it limits what the model can consider.
  • Distinguish training from inference and identify what changes (and what does not) at runtime.
  • Recognize why an LLM may hallucinate and identify conditions that make it more likely (missing context, forced specificity).
  • Explain the idea of embeddings and semantic similarity in simple terms.
  • Identify when retrieval-augmented generation (RAG) is useful versus relying on the model alone.
  • Distinguish fine-tuning from prompting and choose the right approach for a stated goal.
  • Explain temperature/creativity settings in plain language and select a suitable setting for accuracy-focused tasks.
  • Describe why models can be sensitive to phrasing and information order in prompts.

AI Capabilities and Limitations

  • Identify tasks where AI excels (drafting, summarizing, pattern spotting) versus tasks where it is risky without oversight (factual verification, policy interpretation).
  • Recognize common AI failure modes: hallucination, bias, stale knowledge, overconfidence, and non-determinism.
  • Evaluate whether an AI output is trustworthy based on evidence, specificity, and verifiability.
  • Choose an appropriate human review step for AI-generated content based on risk level.
  • Distinguish correlation from causation in AI-generated insights and avoid over-interpreting patterns.
  • Identify when an AI suggestion is too generic to be actionable and choose how to request specificity.
  • Recognize when a request is underspecified for AI and choose what additional context is needed.
  • Explain why AI may reflect biases from training data and how bias can appear in outputs.
  • Identify scenarios where AI may amplify groupthink and choose a mitigation (generate alternatives, bring diverse perspectives).
  • Select a safe fallback when AI is uncertain (ask clarifying questions, provide options, defer to subject matter experts).

AI Impact on Work and Teams

  • Identify opportunities AI creates in knowledge work (speed, consistency, ideation) and trade-offs (quality, trust, ethics).
  • Recognize signs of automation bias (over-trusting AI) and choose an appropriate countermeasure.
  • Explain why transparency about AI use matters for trust and collaboration.
  • Select a task for AI augmentation versus a task that should remain human-led in a Scrum context.
  • Recognize when AI use might reduce learning within the team and choose an approach that preserves skill growth.
  • Identify when AI can reduce cognitive load (summaries, checklists) and when it can create noise.
  • Distinguish productivity gains from value delivery and choose metrics that reflect outcomes, not just speed.
  • Identify potential impacts of AI on team dynamics (confidence gaps, dependency) and choose a mitigation.
  • Explain what human-in-the-loop means and apply it to a scenario with AI-generated recommendations.
  • Identify early adoption risks (tool sprawl, inconsistent prompts, data leakage) and select a mitigation strategy.

Selecting and Using AI Responsibly (Primer)

  • Identify when a prompt includes sensitive information and choose how to redact or avoid sharing it.
  • Recognize when a request asks for private or proprietary data and choose an appropriate response.
  • Choose an approach to validate AI-generated statements (cross-check sources, test against data, peer review).
  • Select an output format to reduce ambiguity (bullet list, table, short paragraph) given a task.
  • Recognize when AI output should be labeled as a draft and when it can be used as final with review.
  • Distinguish using AI for ideation from using AI for decision-making and choose appropriate boundaries.
  • Choose a way to capture assumptions and constraints so AI outputs remain aligned with reality.
  • Recognize when AI is being asked to replace professional judgment and choose a safer framing (suggest options).
  • Identify when to stop iterating with AI due to diminishing returns and shift to human work.
  • Explain why context matters more than clever wording and select the minimal necessary context for a prompt.

AI Security and Ethics (25%)

Responsible AI Principles

  • Define responsible AI principles (accountability, transparency, fairness, safety) and match them to examples.
  • Identify ethical risks in AI use (bias, manipulation, exclusion) in workplace scenarios.
  • Choose an action that increases transparency about AI usage without oversharing sensitive details.
  • Recognize when AI use could create unfair advantage or disadvantage among team members and choose a mitigation.
  • Distinguish explainability from correctness and choose what to communicate about AI decisions.
  • Identify when consent is needed for using data in AI prompts and choose an appropriate approach.
  • Recognize when an AI-generated suggestion could harm psychological safety and choose a corrective action.
  • Choose a method to ensure inclusivity in AI-assisted communications (neutral language, accessibility).
  • Identify conflicts of interest or misuse (for example, generating performance reviews) and choose ethical boundaries.
  • Select appropriate escalation paths when AI use raises ethical concerns (manager, security, policy owner).

Privacy and Confidentiality

  • Identify types of sensitive data (PII, PHI, credentials, customer data) and choose safe handling for AI prompts.
  • Choose a safe way to summarize a confidential document using AI without exposing sensitive content.
  • Recognize when anonymization is insufficient due to re-identification risk and choose a safer approach.
  • Identify when to use synthetic examples instead of real data in AI prompts.
  • Select steps to prevent secrets leakage (API keys, passwords) when using AI tools.
  • Recognize retention risks (chat history, logs) and choose a safe usage pattern.
  • Choose appropriate access control for sharing prompts and AI outputs within a team.
  • Recognize when AI output should not be stored in a system of record without review.
  • Identify privacy implications of sending data to third-party services and choose a compliant action.
  • Choose a policy-aligned response when asked to input customer data into an AI assistant.
  • Distinguish public domain, copyrighted, and proprietary content in the context of AI-assisted creation.
  • Identify risks of plagiarism or license violations in AI-generated text and choose mitigations.
  • Choose when to cite sources or provide attribution for AI-assisted outputs in professional settings.
  • Recognize that AI may reproduce memorized text and choose how to avoid copying verbatim.
  • Identify when AI-generated content could create IP ownership ambiguity and choose a safe policy-friendly approach.
  • Select an approach for using AI to rephrase content without changing meaning while avoiding infringement.
  • Recognize when generating code or templates may implicate licensing and choose a mitigation (review, use approved sources).
  • Choose a safe way to include AI-generated diagrams or frameworks in internal materials (label as draft, verify).
  • Identify when AI-generated content could violate a non-disclosure agreement (NDA) and choose an alternative.
  • Choose how to handle AI-generated suggestions that conflict with organizational standards or legal guidance.

Security Threats and Mitigations

  • Define prompt injection and identify it in a scenario involving untrusted content.
  • Recognize data exfiltration attempts through prompts and choose a safe response.
  • Identify social engineering risks when AI outputs appear authoritative and choose verification steps.
  • Choose safe practices when using AI with external inputs (documents, emails) to reduce injection risk.
  • Recognize when an AI tool is requesting excessive permissions and choose a safer configuration.
  • Identify hallucinated security advice and choose to consult authoritative security guidance.
  • Choose an approach to sanitize and validate AI-generated content before publishing or sending to customers.
  • Recognize potential vulnerabilities introduced by AI-generated code snippets and choose review and testing steps.
  • Identify when AI could be used to generate phishing content and choose preventative training or controls.
  • Choose how to report and respond to suspected AI-related security incidents.

Governance, Risk Management, and Compliance

  • Explain why AI governance is needed and identify typical guardrails (approved tools, data rules, review).
  • Identify when AI use must follow regulatory requirements (privacy, financial, health) and choose to consult policy.
  • Choose a risk-based approach to AI usage where high-impact tasks require stronger review and controls.
  • Recognize when auditability matters for AI-assisted decisions and choose what documentation to keep.
  • Identify when to involve legal, security, or privacy stakeholders in AI adoption decisions.
  • Choose an appropriate human accountability statement for AI-assisted outputs.
  • Recognize when AI use conflicts with organizational ethics and choose an escalation or stop action.
  • Identify metrics to monitor AI usage quality and risk (error rates, leakage incidents, user feedback).
  • Choose governance for prompt libraries (ownership, review cycles, deprecation) that reduces risk.
  • Identify when AI policy should be updated based on new risks or incidents and choose next steps.

AI for Scrum Masters (25%)

Scrum Master Accountabilities with AI Support

  • Identify Scrum Master accountabilities that can be augmented by AI (coaching, facilitation preparation) without delegating responsibility.
  • Choose how to use AI to craft coaching questions that encourage self-management rather than prescribe answers.
  • Recognize when AI advice could undermine team ownership and choose a more empowering approach.
  • Select AI-assisted ways to surface impediments from notes or logs while protecting confidentiality.
  • Choose how to use AI to prepare for difficult conversations while preserving empathy and context.
  • Identify when AI can help with conflict resolution planning versus when to involve a human mediator.
  • Choose how to use AI to create learning materials about Scrum concepts aligned to the Scrum Guide.
  • Recognize when AI-generated policies add unnecessary process and choose to keep Scrum lightweight.
  • Choose an approach to use AI for timeboxing and agenda creation that supports empiricism.
  • Identify risks of using AI to evaluate individuals and choose a team-focused alternative.

Supporting Scrum Events with AI

  • Choose a safe way to use AI to draft a Sprint Planning agenda based on a clear Sprint Goal and constraints.
  • Identify how AI can help prepare Product Backlog refinement prompts and questions.
  • Choose how to use AI during a Daily Scrum without turning it into a status meeting.
  • Recognize when AI-generated action items from a Retrospective need human validation and prioritization.
  • Select AI-assisted techniques to synthesize stakeholder feedback for Sprint Review while preserving nuance.
  • Choose how to use AI to propose Retrospective activities appropriate to the team's situation.
  • Identify when AI-generated best practices conflict with Scrum Guide intent and choose a correction.
  • Choose how to use AI to draft communications after Sprint Review that remain truthful and transparent.
  • Recognize when AI suggestions for meeting frequency or timeboxes are off-Scrum and choose Scrum-aligned guidance.
  • Select how to capture and summarize decisions from Scrum events without exposing sensitive details.

Supporting Artifacts and Backlog Management with AI

  • Choose AI-assisted approaches to rewrite Product Backlog Items for clarity without changing intent.
  • Identify how AI can help split a large backlog item into smaller items while keeping value focus.
  • Choose how to use AI to propose acceptance criteria consistent with a Definition of Done.
  • Recognize when AI-generated estimates are unreliable and choose a team-based estimation approach.
  • Select AI-assisted methods to identify duplicate backlog items or unclear requirements.
  • Choose how to use AI to generate test ideas that support built-in quality.
  • Identify when AI suggestions could create scope creep and choose a focus-preserving alternative.
  • Choose how AI can help draft a Definition of Done checklist while ensuring the team owns it.
  • Recognize when AI-generated user stories introduce roles outside Scrum and adjust to Scrum Team context.
  • Choose how to use AI to analyze a Sprint Backlog for WIP overload signals and propose an appropriate response.

Improving Collaboration and Communication with AI

  • Choose AI-assisted ways to translate or simplify communications for diverse stakeholders while preserving meaning.
  • Identify how AI can help craft stakeholder messages that are transparent and avoid overpromising.
  • Choose how to use AI to prepare for workshops (objectives, activities, materials) with clear outcomes.
  • Recognize risks of using AI to generate persuasive messaging and choose ethical communication boundaries.
  • Select AI-assisted methods to summarize meeting notes and highlight decisions and action items accurately.
  • Choose how to use AI to identify misalignments in stakeholder expectations and propose clarifying questions.
  • Recognize when AI can amplify misunderstandings and choose to verify through direct conversation.
  • Choose AI-assisted collaboration techniques that enhance psychological safety rather than replace human interaction.
  • Identify when AI-generated feedback phrasing could be demotivating and choose a better framing.
  • Choose how to use AI to create a shared glossary or team agreements while ensuring team buy-in.

Continuous Improvement, Metrics, and Experimentation with AI

  • Choose how to use AI to propose improvement experiments grounded in observed problems and evidence.
  • Identify metrics appropriate for Scrum teams (value, quality, predictability) and avoid vanity metrics.
  • Choose how to use AI to analyze qualitative feedback for themes without misrepresenting minority viewpoints.
  • Recognize when AI-generated trend analysis is unsupported and choose to validate with data.
  • Select AI-assisted ways to generate hypotheses about flow bottlenecks while keeping causality cautious.
  • Choose how to use AI to draft a simple improvement plan with timeboxes and success measures.
  • Recognize when AI is suggesting optimization that harms empiricism and choose a better approach.
  • Choose an approach to use AI for backlog health checks (clarity, ordering, dependencies) while keeping Product Owner accountability.
  • Identify when AI may propose changes that reduce built-in quality and choose to protect quality.
  • Choose how to use AI to maintain a learning backlog or prompt library for continuous improvement.

Effective AI Prompting (25%)

Prompting Fundamentals

  • Identify the goal of a prompt and rewrite a vague prompt into a clear objective.
  • Choose what context to include in a prompt (audience, constraints, definitions) to reduce ambiguity.
  • Select appropriate constraints (length, tone, structure) to get usable AI outputs.
  • Choose an output format (bullets, table, checklist, template) that matches a Scrum task.
  • Recognize when to specify assumptions versus ask the model to ask clarifying questions first.
  • Choose how to provide examples to guide style without over-constraining content.
  • Identify when role or persona prompting is helpful and when it can introduce bias.
  • Recognize prompt scope creep and choose to narrow to one decision or artifact.
  • Choose how to include non-goals in a prompt to avoid unwanted content.
  • Identify signals that a prompt is too leading and rewrite it to be more neutral.

Iterative Prompting and Dialogue

  • Choose an effective follow-up question to improve an initial AI draft.
  • Identify when to request multiple options and trade-offs instead of one answer.
  • Choose how to ask the AI to critique its own output for risks, assumptions, and gaps.
  • Recognize when the model needs missing facts and choose to provide them or gather them.
  • Choose how to ask for a step-by-step plan without turning the output into unnecessary process.
  • Identify when to use decomposition (break a problem into parts) to improve output quality.
  • Choose a technique to reduce hallucinations (ask for uncertainty, require verifiable claims).
  • Recognize when the AI is stuck in repetition and choose a reset strategy (reframe, new constraints).
  • Choose how to capture prompt, context, and output as reusable assets for the team.
  • Identify when to stop iterating and switch to human decision-making.

Prompts for Scrum Contexts

  • Choose a prompt that helps generate Sprint Planning questions aligned to a Sprint Goal.
  • Choose a prompt to generate Retrospective questions that focus on one improvement theme.
  • Choose a prompt to summarize a Sprint Review into stakeholder-friendly updates.
  • Choose a prompt to identify risks and assumptions in a Product Backlog Item without making up facts.
  • Choose a prompt to propose user story splits based on value slices and dependencies.
  • Choose a prompt to generate coaching questions for a Scrum Master conversation.
  • Choose a prompt to draft a Definition of Done that the team can adapt.
  • Choose a prompt to convert messy notes into clear action items with owners and timeboxes.
  • Choose a prompt to facilitate conflict resolution by exploring perspectives and needs.
  • Choose a prompt to prepare a workshop agenda with timeboxes and desired outcomes.

Validating and Refining AI Outputs

  • Choose a method to fact-check AI outputs against authoritative sources or internal documentation.
  • Recognize when an AI output includes fabricated citations and choose an appropriate response.
  • Choose a way to test AI-generated recommendations with small experiments before broad adoption.
  • Identify when AI-generated content conflicts with Scrum rules and choose to correct it to Scrum Guide intent.
  • Choose how to validate AI-generated acceptance criteria with the Product Owner and Developers.
  • Recognize when AI output lacks traceability to inputs and choose to request evidence or assumptions.
  • Choose how to detect subtle bias in AI-generated language and revise for inclusivity.
  • Identify when to run AI-generated content through security or privacy review before sharing externally.
  • Choose a technique to ensure AI summaries preserve key caveats and constraints.
  • Recognize when an AI output is too verbose for mobile reading and choose how to request a tighter version.

Workflow Integration and AI-Informed Behaviors

  • Choose how to introduce AI tools to a Scrum Team with clear guardrails and an opt-in culture.
  • Identify ways to avoid tool sprawl and standardize on a small set of approved AI workflows.
  • Choose how to build a shared prompt library and keep it updated through retrospectives.
  • Recognize when AI is being used to bypass collaboration and choose to bring conversation back to the team.
  • Choose how to measure whether AI use is improving outcomes (cycle time, quality, stakeholder satisfaction) rather than activity.
  • Identify opportunities for AI to reduce waste in knowledge work while maintaining quality.
  • Choose how to maintain transparency with stakeholders about AI-assisted deliverables.
  • Recognize AI-related challenges (skills gap, trust issues) and choose training and coaching actions.
  • Choose how to use AI to improve outcomes ethically (faster learning, better decisions) without cutting corners.
  • Identify when AI adoption decisions should be revisited based on new risks, policies, or results.

Tip: Drill one section at a time, then mix topics to force transfer.