Scrum.org PSM-AI Practice Test

Practice Scrum.org PSM-AI with free sample questions, timed mock exams, and detailed explanations for Scrum roles, events, and decision-making.

PSM-AI is Scrum.org’s AI Essentials assessment for Scrum Masters who need to use AI responsibly without weakening Scrum fundamentals. If you are searching for PSM-AI sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on the web and continue on iOS or Android with the same account.

Choose PSM-AI when your decisions live inside Scrum Master work: facilitation, coaching, event support, prompting, and ethical AI use around the Scrum Team. If your role is Product Owner, compare PSPO-AI. If you need a broader AI initiative-management route, compare PMI-CPMAI or AIPM.

Interactive Practice Center

Start a practice session for Scrum.org Professional Scrum Master - AI Essentials (PSM-AI) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.

Use it on iPhone or Android too: get PM Mastery on the App Store or PM Mastery on Google Play and sign in with the same account you use on the web. The same subscription works across web and mobile.

What this PSM-AI practice page gives you

  • A direct route into the PM Mastery simulator for PSM-AI.
  • Topic drills, mixed sets, and timed practice across AI foundations, ethics, Scrum Master use cases, and prompting.
  • Detailed explanations that show why the safest and most useful answer is right.
  • 24 on-page sample questions plus access to a larger PM Mastery library with 2,400+ PSM-AI practice questions.
  • A clear free-preview path before you subscribe.
  • The same account across web and mobile.

PSM-AI exam snapshot

  • Vendor: Scrum.org
  • Official exam name: Scrum.org Professional Scrum Master - AI Essentials (PSM-AI)
  • Exam code: PSM-AI
  • Questions: 20
  • Time limit: 45 minutes
  • Pass mark: 85%

PSM-AI rewards answers that combine practical AI use with Scrum-friendly leadership, ethical judgment, strong prompt design, and clear awareness of security and governance risks.

Choose the right AI + Scrum route

  • PSM-AI: best for Scrum Masters, agile coaches, facilitation, and team-level AI use.
  • PSPO-AI: best for Product Owners, backlog quality, discovery, and value decisions.
  • PMI-CPMAI: best when the work is a full AI initiative with data, evaluation, governance, and operations.
  • AIPM: best when you want broader AI-driven project-management coverage beyond Scrum role questions.

Topic coverage for PSM-AI practice

Topic                  | Weight | Estimated questions
AI Theory and Primer   | 25%    | 5
AI Security and Ethics | 25%    | 5
AI for Scrum Masters   | 25%    | 5
Effective AI Prompting | 25%    | 5

How to use the PSM-AI simulator efficiently

  1. Start with one topic at a time and run a short focused drill.
  2. Review every miss until you can explain the ethical, security, or Scrum leadership logic behind the best answer.
  3. Move into mixed sets once you can switch between prompting, governance, and practical Scrum Master use cases smoothly.
  4. Finish with timed runs to build confidence across the full 20-question assessment.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full PSM-AI practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

24 PSM-AI sample questions with detailed explanations

These sample questions include the same mix of single-answer and multiple-response items you should practice for PSM-AI. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.

Question 1

Topic: AI Security and Ethics

A Scrum Team wants to use an AI assistant to propose edge-case test ideas based on recent production defect reports. The defect reports include customer names, email addresses, and account IDs, but the team only needs the AI to understand the data shape and typical error patterns.

Which practice best fits this situation?

  • A. Use the real defect reports, but add a note that the AI output must be reviewed by humans
  • B. Replace the real defect details with synthetic examples that preserve the structure and patterns
  • C. Send the real defect reports and rely on the AI provider’s terms to keep them confidential
  • D. Use the real defect reports and disclose to stakeholders that AI was used

Best answer: B

Explanation: Use synthetic examples when real data contains sensitive information and the prompt’s goal can be achieved without it. Here, the AI needs only the data format and failure patterns, not customer identifiers. Substituting synthetic, representative samples supports privacy and confidentiality while still enabling effective prompting.

The core principle is minimizing sensitive data exposure in AI prompts. If the task can be accomplished with representative structure (fields, ranges, formats, and typical failures) rather than actual customer content, prefer synthetic examples (or heavily generalized samples) to avoid sharing personally identifiable or confidential information.

A practical approach is:

  • Keep the same schema and edge-case patterns (e.g., missing fields, invalid formats).
  • Replace identifiers and content with invented values.
  • Include only what the AI needs to generate useful test ideas.
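
As a minimal sketch, assuming defect reports arrive as Python dictionaries (the field names here are hypothetical), a synthetic record could be built like this:

```python
import random
import string

# Hypothetical field names; adapt them to your real defect-report schema.
def synthesize_defect(real: dict) -> dict:
    """Return a record with the same shape and failure pattern,
    but invented identifiers and content."""
    return {
        # Identifiers and personal content are replaced with invented values.
        "customer_name": "Pat Example",
        "email": f"user{random.randint(1, 9999)}@example.com",  # reserved test domain
        "account_id": "ACC-" + "".join(random.choices(string.digits, k=8)),
        # Structure and error patterns are preserved so the AI can still
        # propose relevant edge-case test ideas.
        "error_code": real["error_code"],
        "failure_step": real["failure_step"],
    }
```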

Transparency, human review, and accountability still matter, but they do not mitigate the privacy risk of sending unnecessary real customer data.


Question 2

Topic: AI Theory and Primer

A Scrum Master has a 6-page transcript from a Sprint Retrospective. The Scrum Team wants to share a brief recap with management that includes key themes, decisions made, and action items. They explicitly want to avoid adding new ideas or interpretations, and the team will review the output before sending it.

Which AI capability best fits this need without overreach?

  • A. Summarize the transcript into key themes, decisions, and action items
  • B. Categorize each comment into predefined sentiment and topic labels
  • C. Translate the transcript into management’s preferred language
  • D. Brainstorm additional improvement experiments based on the transcript

Best answer: A

Explanation: The task is to condense existing information into a short recap while avoiding new ideas or interpretation. Summarization is designed for this and can be validated by having the team review the output before sharing it. This aligns with accountable, empiricism-supporting use of AI.

Choose the capability that matches the outcome requested and minimizes the risk of the AI “doing more” than asked. Here, the team needs a shorter version of what already happened: themes, decisions, and action items derived from the transcript. That is a summarization task.

To keep it responsible and empiricism-aligned:

  • Ask for a faithful summary and explicitly forbid new recommendations.
  • Treat the output as a draft.
  • Validate against the transcript and with the Scrum Team before sharing.
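
One way to encode those constraints is a simple prompt builder; this Python sketch is illustrative wording, not an official template:

```python
def build_recap_prompt(transcript: str) -> str:
    """Build a summarization prompt with explicit guardrails against overreach."""
    return (
        "Summarize the Sprint Retrospective transcript below for management.\n"
        "Include ONLY key themes, decisions made, and action items.\n"
        "Do NOT add new ideas, recommendations, or interpretations.\n"
        "If something is ambiguous in the transcript, mark it [UNCLEAR] "
        "instead of guessing.\n\n"
        f"Transcript:\n{transcript}"
    )
```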

Other capabilities may be useful in different situations, but they do not directly produce the requested recap and can increase the chance of overreach.


Question 3

Topic: Effective AI Prompting

A Scrum Master asks an AI assistant in one message: “Turn the top 12 Product Backlog items into refined user stories with acceptance criteria, split into tasks, identify dependencies/risks, propose a Sprint Goal, and draft a Sprint plan.” They include the Product Goal, Definition of Done, and the Developers’ capacity.

The AI response is generic, mixes refinement and planning details, and contains contradictory acceptance criteria. The Developers argue about what to trust and what to discard.

What is the most likely underlying cause of this situation?

  • A. The team accepted AI output without critical review.
  • B. Untrusted text in the prompt caused prompt injection.
  • C. The prompt is overloaded and needs decomposition into smaller steps.
  • D. The AI lacked key product context and constraints.

Best answer: C

Explanation: This is primarily a decomposition problem: the prompt asks for several distinct activities (refinement, tasking, risk analysis, Sprint planning) in one pass. Even with good context, the model can produce blended and internally inconsistent results when the request is not staged. Decomposing into smaller prompts supports iteration and validation of each part before moving on.

Decomposition improves AI output quality when a request contains multiple goals, mixed levels of detail, or needs internal consistency across artifacts. In the scenario, refinement and Sprint planning are different outcomes with different constraints, so a single “do everything” prompt encourages generic filler and contradictions.

A better approach is to split the work into a short dialogue, for example:

  • Refine one backlog item at a time (story + acceptance criteria)
  • Validate with the team and adjust
  • Then (separately) derive tasks/dependencies for selected items
  • Finally, propose a Sprint Goal and a draft plan from the chosen scope

This keeps each step reviewable and reduces confusion about what is reliable before the team uses it in decisions.
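
A rough sketch of that staged dialogue in Python, where `ask_ai` stands in for whatever chat interface the team uses and `review_with_team` marks the human validation gate (both are hypothetical placeholders):

```python
def review_with_team(draft: str) -> str:
    # Placeholder for the human gate: the Scrum Team inspects and edits
    # each draft before it feeds the next prompt.
    return draft

def refine_then_plan(ask_ai, backlog_items: list[str], context: str) -> str:
    stories = []
    for item in backlog_items:
        # Step 1: refine one item at a time.
        draft = ask_ai(f"{context}\nRefine this item into a user story "
                       f"with acceptance criteria:\n{item}")
        stories.append(review_with_team(draft))

    # Step 2: derive tasks/dependencies only from validated stories.
    tasks = review_with_team(
        ask_ai(f"{context}\nDerive tasks and dependencies for:\n{stories}"))

    # Step 3: propose a Sprint Goal and draft plan from the chosen scope.
    return ask_ai(f"{context}\nGiven this reviewed scope:\n{tasks}\n"
                  "Propose one Sprint Goal and a draft Sprint plan.")
```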


Question 4

Topic: AI Security and Ethics

A Scrum Master uses a generative AI tool to draft a Sprint Review invitation that will be emailed to a diverse stakeholder group, including people who use screen readers. The Scrum Master wants to ensure the message is inclusive (neutral language and accessible formatting) before sending it.

Exhibit: AI draft (excerpt)

Hey guys, join us for the Sprint Review!
We'll show the finished work and you can jump in anytime.
See the key results in the image below.
[embedded image with no description]

What is the best evidence/validation step to take before trusting and sending this AI-generated invitation?

  • A. Ask the AI to confirm the message is inclusive and accessible.
  • B. Assume it is inclusive because it sounds friendly and concise.
  • C. Validate it against an authoritative inclusive-language and accessibility checklist/style guide, then revise accordingly.
  • D. Send it as-is and rely on stakeholder feedback to fix issues later.

Best answer: C

Explanation: The most trustworthy way to confirm inclusivity is to validate the draft against explicit, authoritative criteria for neutral language and accessibility. A checklist or style guide turns a subjective judgment into a repeatable verification step (e.g., replacing gendered terms and adding alt text). This supports responsible AI use while keeping human accountability for the final communication.

Responsible AI-assisted communication requires human validation, especially for inclusivity and accessibility. A generative model can produce friendly text that still contains non-neutral terms (e.g., “guys”) or accessibility gaps (e.g., images without alt text). The strongest validation is to compare the draft to an authoritative, predefined standard, such as your organization’s inclusive-language guide and accessibility checklist, then make the needed edits.

A practical validation approach is:

  • Check wording for neutrality and clarity (plain language, avoid idioms).
  • Ensure accessible structure (descriptive links, headings/spacing as applicable).
  • Add text alternatives for non-text content (alt text or a short summary).

Self-assurance from the AI and “it sounds fine” are weak signals; neither provides evidence that the message meets accessibility needs.
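
Part of such a checklist can even be automated. A rough sketch, assuming an illustrative term list rather than a real style guide:

```python
import re

# Illustrative term list and image pattern; a real check would draw on your
# organization's inclusive-language guide and an accessibility standard
# such as WCAG.
NON_NEUTRAL_TERMS = {"guys": "everyone", "manpower": "staffing"}
MISSING_ALT = re.compile(r"\[embedded image with no description\]")

def lint_invitation(draft: str) -> list[str]:
    findings = []
    for term, suggestion in NON_NEUTRAL_TERMS.items():
        if re.search(rf"\b{term}\b", draft, re.IGNORECASE):
            findings.append(f'Replace "{term}" with "{suggestion}".')
    if MISSING_ALT.search(draft):
        findings.append("Add alt text or a short text summary for the image.")
    return findings

print(lint_invitation(
    "Hey guys, join us! [embedded image with no description]"))
```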


Question 5

Topic: AI Security and Ethics

A Developer shares an AI-generated suggestion in Sprint Planning: “It’s safe to commit API keys to a private repo if the repo is access-controlled.” You want to use an AI assistant to help you draft a message to the Scrum Team that corrects the risk without shaming anyone.

Constraints: audience is the Scrum Team; tone is supportive and concise; output must be (1) a short Slack message and (2) a checklist. Do not include proprietary system names, repo links, or customer data. The result must encourage validation against authoritative security guidance and human review.

Which prompt is best?

  • A. Draft a supportive Slack message plus a checklist discouraging committing secrets (e.g., API keys) to source control; include common risks and safer alternatives, explicitly note that this is not definitive security advice, and add a short “verify with” section pointing to our org’s security policy and authoritative references (e.g., OWASP/NIST) with questions to confirm with InfoSec; flag any assumptions and areas needing review; avoid proprietary details.
  • B. Write a definitive policy stating whether committing API keys to a private repo is safe; be confident and do not include caveats so the team will follow it immediately.
  • C. Explain why the AI suggestion is wrong and provide the single best solution; assume the advice is hallucinated and skip consulting security specialists to save time.
  • D. Using our actual repository structure and current access-control settings, generate step-by-step instructions for safely storing API keys in the repo and rotating them; include exact commands and configuration examples.

Best answer: A

Explanation: Hallucinated security guidance is a common risk, so the safest use of AI is to have it produce a draft that is explicitly subject to verification. The best prompt sets confidentiality constraints, asks for assumptions and uncertainties, and directs the team to validate against organizational policy and authoritative sources rather than treating the AI output as final security direction.

The core issue is that AI can produce plausible-sounding but incorrect security advice. A Scrum Master can still use AI effectively by constraining the request to a draft communication and making validation explicit: avoid sharing sensitive context, ask the AI to surface assumptions and uncertainty, and require a “verify with” section that points to the organization’s security policy and recognized guidance (for example OWASP or NIST) and encourages checking with security experts.

This keeps accountability with humans, reduces the chance of acting on hallucinations, and supports empiricism: treat the AI output as a hypothesis to be validated, not as an authoritative decision. The closest traps are prompts that demand certainty or that bypass human/security review.


Question 6

Topic: AI Security and Ethics

A Scrum Team wants to start using a generative AI service to draft Product Backlog Items by summarizing recent customer support tickets. They plan to “just try it” during the next Sprint and paste ticket text into the AI to get suggested stories and acceptance criteria. As Scrum Master, what should you ask to clarify first before deciding whether to involve legal, security, or privacy stakeholders?

  • A. How many Product Backlog Items per day does the Product Owner want generated?
  • B. What types of data from the tickets will be shared, and how is it classified/allowed to be processed?
  • C. Which prompt template will produce the most consistent story format?
  • D. Which Sprint event is the best place to demonstrate the AI output to stakeholders?

Best answer: B

Explanation: The deciding factor for involving legal, security, or privacy is whether sensitive or regulated information will be used and where it will go. Clarifying the data types and classification (e.g., personal data, confidential business data) and any processing constraints establishes the risk level and triggers the right stakeholders. Without that, you cannot make a responsible adoption decision.

In AI adoption, the first clarification should reduce the largest governance risk: exposing data in ways that violate policy, contracts, or privacy/security obligations. Asking what ticket data will be shared (and its classification and handling constraints) reveals whether it includes personal data, confidential customer information, or other restricted content, and whether it can be processed by an external service. That information determines if you must involve privacy/security/legal for guidance on permitted use, safeguards (e.g., redaction, approved environments), and transparency requirements. Questions about formatting, volume, or meeting logistics may matter later, but they do not establish whether the experiment is allowed or safe to run.


Question 7

Topic: AI Security and Ethics

A Product Owner used an AI assistant to propose an initial ordering of the Product Backlog based on recent production incidents and customer feedback themes. Stakeholders ask the Scrum Master how to interpret the AI’s “reasons” for the suggested ordering.

Which statement is INCORRECT and should be avoided?

  • A. State that the Product Owner remains accountable for ordering.
  • B. Communicate key factors and limitations of the AI output.
  • C. Treat the AI rationale as a hypothesis to validate.
  • D. The AI gave reasons, so the ordering is correct.

Best answer: D

Explanation: Explainability is about how an AI arrived at an output, not whether that output is true or appropriate. A responsible Scrum Master helps stakeholders understand the AI’s rationale as input that still needs human judgment and validation. They also maintain transparency about limitations and keep accountability with the humans in Scrum roles.

Explainability (or an AI “reason”) can increase transparency and make it easier to challenge an output, but it does not prove correctness. In a Scrum context, AI suggestions are decision support: the Scrum Team should validate them against real evidence, context, and desired outcomes, and communicate that validation has occurred (or will occur).

What to communicate typically includes:

  • The output was AI-assisted and who made the final decision.
  • The main factors the AI considered and notable limitations/uncertainty.
  • How the suggestion will be validated (data checks, domain review, experiments).

The key takeaway is to avoid equating a plausible explanation with a correct or safe decision.


Question 8

Topic: Effective AI Prompting

A Scrum Master wants to use AI to draft a 90-minute Sprint Retrospective workshop agenda for a 7-person Scrum Team (remote). The team’s focus is reducing cycle time and improving WIP discipline, and the Scrum Master wants the agenda to result in two actionable improvement experiments and an update to the team working agreement.

Which prompt is best to produce an agenda with clear timeboxes and desired outcomes while also supporting validation of the AI output before it is used with the team?

  • A. Create a fun retrospective agenda to help the team improve and keep it engaging.
  • B. Write the final retrospective invitation message I can send to the team with the agenda included.
  • C. List five retrospective activities that can reduce cycle time and explain each one.
  • D. Draft a 90-minute remote Sprint Retrospective agenda for 7 people focused on reducing cycle time and improving WIP discipline; for each section include a timebox and a desired outcome, ensure the flow supports creating two improvement experiments and one working-agreement update, and end with a short checklist of assumptions/risks plus what I should confirm with the team before using it.

Best answer: D

Explanation: The best prompt provides clear context (event, duration, participants, focus) and explicit constraints (timeboxes and desired outcomes tied to specific results). It also requests a lightweight validation aid, such as assumptions/risks and what to confirm with the team, so the Scrum Master can review and adapt the agenda responsibly before use.

Effective prompting for a Scrum workshop agenda means specifying the event, constraints, and the outcomes you want to enable, then adding a built-in way to validate the AI’s draft. In this scenario, the Scrum Master needs a 90-minute remote Sprint Retrospective agenda that produces concrete results (two experiments and a working-agreement update).

A strong prompt typically includes:

  • Audience and context (Scrum Team size, remote/in-person, focus area)
  • Constraints (total time, required outputs, desired facilitation flow)
  • Output format (timeboxed sections with desired outcomes)
  • Validation support (assumptions/risks and what to confirm with the team)

The key is to keep humans accountable: use AI to draft, then verify fit, feasibility, and language with the Scrum Team before running the session.


Question 9

Topic: AI Theory and Primer

During a Sprint Retrospective, Developers debate whether an LLM “knows” the product and can be trusted to write a Sprint summary. You already ensured no confidential data will be entered and you plan to use AI only as assistance. What is the best next step to set an accurate expectation of how the LLM generates its text?

  • A. Explain that it understands intent like a human and reasons to the correct answer
  • B. Explain that it repeatedly predicts the most likely next token from context
  • C. Explain that it searches its training data to find and copy matching sentences
  • D. Explain that it applies fixed grammar rules and a dictionary to build sentences

Best answer: B

Explanation: An LLM generates text by predicting what token should come next given the preceding tokens, based on patterns learned during training. It then appends that token and repeats the process, which can produce fluent text without guaranteeing factual correctness. Sharing this mental model helps the team validate outputs instead of assuming understanding or recall.

At a high level, an LLM is a probabilistic text generator. Given the existing context (the prompt plus any text already generated), it computes a probability distribution over possible next tokens (tokens are chunks of text such as words or subwords). It selects one token (often the highest-probability token or a sampled token), appends it to the context, and repeats this step many times to produce a response. Because this is next-token prediction, the model can sound confident and coherent while still being wrong, so human validation remains necessary even when no sensitive data is used. The key distinction is that it is not “thinking like a person” or looking up truth; it is predicting plausible continuations.
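
To make that mental model concrete, here is a toy next-word predictor. The probability table is invented and vastly simpler than a real LLM, but the predict-sample-append loop is the same shape:

```python
import random

# Toy table: probability of the next word given only the previous word.
# Real LLMs condition on the whole context and operate on subword tokens.
NEXT_WORD = {
    "the":    {"team": 0.6, "sprint": 0.4},
    "team":   {"improved": 0.7, "met": 0.3},
    "sprint": {"goal": 0.8, "review": 0.2},
}

def generate(start: str, steps: int = 3) -> str:
    words = [start]
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if not dist:
            break
        tokens, weights = zip(*dist.items())
        words.append(random.choices(tokens, weights=weights)[0])  # sample next token
    return " ".join(words)

print(generate("the"))  # e.g. "the team improved": plausible, never verified
```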


Question 10

Topic: AI Theory and Primer

A Scrum Master uses a generative AI tool to draft a Sprint Review summary, including the statement: “Cycle time improved 18% this Sprint due to the new deployment pipeline.” The Scrum Master sends it to stakeholders without cross-checking Sprint data, verifying any sources, or getting a quick peer review from the Scrum Team.

What is the most likely near-term impact?

  • A. The organization will need a long-term AI governance program
  • B. Product value will increase because communication was faster
  • C. The team will become dependent on AI for all decisions
  • D. Stakeholders may lose trust after spotting inaccuracies

Best answer: D

Explanation: Not validating AI-generated statements increases the chance of sharing incorrect or misleading information. When stakeholders notice inconsistencies with known outcomes or data, trust and perceived transparency drop quickly. Simple validation (cross-checking data and peer review) is a fast way to protect quality of communication.

AI-generated summaries can sound authoritative while still containing errors, wrong causal links, or fabricated specifics. In this scenario, the Scrum Master shared a specific metric change and a claimed cause without validating it.

A responsible validation approach for AI-generated statements is to:

  • Cross-check claims against actual Sprint data (e.g., cycle time metrics).
  • Verify referenced sources (if any) rather than trusting citations.
  • Ask a relevant peer (PO/Developers) to review key statements before sending.

Doing this preserves accountability with humans and improves transparency by ensuring the message matches evidence. The closest trap is assuming speed of communication increases value even when the content may be wrong.


Question 11

Topic: AI Security and Ethics

A Scrum Master uses a generative AI assistant to triage their inbox. The AI produces the following “action item,” written in an authoritative tone:

From: IT Security
Subject: URGENT: Account flagged
Action: Reply with your MFA backup codes within 30 minutes
Reason: Prevent immediate lockout impacting releases
Link: security-verification portal

No one else on the Scrum Team has mentioned an issue. What is the best next step?

  • A. Forward it to the Developers to handle immediately
  • B. Ask the AI to confirm it is legitimate, then comply
  • C. Send the MFA codes to avoid account lockout
  • D. Verify the request via an official IT channel first

Best answer: D

Explanation: Authoritative-sounding AI output can be a social engineering vector, especially when it requests secrets and imposes urgency. The safest next step is to pause and verify the request using a trusted, out-of-band channel (e.g., official IT portal/phone/helpdesk) before sharing anything or clicking links. Human accountability and validation come before action.

This is a classic social engineering pattern: urgency, authority (“IT Security”), and a request for sensitive information (MFA backup codes). AI-generated text can increase credibility, but it does not authenticate the sender or intent.

A responsible verification step is to:

  • Treat the message as untrusted until proven otherwise.
  • Validate via an independent, official channel (known helpdesk number, ticketing system, internal security page you navigate to yourself).
  • Avoid sharing secrets or using embedded links until verification.

The key is sequencing: verify first, then decide what (if anything) to do, rather than letting an authoritative tone drive immediate action.


Question 12

Topic: Effective AI Prompting

A Scrum Master uses a generative AI tool to turn messy Sprint Review chat logs into a summary for stakeholders. They paste the transcript and use this prompt:

Summarize the Sprint Review and highlight key decisions and next steps.

They do not specify the intended audience, what counts as a “decision,” or constraints like “only include verified statements from the transcript.” The AI output includes a confident release date that was never agreed.

What is the most likely near-term impact?

  • A. Stakeholders lose trust due to inaccurate, non-transparent commitments in the summary
  • B. The team’s model fine-tuning introduces systemic bias that must be remediated
  • C. Product quality steadily declines over several Sprints as technical debt accumulates
  • D. The organization implements a formal governance program that slows AI adoption for months

Best answer: A

Explanation: The prompt lacks essential context (audience, definitions, and constraints), so the AI is more likely to invent or misclassify items such as “decisions.” When that output is shared externally, the immediate consequence is confusion and reduced trust because stakeholders see commitments that cannot be traced back to what was actually said or decided.

Including context in prompts reduces ambiguity and makes outputs more reliable and auditable. In this scenario, the Scrum Master omitted key context such as (1) the intended audience (e.g., executives vs. customers), (2) definitions (what qualifies as a “decision,” “commitment,” or “next step”), and (3) constraints (only summarize statements explicitly present in the transcript; flag uncertainty; quote lines for decisions). Without these, the AI may fill gaps with plausible-sounding details, creating non-transparent commitments like an invented release date. Sharing that summary creates an immediate trust and transparency problem because stakeholders cannot verify what is true and the team must spend time correcting the record. The closest alternatives describe indirect or longer-term outcomes not implied by a single ambiguous prompt.


Question 13

Topic: AI for Scrum Masters

A Scrum Master wants to use generative AI to translate and simplify a Sprint Review summary so it is understandable to “everyone,” including executives and external partners. The summary includes technical terms and a few decisions made during the Sprint.

What is the most important question to ask first before using AI to translate/simplify the message?

  • A. Which AI model produces the most fluent translations for this domain?
  • B. Can the Developers rewrite the summary later if the AI output is unclear?
  • C. Who exactly is the intended audience, and which terms/points must stay unchanged?
  • D. Should we publish the simplified version immediately and collect feedback afterward?

Best answer: C

Explanation: To translate or simplify responsibly, you first clarify who the message is for and what “correct” looks like. Defining the audience (languages, domain knowledge, reading level) and identifying non-negotiable terms and decisions helps the prompt preserve meaning. This also creates clear criteria to validate the AI output before sharing it.

AI can help translate and simplify communication, but it will optimize for what you ask it to produce. If “everyone” is the audience, the first step is to make the audience explicit (e.g., executives vs. external partners) and set meaning-preservation constraints (e.g., keep product names, metrics, decisions, and commitments exact).

Practical clarifications to request before prompting:

  • Intended recipients and their context (language, familiarity, reading level)
  • What must not change (key decisions, dates, risks, agreed outcomes)
  • Terms that must remain verbatim (glossary/acronyms)
  • Success criteria for the rewritten text (tone, length, format)

With these inputs, you can prompt AI to simplify/translate while keeping critical meaning intact and then validate the result against those constraints.


Question 14

Topic: AI for Scrum Masters

A Scrum Master reviews AI-generated user stories. One story says: “As a Project Manager, I want to sign off the Increment before release.” The prompt and provided context contained nothing about a Project Manager approval step, and the Scrum Team does not use such a role in its way of working.

Which AI term best describes this AI output?

  • A. Hallucination
  • B. Prompt injection
  • C. Automation bias
  • D. PII

Best answer: A

Explanation: This is an example of hallucination: the AI produced a believable but ungrounded role and governance step that was not in the input context. A Scrum Master should treat it as a draft, validate it with the Scrum Team and Product Owner, and rewrite the story to reflect the real user/stakeholder and Scrum Team context.

Hallucination is when an AI generates content that appears credible but is not supported by the prompt, source material, or known facts. In this case, the AI introduced a “Project Manager sign-off” step even though it wasn’t provided in the context and doesn’t align with Scrum’s accountability model.

A Scrum Master should respond by:

  • Asking what real user or stakeholder need the story represents
  • Rewriting the story using an appropriate user/persona and outcome (not adding non-existent Scrum roles)
  • Validating the result with the Product Owner and Developers before adding it to the Product Backlog

The key is to keep humans accountable for backlog quality and ensure AI drafts fit the Scrum Team’s context.


Question 15

Topic: AI for Scrum Masters

During Sprint Planning, the Scrum Master pastes the Product Backlog items into a generative AI tool and shares the suggested Sprint Backlog and task breakdown. They say, “The AI already optimized this; let’s just execute,” and quickly move the team to selection. Developers later complain they feel “told what to do,” and the Sprint Goal is treated as the AI’s plan rather than the team’s.

What is the most likely underlying cause of the problem?

  • A. Missing context in the prompt leading to irrelevant AI output
  • B. Automation bias causing the team to defer to AI over ownership
  • C. Prompt injection that intentionally manipulates the AI response
  • D. Sensitive data misuse that creates distrust in the Scrum Team

Best answer: B

Explanation: The key issue is not the quality of the AI output but how it is used: the Scrum Master frames it as the answer and rushes commitment. That encourages deference to AI and reduces the Developers’ and Product Owner’s ownership of planning decisions. An empowering approach would treat AI as an optional input and keep humans accountable for the plan.

This scenario shows automation bias: people over-trust an AI recommendation and treat it as a decision rather than an input. The Scrum Master amplifies that bias by positioning the AI plan as “optimized” and moving the team quickly to execution, which reduces critical thinking and erodes team ownership of the Sprint Goal and Sprint Backlog.

A more empowering use of AI would be to present the output as a draft, ask the Scrum Team to challenge assumptions, adapt the plan to what they know, and explicitly confirm that the Scrum Team owns the decisions and is accountable for the outcomes. The closest distractors focus on output quality or security, but the stem’s main signal is deference and loss of ownership.


Question 16

Topic: AI for Scrum Masters

Midway through a Sprint, the Developers feel “busy but not finishing.” The Scrum Master wants to use an AI assistant to analyze the Sprint Backlog for WIP overload signals and propose a response before the next Daily Scrum.

Exhibit: Sprint Backlog snapshot

Total items in Sprint Backlog: 18
In progress: 12
Blocked: 5
Items in progress > 3 days: 7
Done: 2

Which action should the Scrum Master AVOID?

  • A. Share a minimized, anonymized export of the board with the AI to flag aging WIP and blocked clusters, then review findings with the Developers
  • B. Ask the AI to suggest possible responses (for example, swarming, splitting work, or a temporary WIP limit) and have the Developers validate and choose
  • C. Upload the full Sprint Backlog with assignee names and incident details to an external AI and ask it to reassign work and send instructions to the team
  • D. Use the AI to draft a few neutral facilitation questions and a simple WIP-aging visualization, and disclose that AI assisted the prep

Best answer: C

Explanation: AI can help detect WIP overload patterns, but the Scrum Team remains accountable for decisions and must protect sensitive information. The safest use is to minimize/anonymize inputs, treat AI output as a hypothesis, and bring insights to the Developers to inspect and adapt. Asking AI to make and broadcast assignments crosses both privacy and self-management boundaries.

Using AI to analyze a Sprint Backlog is appropriate when it supports empiricism: it can quickly summarize signals like too many items in progress, long-running work, and clusters of blocked items. Responsible use means sharing only the data needed (data minimization/anonymization), being transparent that AI contributed, and validating outputs with the Scrum Team.

A good pattern is:

  • Provide the AI a reduced snapshot (states, age-in-state, blockers).
  • Ask it to highlight potential WIP overload signals and possible experiments.
  • Use the output to facilitate a conversation; Developers decide how to adjust their plan and collaborate.

The key boundary is that AI should inform the team’s inspection and adaptation, not replace it or expose sensitive Sprint Backlog details.
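
A minimal sketch of the reduced-snapshot idea, using hypothetical item fields: only states, ages, and blocked flags leave the team, never names or incident details:

```python
from datetime import date

# Hypothetical board export: states, start dates, and blocked flags only.
board = [
    {"state": "in_progress", "started": date(2026, 4, 20), "blocked": False},
    {"state": "in_progress", "started": date(2026, 4, 15), "blocked": True},
    {"state": "done",        "started": date(2026, 4, 18), "blocked": False},
]

def wip_signals(items, today, age_limit_days=3):
    """Aggregate counts that can be shared with an AI without exposing
    assignees or incident details."""
    in_progress = [i for i in items if i["state"] == "in_progress"]
    aging = [i for i in in_progress
             if (today - i["started"]).days > age_limit_days]
    return {
        "in_progress": len(in_progress),
        "blocked": sum(1 for i in items if i["blocked"]),
        "aging_over_limit": len(aging),
        "done": sum(1 for i in items if i["state"] == "done"),
    }

print(wip_signals(board, today=date(2026, 4, 26)))
```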


Question 17

Topic: AI for Scrum Masters

A Scrum Master uses generative AI to summarize a tense Sprint Review discussion and emails the summary as “what we agreed.” The Scrum Master does not verify the summary with the Product Owner or stakeholders first, and the AI summary incorrectly attributes a key requirement to one stakeholder.

What is the most likely near-term impact?

  • A. The organization bans AI after a future compliance audit.
  • B. Increment quality degrades months later due to poor architecture choices.
  • C. The team speeds up delivery because decisions are now documented.
  • D. Stakeholders challenge accuracy, reducing trust and requiring re-alignment.

Best answer: D

Explanation: When AI output is treated as authoritative without checking it with the people involved, it can amplify misunderstandings. The quickest consequence is erosion of trust and transparency when someone notices the summary doesn’t match what they said or agreed. The team then has to pause to re-clarify decisions through direct conversation.

Generative AI can produce plausible but incorrect summaries, especially in nuanced or conflict-heavy conversations. If a Scrum Master broadcasts an AI-generated “agreement” without validating it with the Product Owner and participants, the AI can inadvertently rewrite history and amplify misunderstanding. The near-term impact is typically social and coordination-related: trust drops and people spend time re-establishing what was actually decided.

A safer pattern is:

  • Use AI as a draft, not a source of truth.
  • Verify key points through direct conversation with the right people.
  • Be transparent that AI assisted and invite corrections.

This prevents avoidable rework and protects collaboration in the next Sprint.


Question 18

Topic: AI for Scrum Masters

Mid-Sprint, several Developers say they feel overloaded and that too many Product Backlog Items are “in progress.” You want to use a generic AI assistant to analyze the current Sprint Backlog for WIP overload signals and suggest response options.

Before you paste/export any Sprint Backlog details into the AI, what should you ask/verify first?

  • A. Whether the Product Owner wants the AI’s recommendations sent to stakeholders
  • B. Whether Sprint Backlog data may be shared, and what must be redacted
  • C. Which Developers should stop starting work on new items immediately
  • D. What the team’s velocity is for the last three Sprints

Best answer: B

Explanation: Using AI on a Sprint Backlog can be helpful, but the first responsibility is to ensure information is handled appropriately. Confirm whether Sprint Backlog content is allowed to be shared with the chosen AI system and what needs masking or summarizing. Once constraints are clear, you can safely define what “WIP overload signals” to detect and what responses to generate.

The core concept is responsible AI use with human accountability: don’t disclose Sprint Backlog data to an AI system until you understand the organization’s data-handling rules and the sensitivity of the content. Sprint Backlogs can include customer identifiers, incident details, or internal system information; sharing that without approval can violate policy and create security/privacy risk.

After verifying constraints, you can proceed with a safe approach such as:

  • Provide only the minimum necessary, de-identified fields (e.g., counts by workflow state, age of items, blocked reasons).
  • State the success criteria (what patterns constitute WIP overload) and request response options the Scrum Team can decide on.
  • Validate the AI output against real work and let the Developers adapt their plan during the Sprint.

Without the policy/data check, any analysis quality is secondary to the risk of inappropriate disclosure.


Question 19

Topic: AI Theory and Primer

A Scrum Master is asked to help a hiring panel move quickly by using a generative AI assistant to rank shortlisted Developer candidates based on past hiring notes and “typical top-performer traits” from the company. In a trial run, the AI repeatedly recommends candidates with similar backgrounds and downranks others using vague language like “not a culture fit.”

Which approach best addresses what is happening, given this is a high-impact decision?

  • A. Ask the AI to “be objective and ignore protected attributes,” then trust the ranking
  • B. Increase randomness in the AI output to reduce any consistent preference patterns
  • C. Use AI only to summarize evidence, then have humans decide; review outputs for biased patterns
  • D. Proceed with AI ranking to save time, and explain decisions later if challenged

Best answer: C

Explanation: The AI is likely mirroring bias embedded in its training data and the organization’s historical hiring notes, which can surface as stereotyped wording and systematically different recommendations for groups. Because hiring is a high-impact decision, the safest approach is to keep accountability with humans and validate AI-assisted outputs for bias before use.

AI systems learn patterns from training data (and, in this case, also from the organization’s past hiring notes). If those inputs contain imbalances or biased judgments, the model can reproduce them in outputs. Bias can appear as consistently favoring candidates with certain backgrounds, using coded or vague criteria (for example, “culture fit”), or applying different standards in the language used to describe similar evidence.

For a high-impact decision like hiring, a responsible approach is to:

  • Use AI for lower-risk support (summarizing evidence, extracting themes).
  • Keep the final evaluation and decision with accountable humans.
  • Review outputs for disparate patterns and biased phrasing before relying on them.

Prompting for “objectivity” alone does not remove bias; it can still be present in the learned patterns and the provided notes.


Question 20

Topic: AI Theory and Primer

A Scrum Master copies 120 anonymous Sprint Retrospective comments into an AI system and asks it to “group the comments into themes based on similarity, without using predefined labels, and list each theme with representative examples.”

Which type of AI task is the system being asked to perform?

  • A. Generation
  • B. Regression
  • C. Classification
  • D. Clustering

Best answer: D

Explanation: The request is to organize existing comments into groups based on similarity, without supplying a fixed set of categories. That is an unsupervised learning pattern where the model discovers groupings in the data. This aligns with clustering rather than predicting numbers, assigning predefined labels, or creating new content from scratch.

Clustering is used when you want an AI system to find natural groupings in data based on similarity, especially when you do not provide predefined labels. In this scenario, the Scrum Master explicitly asks to group retrospective comments into themes “without using predefined labels,” which maps to clustering.

By contrast:

  • Classification assigns one of known labels (e.g., “Process/People/Tools”) to each comment.
  • Regression predicts a numeric value (e.g., forecast cycle time in days).
  • Generation produces new content (e.g., writing new retrospective comments or a narrative summary), even if it may also summarize grouped items.

The key cue is discovering themes from the data rather than applying an existing label set.
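
For illustration, here is what that clustering request looks like as a minimal scikit-learn sketch (assuming scikit-learn is available); note that no labels are supplied anywhere:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Too much unplanned work interrupted the Sprint",
    "Urgent defects kept pulling people away",
    "Our CI pipeline is slow and flaky",
    "Builds fail randomly and waste time",
]

# Vectorize by word usage, then let K-means discover groupings by similarity.
vectors = TfidfVectorizer().fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster_id, comment in zip(labels, comments):
    print(cluster_id, comment)  # theme ids are discovered, not predefined
```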


Question 21

Topic: AI Security and Ethics

A Scrum Team is experimenting with generative AI to draft code snippets and acceptance criteria. Two Developers have access to an AI tool through a personal subscription, but others (including a remote Developer) cannot access it due to cost and regional restrictions. In Sprint Planning, the two with access consistently arrive with more “finished” proposals and steer decisions.

As Scrum Master, what is the best response to mitigate unfair advantage within the team?

  • A. Allow AI use without constraints because individuals can choose their own productivity tools.
  • B. Ban generative AI for Sprint Planning so everyone contributes at the same baseline.
  • C. Create a working agreement so AI use is transparent and outputs are shared; ensure equitable access via a team-provided option or pairing/training alternative.
  • D. Ask only the two subscribed Developers to use AI and present recommendations to the team.

Best answer: C

Explanation: The core problem is inequitable access creating an advantage that skews collaboration and decisions. A mitigation should preserve transparency and human accountability while ensuring everyone can participate effectively. Establishing explicit team agreements and providing equitable access (or a pairing/training alternative) reduces disadvantage without banning learning and experimentation.

Responsible AI use includes fairness: AI should not create systematic advantage or disadvantage among team members, especially when it affects participation in high-collaboration moments like Sprint Planning. Here, access is unequal (cost/regional restrictions), so the Scrum Master should help the Scrum Team make AI use transparent and inclusive.

A practical mitigation is to:

  • Agree when/where AI may be used and how to disclose it
  • Share AI-generated drafts as team artifacts, not personal leverage
  • Provide equitable access (team-provided option) or compensate via pairing/training and time to review

This keeps decisions human-owned and restores balanced collaboration, instead of relying on individual privilege or prohibiting useful experimentation.


Question 22

Topic: AI for Scrum Masters

A Scrum Master uses a generative AI tool during the Sprint Retrospective to turn the team’s notes into a list of “action items.” Because time is short, the Scrum Master copies the list into the next Sprint plan without the Scrum Team discussing it.

Mid-Sprint, Developers complain they are spending time on vague process changes that don’t address the biggest pain points, and the Product Owner is frustrated because several items compete with Product Backlog priorities.

What is the most likely underlying cause of this situation?

  • A. The team accepted AI-generated action items without human validation and prioritization
  • B. A malicious prompt injection altered the AI output during the Retrospective
  • C. The AI model lacked enough context about the team’s working agreements
  • D. Sensitive information was included in the Retrospective notes sent to the AI

Best answer: A

Explanation: AI can propose action items, but the Scrum Team remains accountable for deciding what to do next. The symptoms here (vague, misaligned, competing work) point to skipping the human step of validating what the AI suggested and prioritizing it against the team’s most important improvements and Product Backlog realities.

The core issue is weak human review: AI-generated Retrospective action items are hypotheses, not decisions. In Scrum, the Scrum Team is accountable for inspecting what happened and adapting in ways that improve effectiveness, which includes agreeing on clear, valuable improvements.

AI output typically needs the team to:

  • Validate accuracy and relevance against what was actually discussed.
  • Make items specific (owner, expected outcome, success measure).
  • Prioritize improvements against capacity and Product Backlog needs.

When that validation and prioritization are skipped, teams often execute low-value or unclear changes and create friction with product priorities, which is exactly what happened here. Missing context or attacks could be issues, but the stem most strongly indicates automation bias and lack of human decision-making.


Question 23

Topic: AI Theory and Primer

A Product Owner asks the Scrum Master for a fast visual to discuss a new feature in refinement. They have only a short user story and a few UI constraints (no real customer data), and they want a rough wireframe-style mockup to spark conversation, not working code.

Which guidance best fits the situation?

  • A. Apply validation: run tests to verify the generated output.
  • B. Use image generation: text constraints in, draft mockup image out.
  • C. Ensure transparency: label the mockup as AI-assisted in artifacts.
  • D. Apply privacy first: strip sensitive data before using any AI.

Best answer: B

Explanation: Because the desired output is a visual wireframe, an image generation system is the best fit. Its typical input is a textual description of the UI (plus constraints and optionally a sketch/reference image), and its output is an image that can be reviewed and iterated. Text or code generation would be less direct for producing a mockup image.

Choose the generation modality based on the artifact you need and what you can provide as input. For a wireframe/mockup, image generation is appropriate because it can take a concise text prompt describing layout, components, and constraints (and optionally a rough sketch/reference) and produce a draft image output. In contrast, text generation is best when the output is narrative content (summaries, drafts, checklists), and code generation is best when the output is code artifacts (functions, tests, configuration) that still require human review and verification. The key is matching input/output expectations to the goal so the AI output is usable in the Scrum conversation.


Question 24

Topic: Effective AI Prompting

A Scrum Master wants to use AI to draft questions for the upcoming Sprint Retrospective. The Scrum Team agrees to focus on one improvement theme: reducing unplanned work entering the Sprint.

Exhibit: Last Sprint signals

Planned PBIs: 18
Unplanned work added mid-Sprint: 6 items
Carryover to next Sprint: 9 PBIs
Urgent defects: 2
Sprint Goal: Not met

Which prompt best generates theme-focused Retrospective questions and includes a strong evidence/validation step to support trusting the AI output?

  • A. Using the signals below, draft 6 Retro questions only on reducing unplanned work; for each, state which signal it validates; replace any that don’t map.
  • B. Create fun icebreakers and Retro questions; optimize for variety and creativity.
  • C. Summarize Retrospective best practices and provide a generic question bank.
  • D. Draft 10 Retrospective questions covering everything that happened last Sprint.

Best answer: A

Explanation: A good prompt names the single improvement theme, provides relevant Sprint evidence, and adds an explicit validation requirement. Mapping each generated question back to a specific data point is a concrete verification step that supports empiricism and helps the team judge relevance. It also makes it easier to discard or revise questions that don’t fit the agreed theme.

For Retrospectives, AI is most useful when it drafts focused questions the Scrum Team can inspect and adapt. A strong prompt should (1) constrain the scope to one improvement theme and (2) require an evidence-based check so the output can be trusted.

In this scenario, the team already has observable signals (unplanned items, carryover, missed Sprint Goal). Asking the AI to generate questions that explicitly tie back to those signals is a practical data-verification step: it makes the reasoning inspectable and reveals where the AI is guessing or drifting off-theme. The Scrum Team still decides which questions to use and what actions to take.

Generic question banks or “be creative” instructions can look useful, but they don’t validate alignment to the chosen theme or to actual Sprint evidence.
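
As an illustration of option A’s pattern, a prompt might pass the exhibit’s signals explicitly and demand the question-to-signal mapping (hypothetical wording, not an official template):

```python
# Signal names mirror the exhibit above; values are last Sprint's data.
SIGNALS = {
    "unplanned_items_added": 6,
    "carryover_pbis": 9,
    "urgent_defects": 2,
    "sprint_goal_met": False,
}

prompt = (
    "Draft 6 Sprint Retrospective questions focused ONLY on reducing "
    "unplanned work entering the Sprint.\n"
    "For each question, state which signal below it helps the team inspect; "
    "replace any question that does not map to a signal.\n"
    f"Signals: {SIGNALS}"
)
```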

Revised on Sunday, April 26, 2026