Practice Scrum.org PSM-AI with free sample questions, timed mock exams, and detailed explanations for Scrum roles, events, and decision-making.
PSM-AI is Scrum.org’s AI Essentials assessment for Scrum Masters who need to use AI responsibly without weakening Scrum fundamentals. If you are searching for PSM-AI sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on the web and continue on iOS or Android with the same account.
Choose PSM-AI when your decisions live inside Scrum Master work: facilitation, coaching, event support, prompting, and ethical AI use around the Scrum Team. If your role is Product Owner, compare PSPO-AI. If you need a broader AI initiative-management route, compare PMI-CPMAI or AIPM.
Start a practice session for Scrum.org Professional Scrum Master - AI Essentials (PSM-AI) below, or open the full app in a new tab for the best experience. In the full app you can navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.
Use on iPhone or Android too: PM Mastery on the App Store or PM Mastery on Google Play using the same account you use on web. The same subscription works across web and mobile.
PSM-AI rewards answers that combine practical AI use with Scrum-friendly leadership, ethical judgment, strong prompt design, and clear awareness of security and governance risks.
| Topic | Weight | Estimated questions |
|---|---|---|
| AI Theory and Primer | 25% | 5 |
| AI Security and Ethics | 25% | 5 |
| AI for Scrum Masters | 25% | 5 |
| Effective AI Prompting | 25% | 5 |
These sample questions include the same mix of single-answer and multiple-response items you should practice for PSM-AI. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.
Topic: AI Security and Ethics
A Scrum Team wants to use an AI assistant to propose edge-case test ideas based on recent production defect reports. The defect reports include customer names, email addresses, and account IDs, but the team only needs the AI to understand the data shape and typical error patterns.
Which practice best fits this situation?
Best answer: B
Explanation: Use synthetic examples when real data contains sensitive information and the prompt’s goal can be achieved without it. Here, the AI needs only the data format and failure patterns, not customer identifiers. Substituting synthetic, representative samples supports privacy and confidentiality while still enabling effective prompting.
The core principle is minimizing sensitive data exposure in AI prompts. If the task can be accomplished with representative structure (fields, ranges, formats, and typical failures) rather than actual customer content, prefer synthetic examples (or heavily generalized samples) to avoid sharing personally identifiable or confidential information.
A practical approach is to share only synthetic or generalized samples that preserve field names, formats, and typical failure patterns while stripping out customer identifiers.
Transparency, human review, and accountability still matter, but they do not mitigate the privacy risk of sending unnecessary real customer data.
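The substitution described above can be sketched in code. The helper below is a hypothetical illustration: it keeps the data shape and error pattern a prompt needs while replacing real identifiers with synthetic values. The field names and formats are assumptions, not part of the exam content.

```python
import random
import string

def synthesize(defect):
    """Return a prompt-safe copy of a defect record: same shape,
    synthetic identifiers, real error pattern preserved."""
    fake_id = "ACC-" + "".join(random.choices(string.digits, k=6))
    return {
        "customer_name": "Customer A",             # real name removed
        "email": "user@example.com",               # real address removed
        "account_id": fake_id,                     # same format, fake value
        "error_pattern": defect["error_pattern"],  # what the AI actually needs
    }

real = {"customer_name": "Jane Doe", "email": "jane@corp.example",
        "account_id": "ACC-492817", "error_pattern": "timeout on bulk export"}
safe = synthesize(real)
print(safe)
```

Only the `safe` record would ever reach the AI; the real record stays inside the team's systems.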
Topic: AI Theory and Primer
A Scrum Master has a 6-page transcript from a Sprint Retrospective. The Scrum Team wants to share a brief recap with management that includes key themes, decisions made, and action items. They explicitly want to avoid adding new ideas or interpretations, and the team will review the output before sending it.
Which AI capability best fits this need without overreach?
Best answer: A
Explanation: The task is to condense existing information into a short recap while avoiding new ideas or interpretation. Summarization is designed for this and can be validated by having the team review the output before sharing it. This aligns with accountable, empiricism-supporting use of AI.
Choose the capability that matches the outcome requested and minimizes the risk of the AI “doing more” than asked. Here, the team needs a shorter version of what already happened: themes, decisions, and action items derived from the transcript. That is a summarization task.
To keep it responsible and empiricism-aligned, constrain the prompt to summarization of the transcript only, and have the Scrum Team review the recap against the source before sharing it.
Other capabilities may be useful in different situations, but they do not directly produce the requested recap and can increase the chance of overreach.
Topic: Effective AI Prompting
A Scrum Master asks an AI assistant in one message: “Turn the top 12 Product Backlog items into refined user stories with acceptance criteria, split into tasks, identify dependencies/risks, propose a Sprint Goal, and draft a Sprint plan.” They include the Product Goal, Definition of Done, and the Developers’ capacity.
The AI response is generic, mixes refinement and planning details, and contains contradictory acceptance criteria. The Developers argue about what to trust and what to discard.
What is the most likely underlying cause of this situation?
Best answer: C
Explanation: This is primarily a decomposition problem: the prompt asks for several distinct activities (refinement, tasking, risk analysis, Sprint planning) in one pass. Even with good context, the model can produce blended and internally inconsistent results when the request is not staged. Decomposing into smaller prompts supports iteration and validation of each part before moving on.
Decomposition improves AI output quality when a request contains multiple goals, mixed levels of detail, or needs internal consistency across artifacts. In the scenario, refinement and Sprint planning are different outcomes with different constraints, so a single “do everything” prompt encourages generic filler and contradictions.
A better approach is to split the work into a short dialogue: refine the stories first, then split them into tasks, then analyze dependencies and risks, and only then propose a Sprint Goal and draft plan, reviewing each output before moving on.
This keeps each step reviewable and reduces confusion about what is reliable before the team uses it in decisions.
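The staged dialogue can be expressed as a loop over smaller prompts. `ask` below is a stand-in for whatever chat-model call you use; here it just returns a labeled draft so the decomposition structure itself is runnable.

```python
def ask(prompt, context):
    """Placeholder for a chat-model call; returns a labeled draft."""
    return f"[draft: {prompt}]"

context = {
    "product_goal": "...",
    "definition_of_done": "...",
    "capacity": "...",
}

# One distinct activity per prompt, reviewed before the next step runs.
steps = [
    "Refine the top Product Backlog items into user stories with acceptance criteria",
    "Split the approved stories into tasks",
    "Identify dependencies and risks in the approved tasks",
    "Propose a Sprint Goal and draft a Sprint plan from the validated pieces",
]

drafts = []
for step in steps:
    draft = ask(step, context)
    # Human review and adjustment happen here before continuing.
    drafts.append(draft)

print(len(drafts), "reviewable drafts instead of one blended answer")
```

Each iteration produces a small, checkable artifact, which is exactly what the single "do everything" prompt failed to do.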
Topic: AI Security and Ethics
A Scrum Master uses a generative AI tool to draft a Sprint Review invitation that will be emailed to a diverse stakeholder group, including people who use screen readers. The Scrum Master wants to ensure the message is inclusive (neutral language and accessible formatting) before sending it.
Exhibit: AI draft (excerpt)
Hey guys, join us for the Sprint Review!
We'll show the finished work and you can jump in anytime.
See the key results in the image below.
[embedded image with no description]
What is the best evidence/validation step to take before trusting and sending this AI-generated invitation?
Best answer: C
Explanation: The most trustworthy way to confirm inclusivity is to validate the draft against explicit, authoritative criteria for neutral language and accessibility. A checklist or style guide turns a subjective judgment into a repeatable verification step (e.g., replacing gendered terms and adding alt text). This supports responsible AI use while keeping human accountability for the final communication.
Responsible AI-assisted communication requires human validation, especially for inclusivity and accessibility. A generative model can produce friendly text that still contains non-neutral terms (e.g., “guys”) or accessibility gaps (e.g., images without alt text). The strongest validation is to compare the draft to an authoritative, predefined standard, such as your organization’s inclusive-language guide and accessibility checklist, then make the needed edits.
A practical validation approach is to check the draft against the inclusive-language guide and accessibility checklist, fix what fails (for example, replace “guys” with a neutral greeting and add a description or alt text for the embedded image), and only then send.
Self-assurance from the AI or “it sounds fine” are weak signals and don’t provide evidence that the message meets accessibility needs.
Topic: AI Security and Ethics
A Developer shares an AI-generated suggestion in Sprint Planning: “It’s safe to commit API keys to a private repo if the repo is access-controlled.” You want to use an AI assistant to help you draft a message to the Scrum Team that corrects the risk without shaming anyone.
Constraints: audience is the Scrum Team; tone is supportive and concise; output must be (1) a short Slack message and (2) a checklist. Do not include proprietary system names, repo links, or customer data. The result must encourage validation against authoritative security guidance and human review.
Which prompt is best?
Best answer: A
Explanation: Hallucinated security guidance is a common risk, so the safest use of AI is to have it produce a draft that is explicitly subject to verification. The best prompt sets confidentiality constraints, asks for assumptions and uncertainties, and directs the team to validate against organizational policy and authoritative sources rather than treating the AI output as final security direction.
The core issue is that AI can produce plausible-sounding but incorrect security advice. A Scrum Master can still use AI effectively by constraining the request to a draft communication and making validation explicit: avoid sharing sensitive context, ask the AI to surface assumptions and uncertainty, and require a “verify with” section that points to the organization’s security policy and recognized guidance (for example OWASP or NIST) and encourages checking with security experts.
This keeps accountability with humans, reduces the chance of acting on hallucinations, and supports empiricism: treat the AI output as a hypothesis to be validated, not as an authoritative decision. The closest traps are prompts that demand certainty or that bypass human/security review.
Topic: AI Security and Ethics
A Scrum Team wants to start using a generative AI service to draft Product Backlog Items by summarizing recent customer support tickets. They plan to “just try it” during the next Sprint and paste ticket text into the AI to get suggested stories and acceptance criteria. As Scrum Master, what should you ask to clarify first before deciding whether to involve legal, security, or privacy stakeholders?
Best answer: B
Explanation: The deciding factor for involving legal, security, or privacy is whether sensitive or regulated information will be used and where it will go. Clarifying the data types and classification (e.g., personal data, confidential business data) and any processing constraints establishes the risk level and triggers the right stakeholders. Without that, you cannot make a responsible adoption decision.
In AI adoption, the first clarification should reduce the largest governance risk: exposing data in ways that violate policy, contracts, or privacy/security obligations. Asking what ticket data will be shared (and its classification and handling constraints) reveals whether it includes personal data, confidential customer information, or other restricted content, and whether it can be processed by an external service. That information determines if you must involve privacy/security/legal for guidance on permitted use, safeguards (e.g., redaction, approved environments), and transparency requirements. Questions about formatting, volume, or meeting logistics may matter later, but they do not establish whether the experiment is allowed or safe to run.
Topic: AI Security and Ethics
A Product Owner used an AI assistant to propose an initial ordering of the Product Backlog based on recent production incidents and customer feedback themes. Stakeholders ask the Scrum Master how to interpret the AI’s “reasons” for the suggested ordering.
Which statement is INCORRECT and should be avoided?
Best answer: D
Explanation: Explainability is about how an AI arrived at an output, not whether that output is true or appropriate. A responsible Scrum Master helps stakeholders understand the AI’s rationale as input that still needs human judgment and validation. They also maintain transparency about limitations and keep accountability with the humans in Scrum roles.
Explainability (or an AI “reason”) can increase transparency and make it easier to challenge an output, but it does not prove correctness. In a Scrum context, AI suggestions are decision support: the Scrum Team should validate them against real evidence, context, and desired outcomes, and communicate that validation has occurred (or will occur).
What to communicate typically includes that the AI’s stated reasons are inputs rather than proof, that the Product Owner remains accountable for ordering the Product Backlog, and that the team will validate the suggestion against real evidence before acting on it.
The key takeaway is to avoid equating a plausible explanation with a correct or safe decision.
Topic: Effective AI Prompting
A Scrum Master wants to use AI to draft a 90-minute Sprint Retrospective workshop agenda for a 7-person Scrum Team (remote). The team’s focus is reducing cycle time and improving WIP discipline, and the Scrum Master wants the agenda to result in two actionable improvement experiments and an update to the team working agreement.
Which prompt is best to produce an agenda with clear timeboxes and desired outcomes while also supporting validation of the AI output before it is used with the team?
Best answer: D
Explanation: The best prompt provides clear context (event, duration, participants, focus) and explicit constraints (timeboxes and desired outcomes tied to specific results). It also requests a lightweight validation aid, such as assumptions/risks and what to confirm with the team, so the Scrum Master can review and adapt the agenda responsibly before use.
Effective prompting for a Scrum workshop agenda means specifying the event, constraints, and the outcomes you want to enable, then adding a built-in way to validate the AI’s draft. In this scenario, the Scrum Master needs a 90-minute remote Sprint Retrospective agenda that produces concrete results (two experiments and a working-agreement update).
A strong prompt typically includes the event and its duration, the participants and format (seven people, remote), the improvement focus (cycle time and WIP discipline), the required outcomes (two experiments and a working-agreement update), timeboxes for each agenda segment, and a request for the AI to list its assumptions, risks, and what to confirm with the team.
The key is to keep humans accountable: use AI to draft, then verify fit, feasibility, and language with the Scrum Team before running the session.
Topic: AI Theory and Primer
During a Sprint Retrospective, Developers debate whether an LLM “knows” the product and can be trusted to write a Sprint summary. You already ensured no confidential data will be entered and you plan to use AI only as assistance. What is the best next step to set an accurate expectation of how the LLM generates its text?
Best answer: B
Explanation: An LLM generates text by predicting what token should come next given the preceding tokens, based on patterns learned during training. It then appends that token and repeats the process, which can produce fluent text without guaranteeing factual correctness. Sharing this mental model helps the team validate outputs instead of assuming understanding or recall.
At a high level, an LLM is a probabilistic text generator. Given the existing context (the prompt plus any text already generated), it computes a probability distribution over possible next tokens (tokens are chunks of text such as words or subwords). It selects one token (often the highest-probability token or a sampled token), appends it to the context, and repeats this step many times to produce a response. Because this is next-token prediction, the model can sound confident and coherent while still being wrong, so human validation remains necessary even when no sensitive data is used. The key distinction is that it is not “thinking like a person” or looking up truth; it is predicting plausible continuations.
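The generation loop described above can be illustrated with a toy probability table in place of a trained model. The vocabulary and probabilities below are invented for illustration; a real LLM computes the distribution with a neural network over tens of thousands of tokens, but the append-and-repeat loop is the same.

```python
import random

# Toy next-token table: maps the most recent token to a probability
# distribution over possible next tokens. Entirely made up.
NEXT_TOKEN_PROBS = {
    "the": {"sprint": 0.6, "team": 0.4},
    "sprint": {"review": 0.7, "goal": 0.3},
    "review": {"<end>": 1.0},
    "team": {"<end>": 1.0},
    "goal": {"<end>": 1.0},
}

def generate(context, greedy=True, max_tokens=10):
    """Repeatedly pick a next token and append it, LLM-style."""
    tokens = context.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:
            break
        if greedy:  # pick the highest-probability token
            token = max(dist, key=dist.get)
        else:       # or sample proportionally to probability
            token = random.choices(list(dist), weights=dist.values())[0]
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate("the"))  # greedy path: "the sprint review"
```

Note that nothing in the loop checks whether the output is true; it only extends the text plausibly, which is why human validation remains necessary.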
Topic: AI Theory and Primer
A Scrum Master uses a generative AI tool to draft a Sprint Review summary, including the statement: “Cycle time improved 18% this Sprint due to the new deployment pipeline.” The Scrum Master sends it to stakeholders without cross-checking Sprint data, verifying any sources, or getting a quick peer review from the Scrum Team.
What is the most likely near-term impact?
Best answer: D
Explanation: Not validating AI-generated statements increases the chance of sharing incorrect or misleading information. When stakeholders notice inconsistencies with known outcomes or data, trust and perceived transparency drop quickly. Simple validation (cross-checking data and peer review) is a fast way to protect quality of communication.
AI-generated summaries can sound authoritative while still containing errors, wrong causal links, or fabricated specifics. In this scenario, the Scrum Master shared a specific metric change and a claimed cause without validating it.
A responsible validation approach for AI-generated statements is to cross-check any metric against the actual Sprint data, verify the claimed cause-and-effect, and get a quick peer review from the Scrum Team before the message goes out.
Doing this preserves accountability with humans and improves transparency by ensuring the message matches evidence. The closest trap is assuming speed of communication increases value even when the content may be wrong.
Topic: AI Security and Ethics
A Scrum Master uses a generative AI assistant to triage their inbox. The AI produces the following “action item,” written in an authoritative tone:
From: IT Security
Subject: URGENT: Account flagged
Action: Reply with your MFA backup codes within 30 minutes
Reason: Prevent immediate lockout impacting releases
Link: security-verification portal
No one else on the Scrum Team has mentioned an issue. What is the best next step?
Best answer: D
Explanation: Authoritative-sounding AI output can be a social engineering vector, especially when it requests secrets and imposes urgency. The safest next step is to pause and verify the request using a trusted, out-of-band channel (e.g., official IT portal/phone/helpdesk) before sharing anything or clicking links. Human accountability and validation come before action.
This is a classic social engineering pattern: urgency, authority (“IT Security”), and a request for sensitive information (MFA backup codes). AI-generated text can increase credibility, but it does not authenticate the sender or intent.
A responsible verification step is to contact IT Security through a trusted, out-of-band channel (the official portal, phone number, or helpdesk) without clicking the link or sharing any codes.
The key is sequencing: verify first, then decide what (if anything) to do, rather than letting an authoritative tone drive immediate action.
Topic: Effective AI Prompting
A Scrum Master uses a generative AI tool to turn messy Sprint Review chat logs into a summary for stakeholders. They paste the transcript and use this prompt:
Summarize the Sprint Review and highlight key decisions and next steps.
They do not specify the intended audience, what counts as a “decision,” or constraints like “only include verified statements from the transcript.” The AI output includes a confident release date that was never agreed.
What is the most likely near-term impact?
Best answer: A
Explanation: The prompt lacks essential context (audience, definitions, and constraints), so the AI is more likely to invent or misclassify items such as “decisions.” When that output is shared externally, the immediate consequence is confusion and reduced trust because stakeholders see commitments that cannot be traced back to what was actually said or decided.
Including context in prompts reduces ambiguity and makes outputs more reliable and auditable. In this scenario, the Scrum Master omitted key context such as (1) the intended audience (e.g., executives vs. customers), (2) definitions (what qualifies as a “decision,” “commitment,” or “next step”), and (3) constraints (only summarize statements explicitly present in the transcript; flag uncertainty; quote lines for decisions). Without these, the AI may fill gaps with plausible-sounding details, creating non-transparent commitments like an invented release date. Sharing that summary creates an immediate trust and transparency problem because stakeholders cannot verify what is true and the team must spend time correcting the record. The closest alternatives describe indirect or longer-term outcomes not implied by a single ambiguous prompt.
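The missing context can be built into the prompt itself. The template below is a hypothetical example showing where the audience, definitions, and constraints go; none of its wording comes from the exam.

```python
# Hypothetical prompt template making audience, definitions, and
# constraints explicit, so the model has less room to invent details.
PROMPT = """Summarize the Sprint Review transcript below for {audience}.
A "decision" is something explicitly agreed in the transcript; quote the line.
Only include statements present in the transcript; flag anything uncertain.
Do not state dates, commitments, or numbers that were not said verbatim.

Transcript:
{transcript}
"""

filled = PROMPT.format(audience="executive stakeholders",
                       transcript="(pasted chat log)")
print(filled)
```

With explicit definitions and a verbatim-only constraint, an invented release date becomes a detectable violation of the prompt rather than a plausible filler.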
Topic: AI for Scrum Masters
A Scrum Master wants to use generative AI to translate and simplify a Sprint Review summary so it is understandable to “everyone,” including executives and external partners. The summary includes technical terms and a few decisions made during the Sprint.
What is the most important question to ask first before using AI to translate/simplify the message?
Best answer: C
Explanation: To translate or simplify responsibly, you first clarify who the message is for and what “correct” looks like. Defining the audience (languages, domain knowledge, reading level) and identifying non-negotiable terms and decisions helps the prompt preserve meaning. This also creates clear criteria to validate the AI output before sharing it.
AI can help translate and simplify communication, but it will optimize for what you ask it to produce. If “everyone” is the audience, the first step is to make the audience explicit (e.g., executives vs. external partners) and set meaning-preservation constraints (e.g., keep product names, metrics, decisions, and commitments exact).
Practical clarifications to request before prompting include the target languages, the readers’ domain knowledge and reading level, and which terms, metrics, and decisions must be preserved exactly.
With these inputs, you can prompt AI to simplify/translate while keeping critical meaning intact and then validate the result against those constraints.
Topic: AI for Scrum Masters
A Scrum Master reviews AI-generated user stories. One story says: “As a Project Manager, I want to sign off the Increment before release.” The prompt and provided context contained nothing about a Project Manager approval step, and the Scrum Team does not use such a role in its way of working.
Which AI term best describes this AI output?
Best answer: A
Explanation: This is an example of hallucination: the AI produced a believable but ungrounded role and governance step that was not in the input context. A Scrum Master should treat it as a draft, validate it with the Scrum Team and Product Owner, and rewrite the story to reflect the real user/stakeholder and Scrum Team context.
Hallucination is when an AI generates content that appears credible but is not supported by the prompt, source material, or known facts. In this case, the AI introduced a “Project Manager sign-off” step even though it wasn’t provided in the context and doesn’t align with Scrum’s accountability model.
A Scrum Master should respond by treating the story as an unvalidated draft, confirming the real user or stakeholder with the Product Owner and Scrum Team, and rewriting the story to match the team’s actual way of working.
The key is to keep humans accountable for backlog quality and ensure AI drafts fit the Scrum Team’s context.
Topic: AI for Scrum Masters
During Sprint Planning, the Scrum Master pastes the Product Backlog items into a generative AI tool and shares the suggested Sprint Backlog and task breakdown. They say, “The AI already optimized this; let’s just execute,” and quickly move the team to selection. Developers later complain they feel “told what to do,” and the Sprint Goal is treated as the AI’s plan rather than the team’s.
What is the most likely underlying cause of the problem?
Best answer: B
Explanation: The key issue is not the quality of the AI output but how it is used: the Scrum Master frames it as the answer and rushes commitment. That encourages deference to AI and reduces the Developers’ and Product Owner’s ownership of planning decisions. An empowering approach would treat AI as an optional input and keep humans accountable for the plan.
This scenario shows automation bias: people over-trust an AI recommendation and treat it as a decision rather than an input. The Scrum Master amplifies that bias by positioning the AI plan as “optimized” and moving the team quickly to execution, which reduces critical thinking and erodes team ownership of the Sprint Goal and Sprint Backlog.
A more empowering use of AI would be to present the output as a draft, ask the Scrum Team to challenge assumptions, adapt the plan to what they know, and explicitly confirm that the Scrum Team owns the decisions and is accountable for the outcomes. The closest distractors focus on output quality or security, but the stem’s main signal is deference and loss of ownership.
Topic: AI for Scrum Masters
Midway through a Sprint, the Developers feel “busy but not finishing.” The Scrum Master wants to use an AI assistant to analyze the Sprint Backlog for WIP overload signals and propose a response before the next Daily Scrum.
Exhibit: Sprint Backlog snapshot
Total items in Sprint Backlog: 18
In progress: 12
Blocked: 5
Items in progress > 3 days: 7
Done: 2
Which action SHOULD BE AVOIDED?
Best answer: C
Explanation: AI can help detect WIP overload patterns, but the Scrum Team remains accountable for decisions and must protect sensitive information. The safest use is to minimize/anonymize inputs, treat AI output as a hypothesis, and bring insights to the Developers to inspect and adapt. Asking AI to make and broadcast assignments crosses both privacy and self-management boundaries.
Using AI to analyze a Sprint Backlog is appropriate when it supports empiricism: it can quickly summarize signals like too many items in progress, long-running work, and clusters of blocked items. Responsible use means sharing only the data needed (data minimization/anonymization), being transparent that AI contributed, and validating outputs with the Scrum Team.
A good pattern is to share only minimized or anonymized counts, ask the AI to flag possible WIP-overload signals as hypotheses, and bring the insights to the Developers so they can inspect and decide how to adapt.
The key boundary is that AI should inform the team’s inspection and adaptation, not replace it or expose sensitive Sprint Backlog details.
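The snapshot in the exhibit contains only anonymized counts, which is enough for a simple overload check even before any AI is involved. The thresholds below are illustrative assumptions (a real team would use its own WIP limits), and any flagged signal is a hypothesis for the Developers to inspect, not an instruction.

```python
def wip_signals(snapshot, wip_limit=6):
    """Flag WIP-overload signals from anonymized Sprint Backlog counts.
    Thresholds are hypothetical examples, not Scrum rules."""
    signals = []
    if snapshot["in_progress"] > wip_limit:
        signals.append(f"in progress ({snapshot['in_progress']}) "
                       f"exceeds WIP limit ({wip_limit})")
    if snapshot["blocked"] >= 3:
        signals.append(f"{snapshot['blocked']} items blocked")
    if snapshot["aging"] > snapshot["done"]:
        signals.append("more aging items (>3 days in progress) than finished items")
    return signals

# Counts from the exhibit; no item titles, names, or customer data shared.
snapshot = {"total": 18, "in_progress": 12, "blocked": 5, "aging": 7, "done": 2}
for s in wip_signals(snapshot):
    print("-", s)
```

Because only counts leave the team, this respects data minimization while still giving the Daily Scrum something concrete to inspect.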
Topic: AI for Scrum Masters
A Scrum Master uses generative AI to summarize a tense Sprint Review discussion and emails the summary as “what we agreed.” The Scrum Master does not verify the summary with the Product Owner or stakeholders first, and the AI summary incorrectly attributes a key requirement to one stakeholder.
What is the most likely near-term impact?
Best answer: D
Explanation: When AI output is treated as authoritative without checking it with the people involved, it can amplify misunderstandings. The quickest consequence is erosion of trust and transparency when someone notices the summary doesn’t match what they said or agreed. The team then has to pause to re-clarify decisions through direct conversation.
Generative AI can produce plausible but incorrect summaries, especially in nuanced or conflict-heavy conversations. If a Scrum Master broadcasts an AI-generated “agreement” without validating it with the Product Owner and participants, the AI can inadvertently rewrite history and amplify misunderstanding. The near-term impact is typically social and coordination-related: trust drops and people spend time re-establishing what was actually decided.
A safer pattern is to label the AI summary as a draft, verify it with the Product Owner and the stakeholders involved, and only then share it as the agreed record.
This prevents avoidable rework and protects collaboration in the next Sprint.
Topic: AI for Scrum Masters
Mid-Sprint, several Developers say they feel overloaded and that too many Product Backlog Items are “in progress.” You want to use a generic AI assistant to analyze the current Sprint Backlog for WIP overload signals and suggest response options.
Before you paste/export any Sprint Backlog details into the AI, what should you ask/verify first?
Best answer: B
Explanation: Using AI on a Sprint Backlog can be helpful, but the first responsibility is to ensure information is handled appropriately. Confirm whether Sprint Backlog content is allowed to be shared with the chosen AI system and what needs masking or summarizing. Once constraints are clear, you can safely define what “WIP overload signals” to detect and what responses to generate.
The core concept is responsible AI use with human accountability: don’t disclose Sprint Backlog data to an AI system until you understand the organization’s data-handling rules and the sensitivity of the content. Sprint Backlogs can include customer identifiers, incident details, or internal system information; sharing that without approval can violate policy and create security/privacy risk.
After verifying constraints, you can proceed with a safe approach such as masking or summarizing sensitive fields, defining the WIP-overload signals you want the AI to detect, and reviewing its suggested responses with the Developers.
Without the policy/data check, any analysis quality is secondary to the risk of inappropriate disclosure.
Topic: AI Theory and Primer
A Scrum Master is asked to help a hiring panel move quickly by using a generative AI assistant to rank shortlisted Developer candidates based on past hiring notes and “typical top-performer traits” from the company. In a trial run, the AI repeatedly recommends candidates with similar backgrounds and downranks others using vague language like “not a culture fit.”
Which approach best addresses what is happening, given this is a high-impact decision?
Best answer: C
Explanation: The AI is likely mirroring bias embedded in its training data and the organization’s historical hiring notes, which can surface as stereotyped wording and systematically different recommendations for groups. Because hiring is a high-impact decision, the safest approach is to keep accountability with humans and validate AI-assisted outputs for bias before use.
AI systems learn patterns from training data (and, in this case, also from the organization’s past hiring notes). If those inputs contain imbalances or biased judgments, the model can reproduce them in outputs. Bias can appear as consistently favoring candidates with certain backgrounds, using coded or vague criteria (for example, “culture fit”), or applying different standards in the language used to describe similar evidence.
For a high-impact decision like hiring, a responsible approach is to pause the AI-assisted ranking, audit its recommendations and the underlying notes for bias, and keep the final evaluation and decision with the human panel.
Prompting for “objectivity” alone does not remove bias; it can still be present in the learned patterns and the provided notes.
Topic: AI Theory and Primer
A Scrum Master copies 120 anonymous Sprint Retrospective comments into an AI system and asks it to “group the comments into themes based on similarity, without using predefined labels, and list each theme with representative examples.”
Which type of AI task is the system being asked to perform?
Best answer: D
Explanation: The request is to organize existing comments into groups based on similarity, without supplying a fixed set of categories. That is an unsupervised learning pattern where the model discovers groupings in the data. This aligns with clustering rather than predicting numbers, assigning predefined labels, or creating new content from scratch.
Clustering is used when you want an AI system to find natural groupings in data based on similarity, especially when you do not provide predefined labels. In this scenario, the Scrum Master explicitly asks to group retrospective comments into themes “without using predefined labels,” which maps to clustering.
By contrast, regression predicts numeric values, classification assigns items to predefined labels, and generation creates new content rather than organizing existing content.
The key cue is discovering themes from the data rather than applying an existing label set.
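The discover-groups-from-similarity pattern can be sketched with a toy word-overlap measure. Real systems would use embeddings and a proper clustering algorithm, but the essential behavior is the same: no labels are supplied, and themes emerge from the data. The comments and threshold below are made up for illustration.

```python
def jaccard(a, b):
    """Word-overlap similarity between two comments (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster(comments, threshold=0.2):
    """Greedy clustering: put each comment into the first theme it
    resembles, or start a new theme. No predefined labels anywhere."""
    themes = []  # each theme is a list of similar comments
    for c in comments:
        for theme in themes:
            if jaccard(c, theme[0]) >= threshold:
                theme.append(c)
                break
        else:
            themes.append([c])
    return themes

comments = [
    "too many meetings each day",
    "daily meetings run too long",
    "deploy pipeline keeps failing",
    "the deploy pipeline failed again",
]
themes = cluster(comments)
for theme in themes:
    print(theme)
```

Here two themes emerge (meetings, deploy pipeline) purely from similarity, which is the cue that distinguishes clustering from classification against a fixed label set.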
Topic: AI Security and Ethics
A Scrum Team is experimenting with generative AI to draft code snippets and acceptance criteria. Two Developers have access to an AI tool through a personal subscription, but others (including a remote Developer) cannot access it due to cost and regional restrictions. In Sprint Planning, the two with access consistently arrive with more “finished” proposals and steer decisions.
As Scrum Master, what is the best response to mitigate unfair advantage within the team?
Best answer: C
Explanation: The core problem is inequitable access creating an advantage that skews collaboration and decisions. A mitigation should preserve transparency and human accountability while ensuring everyone can participate effectively. Establishing explicit team agreements and providing equitable access (or a pairing/training alternative) reduces disadvantage without banning learning and experimentation.
Responsible AI use includes fairness: AI should not create systematic advantage or disadvantage among team members, especially when it affects participation in high-collaboration moments like Sprint Planning. Here, access is unequal (cost/regional restrictions), so the Scrum Master should help the Scrum Team make AI use transparent and inclusive.
A practical mitigation is to make AI use transparent in the team’s working agreement and to provide equitable access through an approved team tool, or pairing and training where access is restricted.
This keeps decisions human-owned and restores balanced collaboration, instead of relying on individual privilege or prohibiting useful experimentation.
Topic: AI for Scrum Masters
A Scrum Master uses a generative AI tool during the Sprint Retrospective to turn the team’s notes into a list of “action items.” Because time is short, the Scrum Master copies the list into the next Sprint plan without the Scrum Team discussing it.
Mid-Sprint, Developers complain they are spending time on vague process changes that don’t address the biggest pain points, and the Product Owner is frustrated because several items compete with Product Backlog priorities.
What is the most likely underlying cause of this situation?
Best answer: A
Explanation: AI can propose action items, but the Scrum Team remains accountable for deciding what to do next. The symptoms here (vague, misaligned, competing work) point to skipping the human step of validating what the AI suggested and prioritizing it against the team’s most important improvements and Product Backlog realities.
The core issue is weak human review: AI-generated Retrospective action items are hypotheses, not decisions. In Scrum, the Scrum Team is accountable for inspecting what happened and adapting in ways that improve effectiveness, which includes agreeing on clear, valuable improvements.
AI output typically needs the team to:
- Validate each proposed action item for clarity and real value.
- Check alignment with the team’s biggest pain points and with Product Backlog priorities.
- Prioritize together and agree on which improvements to actually pursue.
When that validation and prioritization are skipped, teams often execute low-value or unclear changes and create friction with product priorities, which is exactly what happened here. Missing context or prompt attacks could be contributing issues in other scenarios, but this stem most strongly indicates automation bias and a lack of human decision-making.
Topic: AI Theory and Primer
A Product Owner asks the Scrum Master for a fast visual to discuss a new feature in refinement. They have only a short user story and a few UI constraints (no real customer data), and they want a rough wireframe-style mockup to spark conversation, not working code.
Which guidance best fits the situation?
Best answer: B
Explanation: Because the desired output is a visual wireframe, an image generation system is the best fit. Its typical input is a textual description of the UI (plus constraints and optionally a sketch/reference image), and its output is an image that can be reviewed and iterated. Text or code generation would be less direct for producing a mockup image.
Choose the generation modality based on the artifact you need and what you can provide as input. For a wireframe/mockup, image generation is appropriate because it can take a concise text prompt describing layout, components, and constraints (and optionally a rough sketch/reference) and produce a draft image output. In contrast, text generation is best when the output is narrative content (summaries, drafts, checklists), and code generation is best when the output is code artifacts (functions, tests, configuration) that still require human review and verification. The key is matching input/output expectations to the goal so the AI output is usable in the Scrum conversation.
Topic: Effective AI Prompting
A Scrum Master wants to use AI to draft questions for the upcoming Sprint Retrospective. The Scrum Team agrees to focus on one improvement theme: reducing unplanned work entering the Sprint.
Exhibit: Last Sprint signals
Planned PBIs: 18
Unplanned work added mid-Sprint: 6 items
Carryover to next Sprint: 9 PBIs
Urgent defects: 2
Sprint Goal: Not met
Which prompt best generates theme-focused Retrospective questions and includes a strong evidence/validation step to support trusting the AI output?
Best answer: A
Explanation: A good prompt names the single improvement theme, provides relevant Sprint evidence, and adds an explicit validation requirement. Mapping each generated question back to a specific data point is a concrete verification step that supports empiricism and helps the team judge relevance. It also makes it easier to discard or revise questions that don’t fit the agreed theme.
For Retrospectives, AI is most useful when it drafts focused questions the Scrum Team can inspect and adapt. A strong prompt should (1) constrain the scope to one improvement theme and (2) require an evidence-based check so the output can be trusted.
In this scenario, the team already has observable signals (unplanned items, carryover, missed Sprint Goal). Asking the AI to generate questions that explicitly tie back to those signals is a practical data-verification step: it makes the reasoning inspectable and reveals where the AI is guessing or drifting off-theme. The Scrum Team still decides which questions to use and what actions to take.
Generic question banks or “be creative” instructions can look useful, but they don’t validate alignment to the chosen theme or to actual Sprint evidence.
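As an illustrative sketch (not part of the exam content), the prompt pattern described above can be assembled from the exhibit data. The theme, signal names, and exact wording below are assumptions drawn from this scenario, not a required template:

```python
# Illustrative sketch: building a theme-focused Retrospective prompt
# that requires each generated question to cite a specific Sprint signal.
# Theme and signals mirror the exhibit above; phrasing is an assumption.

theme = "reducing unplanned work entering the Sprint"
signals = {
    "Planned PBIs": 18,
    "Unplanned work added mid-Sprint": "6 items",
    "Carryover to next Sprint": "9 PBIs",
    "Urgent defects": 2,
    "Sprint Goal": "Not met",
}

# Turn the exhibit into an evidence list the AI must reference.
evidence = "\n".join(f"- {name}: {value}" for name, value in signals.items())

prompt = (
    f"Generate 5 Sprint Retrospective questions focused only on {theme}.\n"
    f"Use this Sprint evidence:\n{evidence}\n"
    "Validation requirement: after each question, name the specific data "
    "point it is based on. Omit any question you cannot tie to the evidence."
)

print(prompt)
```

The validation requirement at the end is what makes the output inspectable: the Scrum Team can quickly discard any question whose cited data point does not support it.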