Try 10 focused PSM-AI questions on Effective AI Prompting, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PSM-AI |
| Topic area | Effective AI Prompting |
| Blueprint weight | 25% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Effective AI Prompting for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Effective AI Prompting
A Scrum Team started using a generative AI assistant during refinement and to draft test-case ideas. After two Sprints, the Scrum Master reports success using metrics like “prompts per day” and “AI-generated items per Sprint.” However, cycle time has not improved and stakeholders are reporting more escaped defects.
What is the best next step to determine whether AI use is improving outcomes rather than just increasing activity?
Best answer: A
What this tests: Effective AI Prompting
Explanation: Counting AI activity (prompts, AI-generated items) does not show whether delivery outcomes improved. The next step is to agree on a small set of outcome-based measures tied to the team’s goals and baseline them. Then the Scrum Team can inspect and adapt based on observed changes across Sprints while keeping humans accountable for results.
To measure whether AI is helping, treat its use as an experiment and evaluate it with outcome metrics that matter to stakeholders and product delivery. In this scenario, activity metrics are rising but cycle time and quality signals are worsening, so the team needs empiricism: define “better” in measurable terms, capture a baseline, and inspect changes over time.
A practical next step is to define a small set of outcome measures (such as cycle time, escaped defects, and stakeholder satisfaction), capture a baseline, and review the changes at each Sprint Retrospective.
This keeps accountability with the Scrum Team and avoids optimizing for AI usage volume instead of outcomes.
Define and baseline outcome metrics (e.g., cycle time, escaped defects, stakeholder satisfaction) so AI use can be evaluated empirically over subsequent Sprints.
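The baseline-and-compare idea can be sketched in code. A minimal sketch, assuming the team can export per-Sprint cycle-time and defect figures; the metric names and sample numbers below are hypothetical:

```python
from statistics import median

def outcome_delta(baseline: dict, current: dict) -> dict:
    """Compare outcome metrics against a pre-AI baseline.

    A negative delta is an improvement for 'lower is better'
    metrics such as cycle time (days) and escaped defects.
    """
    return {name: round(current[name] - baseline[name], 2)
            for name in baseline}

# Hypothetical figures: medians over the Sprints before and after AI adoption.
baseline = {
    "cycle_time_days": median([6, 7, 8]),  # 7
    "escaped_defects": median([2, 3, 2]),  # 2
}
current = {
    "cycle_time_days": median([7, 8, 9]),  # 8
    "escaped_defects": median([4, 3, 5]),  # 4
}

print(outcome_delta(baseline, current))
# {'cycle_time_days': 1, 'escaped_defects': 2}  -> both worsened
```

A positive delta here mirrors the scenario: AI activity is up, but the outcome metrics the stakeholders care about have not improved.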
Topic: Effective AI Prompting
A Scrum Master wants to help the Scrum Team learn faster from Sprint Review feedback. The notes include customer names, email addresses, and account IDs. The organization’s guidance says confidential data must not be sent to non-approved AI services.
The team asks you to “just paste the notes into an AI assistant and get the top themes before the Retrospective.” What is the best next step?
Best answer: B
What this tests: Effective AI Prompting
Explanation: Responsible AI use starts by clarifying the goal and checking data sensitivity before sharing anything with an AI system. Here, the notes contain confidential identifiers, so the next step is to sanitize the data or use an approved environment, then prompt with clear constraints. Finally, the team should validate the themes against the original source and their context.
To use AI ethically while improving outcomes, follow a simple workflow: clarify the goal, assess data sensitivity, then craft a constrained prompt and validate outputs with humans accountable for the decision. In this scenario, the immediate risk is disclosing confidential customer identifiers to a non-approved AI service, which is an avoidable shortcut.
A practical next-step sequence is to sanitize or anonymize the notes (or move the work into an approved environment), prompt with clear constraints, and then validate the extracted themes against the original source before the Retrospective.
Speed comes from preparation and validation, not from skipping privacy checks or delegating accountability to AI.
It addresses data sensitivity first, then uses AI with constraints and human validation to support learning without cutting ethical corners.
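The sanitize-first step can be sketched as a small redaction pass. This is illustrative only: the email and account-ID patterns are assumptions, customer names would also need handling, and a real redactor must follow the organization's data-classification guidance:

```python
import re

# Hypothetical identifier formats; a real redactor must cover names and
# any other confidential fields the organization's guidance lists.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
ACCOUNT_ID = re.compile(r"\bACC-\d{4,}\b")  # assumed account-ID format

def sanitize(notes: str) -> str:
    """Replace confidential identifiers with neutral placeholders
    before the text is sent to any AI service."""
    notes = EMAIL.sub("[EMAIL]", notes)
    notes = ACCOUNT_ID.sub("[ACCOUNT]", notes)
    return notes

raw = "Dana (dana@example.com, ACC-10293) asked for faster PDF export."
print(sanitize(raw))
# Dana ([EMAIL], [ACCOUNT]) asked for faster PDF export.
```

Only the sanitized text would then be pasted into the AI assistant, and the resulting themes checked back against the original notes.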
Topic: Effective AI Prompting
A Scrum Master uses generative AI to summarize Sprint Planning notes into a message for the Scrum Team. The original notes contain several qualifiers such as “only if Security approves” and “assumes vendor API is available,” but the AI summary repeatedly presents the decisions as unconditional, causing confusion and rework.
Which is the most likely underlying cause?
Best answer: B
What this tests: Effective AI Prompting
Explanation: The core issue is that the prompt did not constrain the summary to retain conditionality. To keep caveats, the request should explicitly require a dedicated section for assumptions/constraints and prohibit turning conditional statements into definitive ones. Clear output structure and retention rules improve summary fidelity for nuanced decisions.
Summaries often “smooth out” uncertainty unless you explicitly tell the model to preserve it. In this scenario, the notes already contain qualifiers, but the prompt likely asked for a short summary without requiring conditional language, assumptions, dependencies, or risks to be retained.
A practical technique is to add prompt constraints such as a dedicated "Assumptions and dependencies" section, a rule that every qualifier must be retained, and an explicit prohibition on restating conditional decisions as definitive ones.
This targets the failure mode directly and reduces confusion while keeping humans accountable for final decisions.
Without explicit instructions and structure to retain qualifiers, the model tends to compress nuance into definitive statements.
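Those retention rules can live in a reusable prompt template. A minimal sketch; the exact wording of the constraints is an illustrative choice, not a canonical prompt:

```python
def summary_prompt(notes: str) -> str:
    """Build a summarization prompt that forbids dropping conditionality.
    The rule wording here is illustrative, not canonical."""
    return (
        "Summarize the Sprint Planning notes below.\n"
        "Rules:\n"
        "1. Keep every qualifier (e.g. 'only if', 'assuming', 'pending').\n"
        "2. Never restate a conditional decision as a definitive one.\n"
        "3. End with a section titled 'Assumptions and dependencies' "
        "listing each condition verbatim.\n\n"
        f"Notes:\n{notes}"
    )

p = summary_prompt("Ship the export feature, only if Security approves.")
print(p)
```

Because the rules travel with every request, the team does not depend on remembering to add them ad hoc each time.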
Topic: Effective AI Prompting
A Scrum Master drafts a prompt for an AI assistant: “You are a strict compliance auditor. Review our Sprint artifacts and tell us what Scrum events and controls we must add.”
The AI output recommends adding a weekly status meeting and requiring Product Owner sign-off on each task.
What is the best evidence/validation step to decide whether to trust this output, given that role/persona prompting can introduce bias?
Best answer: B
What this tests: Effective AI Prompting
Explanation: Persona prompting can steer an AI toward a particular worldview (e.g., adding controls), so you need validation that separates bias from facts. Re-running the prompt without the persona and then checking any claims against the Scrum Guide provides independent evidence and an authoritative reference before acting.
Role/persona prompting is helpful for tone, structure, and perspective, but it can also bias content by overemphasizing the persona’s goals (like compliance and control). In this scenario, the recommendations sound plausible yet may conflict with Scrum’s minimal events and the Scrum Team’s autonomy.
A strong validation approach is to remove the persona and compare outputs to identify persona-driven additions, then validate any remaining claims by cross-checking with the Scrum Guide (and, where applicable, the team’s working agreements). This uses independent comparison plus an authoritative source, rather than relying on the AI to justify itself.
The key is to treat persona-shaped outputs as hypotheses until verified.
Comparing a neutral prompt to the persona-based output and cross-checking against an authoritative source helps detect persona-driven bias and validate accuracy.
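The neutral-versus-persona comparison can be made mechanical. A minimal sketch, assuming the two prompt runs each yield a list of recommendation strings (the example recommendations are taken from the scenario):

```python
def persona_additions(neutral: list[str], persona: list[str]) -> list[str]:
    """Flag recommendations that appear only under the persona prompt;
    treat these as hypotheses to verify against the Scrum Guide."""
    neutral_set = {r.lower() for r in neutral}
    return [r for r in persona if r.lower() not in neutral_set]

neutral_out = ["Clarify the Definition of Done"]
persona_out = ["Clarify the Definition of Done",
               "Add a weekly status meeting",
               "Require PO sign-off on each task"]

print(persona_additions(neutral_out, persona_out))
# ['Add a weekly status meeting', 'Require PO sign-off on each task']
```

Anything on the flagged list gets checked against the Scrum Guide before the team acts on it; in this case both additions conflict with Scrum's minimal event set.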
Topic: Effective AI Prompting
You are a Scrum Master and have 10 minutes before Sprint Planning. The Product Owner asks for a quick plan to refine three unclear Product Backlog Items with the Developers later today, but you want to avoid introducing extra process or new meetings.
Which AI prompt is the best fit for this situation?
Best answer: C
What this tests: Effective AI Prompting
Explanation: With time pressure, the best prompt asks for a concise, timeboxed step-by-step plan that is directly usable today. Adding decision points keeps it practical without inventing new process. The goal is “just enough structure” to reduce uncertainty while preserving the Scrum Team’s autonomy and focus.
When asking an AI for a step-by-step plan, constrain it so the output is actionable but lightweight. In this scenario, the key discriminator is time pressure, so the prompt should force brevity and focus while still providing a clear sequence the Scrum Team can adapt.
Useful constraints to include are a hard timebox, a maximum number of steps, plain language, and an explicit decision checkpoint at each step.
Prompts that ask for SOPs, exhaustive checklists, or comprehensive catalogs of techniques tend to create unnecessary process and increase reading/coordination overhead, which is the opposite of what you need right before Sprint Planning.
It requests a minimal, timeboxed sequence with only the essential steps and checkpoints, avoiding unnecessary ceremony under time pressure.
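Those constraints can be baked into a reusable prompt builder so brevity is forced rather than hoped for. A minimal sketch; the constraint values and phrasing are illustrative:

```python
def quick_plan_prompt(goal: str, minutes: int, max_steps: int) -> str:
    """Assemble a prompt that forces a brief, timeboxed plan.
    Constraint values are illustrative team choices."""
    return (
        f"Goal: {goal}\n"
        f"Constraints: fit within {minutes} minutes, at most {max_steps} "
        "steps, plain language, one decision checkpoint per step, "
        "no new meetings or process.\n"
        "Output: a numbered list only."
    )

print(quick_plan_prompt(
    "Refine three unclear PBIs with the Developers this afternoon", 30, 5))
```

Parameterizing the timebox and step count keeps the same prompt usable later without drifting toward the SOP-style outputs the explanation warns against.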
Topic: Effective AI Prompting
A Scrum Team has started using generative AI to speed up Product Backlog refinement and Sprint Planning. Within two Sprints, each Developer is using different AI tools and ad-hoc prompts, creating inconsistent user story formats and duplicate work. A security guideline states that confidential customer data must not be entered into unapproved AI systems, and stakeholders want predictable, reviewable outputs. The team has 60 minutes available before the next refinement session and you want to preserve team autonomy.
What is the BEST next action for the Scrum Master?
Best answer: A
What this tests: Effective AI Prompting
Explanation: The core problem is tool sprawl causing inconsistent, risky, and non-repeatable results. A timeboxed, team-owned agreement on a small set of approved AI workflows (including what data can be used, prompt patterns, and how outputs are validated) standardizes the work without removing the team’s ability to adapt. This also supports transparency and predictable stakeholder outcomes.
Avoiding tool sprawl is mainly a workflow and governance problem, not a “pick the fanciest tool” problem. In this situation, the Scrum Master should help the team quickly converge on a small set of approved, repeatable AI workflows that fit the security constraint and produce consistent, reviewable outputs.
In the 60-minute session, aim to agree on a short list of approved tools, the kinds of data that may be shared with them, a few shared prompt patterns, and how outputs will be validated before they enter the Product Backlog.
This preserves team autonomy (the team chooses and evolves the workflows) while reducing duplication, risk, and inconsistency.
This reduces tool sprawl by standardizing a few approved, secure workflows while keeping the team accountable for validation and free to improve the playbook.
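Capturing the agreement as data makes it reviewable and easy to evolve, like any other working agreement. A minimal sketch; every tool name and category below is a hypothetical placeholder:

```python
# Hypothetical team "AI playbook"; all entries are placeholders the
# team would replace with its own agreed tools and rules.
PLAYBOOK = {
    "approved_tools": ["assistant-a"],
    "allowed_data": ["anonymized notes", "public docs"],
    "prompt_patterns": ["user-story draft", "test-idea list"],
    "validation": "peer review before anything enters the Product Backlog",
}

def workflow_allowed(tool: str, data_kind: str) -> bool:
    """Check a proposed AI use against the team's agreement."""
    return (tool in PLAYBOOK["approved_tools"]
            and data_kind in PLAYBOOK["allowed_data"])

print(workflow_allowed("assistant-a", "anonymized notes"))  # True
print(workflow_allowed("random-tool", "customer emails"))   # False
```

Because the team owns the playbook, it can add or retire tools in future Retrospectives without the Scrum Master imposing a standard.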
Topic: Effective AI Prompting
A Scrum Master asks a generative AI model to draft Sprint Review notes but provides no attendee list, decisions made, or action items. The model produces confident, detailed notes that include specific decisions and owners that were never provided.
Which term best describes this behavior and signals you should supply or gather the missing facts?
Best answer: B
What this tests: Effective AI Prompting
Explanation: This is hallucination: the model fills in missing information with confident but unverified details. When you notice an information gap (who attended, what was decided, who owns actions), you should provide the facts or have the model ask clarifying questions before generating an artifact.
Hallucination is when a model generates content that appears coherent and confident but is not grounded in provided or verifiable information. In Scrum contexts, this often shows up when you request outputs like meeting notes, summaries, or decisions without supplying key facts (attendees, outcomes, dates, owners, metrics). A responsible response is to recognize the missing inputs and either provide them or prompt the model to ask targeted clarifying questions and to label assumptions explicitly. This keeps human accountability and transparency intact and reduces the risk of publishing false information as if it were agreed by the Scrum Team and stakeholders.
It is the model fabricating plausible-sounding details when it lacks necessary grounding facts.
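The "supply the facts or ask first" discipline can be enforced with a simple pre-generation gate. A minimal sketch, assuming the required facts for Sprint Review notes are attendees, decisions, and action items:

```python
# Grounding facts assumed necessary before drafting Sprint Review notes.
REQUIRED_FACTS = ("attendees", "decisions", "action_items")

def missing_facts(facts: dict) -> list[str]:
    """Return the grounding facts still missing; generate the artifact
    only when this list is empty, otherwise ask clarifying questions."""
    return [k for k in REQUIRED_FACTS if not facts.get(k)]

draft_input = {"attendees": ["PO", "SM", "Developers"], "decisions": []}
print(missing_facts(draft_input))
# ['decisions', 'action_items']
```

If the list is non-empty, the prompt should request clarifying questions (or label assumptions explicitly) instead of asking for finished notes, which removes the gap the model would otherwise fill with fabricated details.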
Topic: Effective AI Prompting
A Product Owner uses a generative AI tool to draft acceptance criteria for several Product Backlog items and pastes them into the Product Backlog. In Sprint Planning, Developers push back that many criteria are vague or not testable, and the Product Owner says, “But the AI already wrote them clearly.” The discussion becomes tense and the team loses time rewriting criteria.
Exhibit: Prompt used
Generate acceptance criteria for PBI: "As a user, export my report to PDF".
Use Given/When/Then and keep it short.
What is the most likely underlying cause?
Best answer: D
What this tests: Effective AI Prompting
Explanation: The core problem is that AI-drafted acceptance criteria were treated as “done” without being collaboratively validated by the Product Owner and Developers. Acceptance criteria need shared understanding of intent and testability before they are relied on in Sprint Planning. A lightweight review with examples and checks against the Product Backlog item would prevent friction and rework.
AI can accelerate drafting acceptance criteria, but accountability for correctness and shared understanding remains with the Scrum Team. In this scenario, the criteria were copied into the Product Backlog and effectively treated as ready, yet Developers later found them vague and untestable—signaling the team skipped (or minimized) a joint validation step.
A practical validation approach with the Product Owner and Developers is to walk through each criterion together, confirm it reflects the intent of the Product Backlog item, check that it is testable with a concrete example, and revise anything vague before relying on it in Sprint Planning.
The key is collaborative review and refinement, not accepting AI text as authoritative.
The criteria were adopted without a joint check for intent, clarity, and testability with both PO and Developers.
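A lightweight pre-review check can surface the most obvious problems before the joint conversation. A heuristic sketch only; the vague-term list is an assumption, and nothing here replaces the PO/Developer review itself:

```python
import re

# Illustrative vague terms; a team would maintain its own list.
VAGUE = ("fast", "user-friendly", "appropriate", "as needed")

def criterion_issues(text: str) -> list[str]:
    """Heuristic pre-review check for AI-drafted acceptance criteria;
    it only surfaces obvious problems before the joint review."""
    issues = []
    if not re.search(r"\bGiven\b.*\bWhen\b.*\bThen\b", text, re.S):
        issues.append("missing Given/When/Then structure")
    for word in VAGUE:
        if re.search(rf"\b{word}\b", text, re.I):
            issues.append(f"vague term: {word}")
    return issues

print(criterion_issues("The export should be fast and user-friendly."))
print(criterion_issues(
    "Given a saved report, When the user clicks Export, "
    "Then a PDF downloads."))  # []
```

Criteria that pass the heuristic still go through the joint check for intent; criteria that fail it are rewritten before Sprint Planning rather than during it.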
Topic: Effective AI Prompting
A Scrum Master will use a generative AI tool to draft a short message announcing the latest Increment to external customers. The message will be reviewed by the Product Owner before sending, but it must avoid internal jargon and must not imply any future delivery commitments.
Which prompt provides the best context to reduce ambiguity for this high-visibility audience?
Best answer: B
What this tests: Effective AI Prompting
Explanation: The most effective way to reduce ambiguity is to give the AI explicit context about who the message is for, what “good” looks like, and what must be avoided. For an external, high-visibility announcement, audience and constraints (no Scrum jargon, no future commitments) are decisive. Adding a brief definition for potentially unclear terms further steers the output toward the intended meaning.
Prompt ambiguity drops when you provide just enough context for the AI to make the same trade-offs a human would. In an external customer announcement, the AI needs to know the target audience and communication constraints so it doesn’t default to internal jargon or accidentally imply commitments.
Useful context to include here is the target audience (external customers), the desired tone and length, brief definitions for any terms the model might misread, and explicit prohibitions on Scrum jargon and forward-looking commitments.
This focuses the AI on the desired communication outcome while leaving the Product Owner accountable for final approval.
It specifies audience, tone, length, required definitions, and clear content constraints to prevent jargon and accidental commitments.
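That context can be paired with a post-generation check the Product Owner runs before approving. A minimal sketch; the audience, tone, length, and banned-term list are illustrative assumptions:

```python
# Illustrative banned terms: Scrum jargon plus commitment-implying phrases.
BANNED = ("Sprint", "Increment", "backlog", "velocity",
          "coming soon", "next release", "roadmap")

def announcement_prompt(feature: str) -> str:
    """Context-rich prompt for an external customer announcement.
    Audience, tone, and constraints are illustrative choices."""
    return (
        "Audience: external customers with no Scrum knowledge.\n"
        "Tone: plain and friendly; length: under 120 words.\n"
        f"Task: announce that '{feature}' is now available.\n"
        "Do not use internal or Scrum jargon and do not imply any "
        "future delivery commitments."
    )

def flags(draft: str) -> list[str]:
    """Post-generation check run before the Product Owner approves."""
    return [t for t in BANNED if t.lower() in draft.lower()]

print(flags("In this Sprint we shipped PDF export; more coming soon!"))
# ['Sprint', 'coming soon']
```

The prompt supplies the context up front; the flag check gives the Product Owner a concrete review step, keeping final accountability with a human.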
Topic: Effective AI Prompting
A Scrum Team starts using generative AI to draft test cases from Product Backlog Items during Sprint Planning and refinement. After three Sprints, the Scrum Master is asked to show whether this AI use is improving results and that the AI-generated test cases are reliable enough to use.
Which evidence/validation step is best?
Best answer: B
What this tests: Effective AI Prompting
Explanation: Use outcome-based measures with a baseline (for example cycle time and escaped defects) and validate a representative sample of AI-generated test cases through peer review or execution. This provides evidence that results improved and that the AI output is trustworthy in practice, rather than measuring how much the team used AI.
To show AI is improving outcomes (not activity), you need evidence tied to delivery results and a validation mechanism for the AI output. In this scenario, the AI produces test cases, so reliability is best validated by human review and/or running those tests, while improvement is measured by changes in delivery and quality metrics versus a pre-AI baseline.
A strong approach is to baseline cycle time and escaped defects from the Sprints before AI use, peer-review or execute a representative sample of the AI-generated test cases, and compare the outcome metrics across subsequent Sprints.
This connects AI use to observable improvements and keeps accountability with the Scrum Team rather than the model.
It validates the output with human review while measuring outcome changes against a baseline instead of counting AI activity.
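Drawing the representative sample can be made reproducible so the same cases can be re-reviewed later. A minimal sketch; the sample fraction and fixed seed are illustrative team choices:

```python
import random

def review_sample(test_cases: list[str], fraction: float = 0.3,
                  seed: int = 7) -> list[str]:
    """Draw a reproducible sample of AI-generated test cases for
    peer review or execution; sample size is a team choice."""
    k = max(1, round(len(test_cases) * fraction))
    return random.Random(seed).sample(test_cases, k)

cases = [f"TC-{i}" for i in range(1, 11)]
print(review_sample(cases))  # 3 of the 10 cases, same ones every run
```

The reviewed sample answers "is the output reliable?"; the baseline comparison from the explanation answers "are outcomes improving?"; together they replace activity counts with evidence.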
Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.