PSM-AI: Effective AI Prompting

Try 10 focused PSM-AI questions on Effective AI Prompting, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: PSM-AI
  • Topic area: Effective AI Prompting
  • Blueprint weight: 25%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Effective AI Prompting for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

Pass | What to do | What to record
First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer.
Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor.
Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Effective AI Prompting

A Scrum Team has started using a generative AI assistant to support refinement and to draft test-case ideas. After two Sprints, the Scrum Master reports success using activity metrics like “prompts per day” and “AI-generated items per Sprint.” However, cycle time has not improved and stakeholders are reporting more escaped defects.

What is the best next step to determine whether AI use is improving outcomes rather than just increasing activity?

  • A. Agree outcome measures, baseline them, and inspect trends each Sprint
  • B. Set a higher target for AI-generated backlog items per Sprint
  • C. Let an AI agent build the Sprint Backlog and remove manual review
  • D. Mandate a standard prompt template before discussing success measures

Best answer: A

What this tests: Effective AI Prompting

Explanation: Counting AI activity (prompts, AI-generated items) does not show whether delivery outcomes improved. The next step is to agree on a small set of outcome-based measures tied to the team’s goals and baseline them. Then the Scrum Team can inspect and adapt based on observed changes across Sprints while keeping humans accountable for results.

To measure whether AI is helping, treat its use as an experiment and evaluate it with outcome metrics that matter to stakeholders and product delivery. In this scenario, activity metrics are rising but cycle time and quality signals are worsening, so the team needs empiricism: define “better” in measurable terms, capture a baseline, and inspect changes over time.

A practical next step is:

  • Select a few outcome measures (e.g., cycle time, escaped defects/defect trends, stakeholder satisfaction)
  • Record a baseline from recent Sprints
  • Inspect results each Sprint and adjust how AI is used (and what gets validated by humans)

This keeps accountability with the Scrum Team and avoids optimizing for AI usage volume instead of outcomes.

Define and baseline outcome metrics (e.g., cycle time, escaped defects, stakeholder satisfaction) so AI use can be evaluated empirically over subsequent Sprints.
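The baseline-then-inspect loop above can be sketched in a few lines. This is a minimal illustration with hypothetical metric values; in practice the numbers would come from the team's delivery tooling, and the choice of metrics belongs to the Scrum Team.

```python
# Hypothetical Sprint metrics; real values would come from the team's tooling.
baseline = {"cycle_time_days": 6.2, "escaped_defects": 3}
current = {"cycle_time_days": 6.5, "escaped_defects": 5}

def outcome_deltas(base, now):
    """Per-metric change versus the baseline; negative means improvement here."""
    return {k: round(now[k] - base[k], 2) for k in base}

deltas = outcome_deltas(baseline, current)
improved = all(v <= 0 for v in deltas.values())
print(deltas, "improved" if improved else "not improved")
```

Even this crude comparison makes the scenario's problem visible: activity metrics are up, but both outcome measures have moved the wrong way since the baseline.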


Question 2

Topic: Effective AI Prompting

A Scrum Master wants to help the Scrum Team learn faster from Sprint Review feedback. The notes include customer names, email addresses, and account IDs. The organization’s guidance says confidential data must not be sent to non-approved AI services.

The team asks you to “just paste the notes into an AI assistant and get the top themes before the Retrospective.” What is the best next step?

  • A. Ask the AI to write the Retrospective summary and send it to stakeholders to save time
  • B. Classify the data, remove/obfuscate identifiers or use an approved environment, then prompt for themes and validate with the team
  • C. Set up an autonomous agent to continuously ingest feedback notes and update a dashboard
  • D. Paste the notes into the AI assistant and ask for a prioritized list of themes

Best answer: B

What this tests: Effective AI Prompting

Explanation: Responsible AI use starts by clarifying the goal and checking data sensitivity before sharing anything with an AI system. Here, the notes contain confidential identifiers, so the next step is to sanitize the data or use an approved environment, then prompt with clear constraints. Finally, the team should validate the themes against the original source and their context.

To use AI ethically while improving outcomes, follow a simple workflow: clarify the goal, assess data sensitivity, then craft a constrained prompt and validate outputs with humans accountable for the decision. In this scenario, the immediate risk is disclosing confidential customer identifiers to a non-approved AI service, which is an avoidable shortcut.

A practical next-step sequence is:

  • Confirm what decision/learning the team needs (e.g., “themes to explore in the Retrospective”).
  • Ensure data is safe to share (redact/anonymize, minimize, or use an approved internal environment).
  • Prompt for summaries/themes with constraints (no fabrication, cite snippets or counts).
  • Validate the themes with the team against the original notes before using them.

Speed comes from preparation and validation, not from skipping privacy checks or delegating accountability to AI.

It addresses data sensitivity first, then uses AI with constraints and human validation to support learning without cutting ethical corners.
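The redaction step in the sequence above can be sketched as below. The patterns are illustrative only (a real email regex and the `ACCT-` account-ID format are assumptions), and this is not a complete PII scrubber; an approved anonymization tool or environment should be preferred where available.

```python
import re

# Illustrative redaction patterns; NOT a complete PII scrubber.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\bACCT-\d+\b"), "[ACCOUNT_ID]"),  # hypothetical account-ID format
]

def redact(text):
    """Replace each matched identifier with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Jane (jane.doe@example.com, ACCT-4821) asked about export speed."
print(redact(note))  # Jane ([EMAIL], [ACCOUNT_ID]) asked about export speed.
```

The placeholders keep the notes useful for theme extraction while removing the identifiers the organization's guidance protects; names and other identifiers would need further handling before the text is safe to share.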


Question 3

Topic: Effective AI Prompting

A Scrum Master uses generative AI to summarize Sprint Planning notes into a message for the Scrum Team. The original notes contain several qualifiers such as “only if Security approves” and “assumes vendor API is available,” but the AI summary repeatedly presents the decisions as unconditional, causing confusion and rework.

Which is the most likely underlying cause?

  • A. A malicious instruction in the input overrode the summary request
  • B. The prompt did not require preserving assumptions, constraints, and caveats
  • C. The team relied on the summary instead of validating it
  • D. Sensitive information in the notes forced the model to omit details

Best answer: B

What this tests: Effective AI Prompting

Explanation: The core issue is that the prompt did not constrain the summary to retain conditionality. To keep caveats, the request should explicitly require a dedicated section for assumptions/constraints and prohibit turning conditional statements into definitive ones. Clear output structure and retention rules improve summary fidelity for nuanced decisions.

Summaries often “smooth out” uncertainty unless you explicitly tell the model to preserve it. In this scenario, the notes already contain qualifiers, but the prompt likely asked for a short summary without requiring conditional language, assumptions, dependencies, or risks to be retained.

A practical technique is to add prompt constraints such as:

  • Require a separate “Caveats/Constraints/Dependencies” section.
  • Instruct: “Do not convert conditional statements into unconditional commitments.”
  • Ask to quote or reference the exact qualifier wording for each decision.

This targets the failure mode directly and reduces confusion while keeping humans accountable for final decisions.

Without explicit instructions and structure to retain qualifiers, the model tends to compress nuance into definitive statements.
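The constraints listed above can be assembled into a reusable prompt builder. The wording is illustrative, not a canonical template; the point is that the retention rules and output structure are stated explicitly rather than assumed.

```python
# Illustrative prompt builder; the rule wording is an example, not a standard.
def build_summary_prompt(notes: str) -> str:
    return (
        "Summarize the Sprint Planning notes below for the Scrum Team.\n"
        "Rules:\n"
        "- Do not convert conditional statements into unconditional commitments.\n"
        "- End with a 'Caveats/Constraints/Dependencies' section.\n"
        "- Quote the exact qualifier wording for each conditional decision.\n\n"
        f"Notes:\n{notes}"
    )

prompt = build_summary_prompt("Ship feature X only if Security approves.")
print(prompt)
```

Keeping the rules in one function makes the team's summary prompts consistent and easy to inspect and adapt when a new failure mode shows up.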


Question 4

Topic: Effective AI Prompting

A Scrum Master drafts a prompt for an AI assistant: “You are a strict compliance auditor. Review our Sprint artifacts and tell us what Scrum events and controls we must add.”

The AI output recommends adding a weekly status meeting and requiring Product Owner sign-off on each task.

What is the best evidence/validation step to decide whether to trust this output, given that role/persona prompting can introduce bias?

  • A. Adopt the recommendations because an “auditor” persona is expert
  • B. Re-prompt without the persona; verify differences against the Scrum Guide
  • C. Ask the AI to explain its reasoning in more detail
  • D. Share the output and see whether stakeholders agree with it

Best answer: B

What this tests: Effective AI Prompting

Explanation: Persona prompting can steer an AI toward a particular worldview (e.g., adding controls), so you need validation that separates bias from facts. Re-running the prompt without the persona and then checking any claims against the Scrum Guide provides independent evidence and an authoritative reference before acting.

Role/persona prompting is helpful for tone, structure, and perspective, but it can also bias content by overemphasizing the persona’s goals (like compliance and control). In this scenario, the recommendations sound plausible yet may conflict with Scrum’s minimal events and the Scrum Team’s autonomy.

A strong validation approach is to remove the persona and compare outputs to identify persona-driven additions, then validate any remaining claims by cross-checking with the Scrum Guide (and, where applicable, the team’s working agreements). This uses independent comparison plus an authoritative source, rather than relying on the AI to justify itself.

The key is to treat persona-shaped outputs as hypotheses until verified.

Comparing a neutral prompt to the persona-based output and cross-checking against an authoritative source helps detect persona-driven bias and validate accuracy.


Question 5

Topic: Effective AI Prompting

You are a Scrum Master and have 10 minutes before Sprint Planning. The Product Owner asks for a quick plan to refine three unclear Product Backlog Items with the Developers later today, but you want to avoid introducing extra process or new meetings.

Which AI prompt is the best fit for this situation?

  • A. Create a detailed SOP for refinement with templates and roles
  • B. Produce an exhaustive backlog management checklist for a large enterprise
  • C. Draft a timeboxed 5-step refinement plan with decision points
  • D. List every possible refinement technique and when to use each

Best answer: C

What this tests: Effective AI Prompting

Explanation: With time pressure, the best prompt asks for a concise, timeboxed step-by-step plan that is directly usable today. Adding decision points keeps it practical without inventing new process. The goal is “just enough structure” to reduce uncertainty while preserving the Scrum Team’s autonomy and focus.

When asking an AI for a step-by-step plan, constrain it so the output is actionable but lightweight. In this scenario, the key discriminator is time pressure, so the prompt should force brevity and focus while still providing a clear sequence the Scrum Team can adapt.

Useful constraints to include are:

  • A small maximum number of steps (e.g., 5)
  • A timebox for each step
  • Explicit decision points or “stop conditions” (what to do if questions remain)
  • Scope limited to the immediate need (refine these three items today)

Prompts that ask for SOPs, exhaustive checklists, or comprehensive catalogs of techniques tend to create unnecessary process and increase reading/coordination overhead, which is the opposite of what you need right before Sprint Planning.

It requests a minimal, timeboxed sequence with only the essential steps and checkpoints, avoiding unnecessary ceremony under time pressure.
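The constraints above translate naturally into a small prompt builder. The parameter names, defaults, and wording here are assumptions for illustration, not an official template.

```python
# Illustrative prompt builder applying the constraints above (assumed wording).
def build_refinement_prompt(items, max_steps=5, timebox_minutes=45):
    item_list = "\n".join(f"- {item}" for item in items)
    return (
        f"Draft a refinement plan of at most {max_steps} timeboxed steps "
        f"(total {timebox_minutes} minutes) for refining these Product Backlog Items today:\n"
        f"{item_list}\n"
        "Include a decision point per step and a stop condition if open questions remain.\n"
        "Do not propose new meetings or standing processes."
    )

print(build_refinement_prompt(["PBI-101", "PBI-102", "PBI-103"]))
```

Notice how every constraint from the list above (step cap, timebox, decision points, narrow scope) appears as an explicit instruction, which is what keeps the output short and usable under time pressure.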


Question 6

Topic: Effective AI Prompting

A Scrum Team has started using generative AI to speed up Product Backlog refinement and Sprint Planning. Within two Sprints, each Developer is using different AI tools and ad-hoc prompts, creating inconsistent user story formats and duplicate work. A security guideline states that confidential customer data must not be entered into unapproved AI systems, and stakeholders want predictable, reviewable outputs. The team has 60 minutes available before the next refinement session and you want to preserve team autonomy.

What is the BEST next action for the Scrum Master?

  • A. Facilitate a short working session to agree on a small set of approved AI workflows (inputs allowed, prompt templates, validation steps) and capture them in a shared, lightweight playbook.
  • B. Select one AI tool for the whole team and require its use starting next Sprint.
  • C. Ask the Product Owner to enforce a single “correct” prompting style and reject backlog items created with other tools.
  • D. Let everyone keep their preferred tools, but ask them to paste AI outputs into the Product Backlog so stakeholders can review them.

Best answer: A

What this tests: Effective AI Prompting

Explanation: The core problem is tool sprawl causing inconsistent, risky, and non-repeatable results. A timeboxed, team-owned agreement on a small set of approved AI workflows (including what data can be used, prompt patterns, and how outputs are validated) standardizes the work without removing the team’s ability to adapt. This also supports transparency and predictable stakeholder outcomes.

Avoiding tool sprawl is mainly a workflow and governance problem, not a “pick the fanciest tool” problem. In this situation, the Scrum Master should help the team quickly converge on a small set of approved, repeatable AI workflows that fit the security constraint and produce consistent, reviewable outputs.

In the 60-minute session, aim to:

  • Identify 2–3 common AI use cases (e.g., refinement story drafts, acceptance criteria).
  • Define allowed inputs (no confidential customer data) and approved AI access paths.
  • Create shared prompt templates plus required human review/validation steps.
  • Store the workflows in a shared playbook the team can inspect and adapt.

This preserves team autonomy (the team chooses and evolves the workflows) while reducing duplication, risk, and inconsistency.

This reduces tool sprawl by standardizing a few approved, secure workflows while keeping the team accountable for validation and free to improve the playbook.


Question 7

Topic: Effective AI Prompting

A Scrum Master asks a generative AI model to draft Sprint Review notes but provides no attendee list, decisions made, or action items. The model produces confident, detailed notes that include specific decisions and owners that were never provided.

Which term best describes this behavior and signals you should supply or gather the missing facts?

  • A. Retrieval-augmented generation (RAG)
  • B. Hallucination
  • C. Prompt injection
  • D. Automation bias

Best answer: B

What this tests: Effective AI Prompting

Explanation: This is hallucination: the model fills in missing information with confident but unverified details. When you notice an information gap (who attended, what was decided, who owns actions), you should provide the facts or have the model ask clarifying questions before generating an artifact.

Hallucination is when a model generates content that appears coherent and confident but is not grounded in provided or verifiable information. In Scrum contexts, this often shows up when you request outputs like meeting notes, summaries, or decisions without supplying key facts (attendees, outcomes, dates, owners, metrics). A responsible response is to recognize the missing inputs and either provide them or prompt the model to ask targeted clarifying questions and to label assumptions explicitly. This keeps human accountability and transparency intact and reduces the risk of publishing false information as if it were agreed by the Scrum Team and stakeholders.

It is the model fabricating plausible-sounding details when it lacks necessary grounding facts.


Question 8

Topic: Effective AI Prompting

A Product Owner uses a generative AI tool to draft acceptance criteria for several Product Backlog items and pastes them into the Product Backlog. In Sprint Planning, Developers push back that many criteria are vague or not testable, and the Product Owner says, “But the AI already wrote them clearly.” The discussion becomes tense and the team loses time rewriting criteria.

Exhibit: Prompt used

Generate acceptance criteria for PBI: "As a user, export my report to PDF".
Use Given/When/Then and keep it short.

What is the most likely underlying cause?

  • A. Prompt injection altered the acceptance criteria output
  • B. Missing Product Backlog context and examples in the prompt
  • C. Sensitive data was inappropriately shared with the AI system
  • D. Weak validation with Product Owner and Developers before use

Best answer: D

What this tests: Effective AI Prompting

Explanation: The core problem is that AI-drafted acceptance criteria were treated as “done” without being collaboratively validated by the Product Owner and Developers. Acceptance criteria need shared understanding of intent and testability before they are relied on in Sprint Planning. A lightweight review with examples and checks against the Product Backlog item would prevent friction and rework.

AI can accelerate drafting acceptance criteria, but accountability for correctness and shared understanding remains with the Scrum Team. In this scenario, the criteria were copied into the Product Backlog and effectively treated as ready, yet Developers later found them vague and untestable—signaling the team skipped (or minimized) a joint validation step.

A practical validation approach with the Product Owner and Developers is to:

  • Walk through each AI-generated criterion against the Product Backlog item’s intent
  • Add concrete examples/edge cases and clarify ambiguous terms
  • Confirm testability and alignment with the Definition of Done
  • Keep only what the Scrum Team agrees is useful; discard the rest

The key is collaborative review and refinement, not accepting AI text as authoritative.

The criteria were adopted without a joint check for intent, clarity, and testability with both PO and Developers.


Question 9

Topic: Effective AI Prompting

A Scrum Master will use a generative AI tool to draft a short message announcing the latest Increment to external customers. The message will be reviewed by the Product Owner before sending, but it must avoid internal jargon and must not imply any future delivery commitments.

Which prompt provides the best context to reduce ambiguity for this high-visibility audience?

  • A. Summarize what the team built this Sprint and make it sound exciting for everyone.
  • B. Draft a 120–150 word customer announcement. Audience: non-technical customers. Use plain language and a confident, factual tone. Define “Increment” as “the latest released improvement” (do not use Scrum terms). Constraints: include only what is released this Sprint; do not mention future plans, dates, or “roadmap”; avoid internal system names. Output: one paragraph plus 3 bullets of customer benefits.
  • C. Create a customer announcement based on our Sprint Backlog and Definition of Done; keep it accurate.
  • D. Write a detailed explanation of the Increment, including key technical design decisions and the next features planned for the next two Sprints.

Best answer: B

What this tests: Effective AI Prompting

Explanation: The most effective way to reduce ambiguity is to give the AI explicit context about who the message is for, what “good” looks like, and what must be avoided. For an external, high-visibility announcement, audience and constraints (no Scrum jargon, no future commitments) are decisive. Adding a brief definition for potentially unclear terms further steers the output toward the intended meaning.

Prompt ambiguity drops when you provide just enough context for the AI to make the same trade-offs a human would. In an external customer announcement, the AI needs to know the target audience and communication constraints so it doesn’t default to internal jargon or accidentally imply commitments.

Useful context to include here:

  • Audience and tone (non-technical customers, plain language)
  • Output constraints (length, format, what topics are forbidden)
  • Definitions for ambiguous terms (replace Scrum terms with customer wording)
  • Scope boundaries (only what is released this Sprint)

This focuses the AI on the desired communication outcome while leaving the Product Owner accountable for final approval.

It specifies audience, tone, length, required definitions, and clear content constraints to prevent jargon and accidental commitments.


Question 10

Topic: Effective AI Prompting

A Scrum Team starts using generative AI to draft test cases from Product Backlog Items during Sprint Planning and refinement. After three Sprints, the Scrum Master is asked to show whether this AI use is improving results and that the AI-generated test cases are reliable enough to use.

Which evidence/validation step is best?

  • A. Count AI-generated test cases per Sprint
  • B. Compare baseline cycle time/defects and sample-review AI test cases
  • C. Track hours spent prompting and number of prompts saved
  • D. Ask the AI to self-assess the accuracy of its test cases

Best answer: B

What this tests: Effective AI Prompting

Explanation: Use outcome-based measures with a baseline (for example cycle time and escaped defects) and validate a representative sample of AI-generated test cases through peer review or execution. This provides evidence that results improved and that the AI output is trustworthy in practice, rather than measuring how much the team used AI.

To show AI is improving outcomes (not activity), you need evidence tied to delivery results and a validation mechanism for the AI output. In this scenario, the AI produces test cases, so reliability is best validated by human review and/or running those tests, while improvement is measured by changes in delivery and quality metrics versus a pre-AI baseline.

A strong approach is:

  • Establish baseline metrics (cycle time, escaped defects, rework) from earlier Sprints.
  • Randomly sample AI-drafted test cases and have Developers peer review and/or execute them.
  • Compare outcome metrics over multiple Sprints and inspect for unintended trade-offs.

This connects AI use to observable improvements and keeps accountability with the Scrum Team rather than the model.

It validates the output with human review while measuring outcome changes against a baseline instead of counting AI activity.
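The sampling step above can be sketched as follows. The test-case IDs are hypothetical, and a fixed seed is used only so the review sample is reproducible from Sprint to Sprint; the human peer review of the sampled cases is the part that actually establishes reliability.

```python
import random

# Sketch: randomly sample AI-drafted test cases for peer review.
def sample_for_review(test_cases, k, seed=42):
    rng = random.Random(seed)  # fixed seed -> reproducible sample
    return rng.sample(test_cases, min(k, len(test_cases)))

cases = [f"TC-{i:03d}" for i in range(1, 21)]  # hypothetical case IDs
print(sample_for_review(cases, 5))
```

Random sampling avoids the bias of reviewing only the cases that look suspicious, so the review result generalizes to the whole set of AI-drafted tests.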

Continue with full practice

Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026