Try 10 focused PSM-AI questions on AI Theory and Primer, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PSM-AI |
| Topic area | AI Theory and Primer |
| Blueprint weight | 25% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate AI Theory and Primer for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: AI Theory and Primer
When an AI assistant produces an uncertain or low-confidence response, which term best describes the responsible fallback of involving a person to ask clarifying questions, consider multiple options, and make the final decision (including deferring to a subject matter expert when needed)?
Best answer: B
What this tests: AI Theory and Primer
Explanation: Human-in-the-loop means a person intentionally reviews, validates, and decides rather than blindly accepting AI output. It is the safe fallback when AI is uncertain because it encourages clarifying questions, weighing alternatives, and escalating to expertise when appropriate.
Human-in-the-loop is a responsible-use pattern where AI assists but a human remains accountable for critical judgments and for verifying outputs before they are used. In Scrum contexts, this is a practical fallback when AI responses are unclear, inconsistent, or lack sufficient context: the Scrum Master or another team member asks clarifying questions, requests alternative options, and—when the topic requires domain authority—defers to the appropriate subject matter expert. This reduces the risk of acting on unverified or misleading AI output and preserves transparency and empiricism by treating AI results as suggestions to validate, not decisions to follow.
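One illustrative way to build this fallback into a prompt (the wording here is assumed, not an official pattern):
Example prompt (illustrative):
Task: Draft a stakeholder update from these anonymized Sprint notes.
Rule: If context is missing or you are uncertain, do not guess. List your clarifying questions and the assumptions you would otherwise make, so a person can answer them and make the final call.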
It keeps humans accountable for validating and deciding when AI output is uncertain.
Topic: AI Theory and Primer
A Scrum Master wants to use a generative AI assistant and types: “Summarize our last Sprint and list the top risks for the next one.” They do not provide the Sprint Goal, what “risk” means in this context, which artifacts or notes the AI may use, or who the summary is for.
Which principle/practice best addresses what is missing in this request?
Best answer: A
What this tests: AI Theory and Primer
Explanation: The core issue is underspecification: the AI lacks the context needed to produce a useful, accurate Sprint summary and risk list. The best practice is to clarify intent and provide constraints such as audience, definitions, and permitted sources so the model can map the request to the right information and format. This improves relevance and reduces hallucination risk before any later review.
Generative AI is highly sensitive to the information and constraints you provide. When a request lacks key context (what “good” looks like, what inputs are allowed, and how the output will be used), the model must guess, which often produces generic or incorrect results. In this scenario, the most effective first move is to enrich the prompt with the Sprint Goal, a working definition of “risk” for this product, the artifacts or notes the AI is permitted to use, and the audience and purpose of the summary.
Privacy, transparency, and validation are still important practices, but they do not fix the immediate limitation: the request doesn’t give the AI enough to work with.
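To make the difference concrete, here is one illustrative enriched version of the same request (the specifics are invented for demonstration):
Example prompt (illustrative):
Role: Assistant preparing a Sprint summary for the Product Owner and stakeholders.
Context: The Sprint Goal was <goal>; “risk” means anything that threatens the next Sprint Goal.
Sources: Use only the attached Sprint Review notes and Sprint Backlog; do not infer beyond them.
Output: A short summary plus the top risks, each with a one-line rationale.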
The request is underspecified, so providing goal, audience, authorized sources, and output constraints enables more reliable, relevant results.
Topic: AI Theory and Primer
Midway through a Sprint, Developers start using generative AI to produce most unit tests. Code reviews show some Developers cannot explain the assertions or edge cases, and a recent defect escaped because no one understood an AI-suggested test’s intent.
As Scrum Master, what is the best next step to address the problem while preserving team learning?
Best answer: D
What this tests: AI Theory and Primer
Explanation: The goal is to use AI in a way that accelerates delivery without hollowing out skills. A good next step is to make learning and accountability explicit: Developers define test intent and edge cases, use AI only as a drafting assistant, and then validate and explain the result. This preserves skill growth while still benefiting from AI speed.
AI use can reduce learning when it replaces the thinking work (e.g., choosing assertions and edge cases) and when outputs are accepted without understanding. In Scrum, Developers remain accountable for the quality of the Increment, so AI should be used as an aid, not a substitute for competence.
A practical next step is to agree on a lightweight team workflow, such as: Developers state each test’s intent and edge cases before prompting; AI is used only to draft tests from that stated intent; the Developer validates every assertion and must be able to explain it; and code reviews include a brief teach-back of AI-generated tests.
This keeps empiricism intact (inspect the actual results) while building capability rather than outsourcing it to the tool.
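Under such a workflow, a drafting request might look like this (illustrative; the domain details are invented):
Example prompt (illustrative):
Intent: Verify the discount calculation rounds half-up and never produces a negative total.
Edge cases: zero quantity, maximum quantity, 100% discount.
Task: Draft unit tests for exactly these cases. I will review every assertion and must be able to explain each one before it is merged.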
It keeps accountability with Developers by requiring intent, validating AI output, and making knowledge explicit through review and teach-back.
Topic: AI Theory and Primer
A Scrum Master wants to use an AI assistant to turn raw Sprint Review notes into an update. The notes are already classified as internal and approved for AI use. The request to the AI is simply: “Summarize these notes for sharing.” You already know the update must be readable in about 2 minutes and should cover outcomes, key decisions, risks, and next steps.
What is the best clarifying question to ask FIRST to reduce ambiguity in the AI’s output?
Best answer: A
What this tests: AI Theory and Primer
Explanation: When the content is approved and the topics are known, the main remaining ambiguity is how the information should be presented. Asking about the intended audience and specifying an output format (bullets/table/short paragraph) provides concrete constraints that strongly shape usefulness and readability. This is a high-leverage clarification before refining wording or detail level.
Selecting an explicit output format is a practical prompting technique to reduce ambiguity because it constrains structure, length, and scannability. In this scenario, data classification and permission to use AI are already handled, and the required content areas are known (outcomes, decisions, risks, next steps). What remains underspecified is how the update should be packaged for the recipients.
A good first clarification is to confirm who the update is for and what format suits them best (bullets, a table, or a short paragraph) within the roughly 2-minute reading constraint.
Once format and audience are set, you can tune tone and level of detail without rework. The key takeaway is to lock down structure early to make AI output predictable and actionable.
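A clarified version of the request might read as follows (illustrative; the audience is assumed for demonstration):
Example prompt (illustrative):
Audience: Stakeholders who did not attend the Sprint Review.
Format: Four short bulleted sections (Outcomes, Key decisions, Risks, Next steps), readable in about 2 minutes.
Task: Summarize the attached notes into that structure and flag anything you could not place.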
Confirming audience and output format sets clear structure expectations and prevents mismatched, hard-to-use summaries.
Topic: AI Theory and Primer
A Product Owner asks you to “use AI to speed up our release decisions.” You see the following prompt drafted for the team’s AI assistant.
Exhibit (prompt snippet):
Role: Release decision-maker
Task: Read the Sprint test summary and the Definition of Done.
Question: Is the Increment releasable? Answer ONLY: YES or NO.
If NO, state who is responsible.
What is the best next action supported by the exhibit?
Best answer: B
What this tests: AI Theory and Primer
Explanation: The exhibit frames AI as the final decision-maker and asks it to assign responsibility, which replaces professional judgment. A safer framing uses AI to summarize evidence, surface uncertainties, and suggest options or follow-up checks. The release/quality decision and accountability remain with the humans (Scrum Team/PO), informed by validated facts.
The prompt is asking AI to make a binary go/no-go decision and to attribute responsibility. That substitutes AI output for professional judgment and can create false certainty, unfair attribution, and weak transparency.
A safer approach is to reframe the request so AI supports decision-making without owning it, for example by asking it to compare the test summary against the Definition of Done, list evidence for and against readiness, surface open uncertainties, and suggest follow-up checks, while the Scrum Team retains the release decision.
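One possible reframing of the exhibit, sketched for illustration (not the only safe wording):
Reframed prompt (illustrative):
Role: Analyst supporting, not making, a release decision.
Task: Compare the Sprint test summary against the Definition of Done.
Output: Evidence for and against readiness, open uncertainties, and suggested follow-up checks. Do not answer YES or NO, and do not assign responsibility; the Scrum Team will decide.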
The key takeaway is to use AI to generate decision inputs, then have accountable people validate and decide.
It keeps humans accountable by using AI to inform judgment (risks/options) rather than making a release decision or assigning blame.
Topic: AI Theory and Primer
A Scrum Team’s Sprint Retrospective was dominated by one strong opinion (“the only problem is slow code reviews”). You want to use a generative AI assistant to help you prepare facilitation notes that avoid amplifying groupthink.
Constraints: the output will be shared with the Scrum Team; keep a neutral tone; use anonymized inputs (no names/customer data); produce a structured output; and clearly indicate what requires human validation.
Which prompt is the best to use?
Best answer: B
What this tests: AI Theory and Primer
Explanation: The safest and most useful prompt counteracts groupthink by asking the AI to generate multiple plausible alternatives, including contrarian viewpoints, rather than converging on one answer. It also constrains confidentiality (anonymized inputs) and sets expectations for structured output. Finally, it requires assumptions and uncertainties so the Scrum Team can validate the output before acting on it.
Generative AI can amplify groupthink when the prompt asks for “the” root cause or a single best answer, because the model will often reinforce the most salient or dominant narrative in the input. A better approach is to prompt for divergence: multiple hypotheses, questions that surface different perspectives, and small experiments to test ideas.
A strong prompt in this scenario should ask for multiple plausible hypotheses (including at least one contrarian viewpoint), use only anonymized inputs, require a structured output the team can scan, and make the AI state its assumptions and uncertainties so the Scrum Team knows what to validate.
This keeps accountability with the Scrum Team and supports empiricism by treating outputs as hypotheses to inspect and adapt.
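A prompt built on those elements might look like this (illustrative wording, assumed rather than prescribed):
Example prompt (illustrative):
Inputs: Anonymized Retrospective notes only; no names or customer data.
Task: Generate at least four distinct hypotheses about our delivery friction, including at least one contrarian view that treats “slow code reviews” as a symptom rather than the cause.
Output: For each hypothesis, state its assumptions, one question that would surface a different perspective, and a small experiment to test it. Mark all items as pending Scrum Team validation.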
It explicitly drives divergent alternatives and diverse perspectives while enforcing anonymization, structured output, and human validation.
Topic: AI Theory and Primer
A Scrum Team has started using generative AI during the Sprint to summarize support tickets and draft Product Backlog item descriptions. Developers are using several different public AI tools, and you notice a shared prompt that includes pasted excerpts from customer support tickets containing customer names and email addresses. There is no organizational AI policy yet, and the Sprint Review is in two days.
What is the best mitigation to apply immediately?
Best answer: A
What this tests: AI Theory and Primer
Explanation: The most urgent risk is data leakage from pasting personal customer data into public AI systems. The best immediate response is lightweight governance: clear do/don’t rules for sensitive data, an approved usage path, and expectations for sanitization and transparency. This mitigates harm quickly without halting the team’s ability to deliver and learn.
Early AI adoption often creates multiple risks at once (tool sprawl, inconsistent prompting, and data leakage). In this scenario, the decisive factor is sensitive data being shared (customer names/emails), which can create irreversible exposure if entered into an external system.
An effective immediate mitigation is to establish minimal, practical guardrails the team can follow right away: clear do/don’t rules for sensitive data (no customer names, emails, or identifiers in prompts), a single approved tool or usage path instead of ad-hoc public tools, an expectation to sanitize inputs before prompting, and transparency about where AI was used.
This addresses the highest-impact risk first while still enabling empiricism and learning within the Sprint.
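As one illustration of the sanitization expectation, a risky excerpt could be redacted before prompting (the placeholder tokens are an assumed convention, not a standard):
Before: Customer Jane Roe (jane.roe@example.com) reported checkout failures twice this week.
After: Customer [CUSTOMER_1] ([EMAIL_1]) reported checkout failures twice this week.
Rule: Replace all names, emails, and identifiers with placeholders before pasting text into any approved tool.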
It reduces data leakage risk quickly by setting clear boundaries on sensitive data and limiting uncontrolled tool use.
Topic: AI Theory and Primer
A Scrum Master asks an AI assistant to draft a Sprint forecast summary and key risks using the last five Sprints. The AI output assumes the team has full capacity and stable scope, but the upcoming Sprint includes two developers on planned leave and a fixed-date integration constraint. Which practice best keeps the AI output aligned with reality by capturing assumptions and constraints?
Best answer: B
What this tests: AI Theory and Primer
Explanation: To keep AI outputs realistic, the key is to constrain the prompt with the true context and make assumptions explicit. Providing known constraints (like reduced capacity and fixed integration dates) and asking the AI to restate its assumptions creates something the Scrum Team can quickly verify and correct. This reduces “hidden” premises that can mislead planning.
The core practice is assumptions-and-constraints capture: AI often fills gaps with plausible defaults (for example, “full capacity” or “no external deadlines”) unless you explicitly provide constraints and require it to declare assumptions. For Scrum use, this supports empiricism by making the basis for a draft transparent and inspectable before it influences decisions.
A practical way to do this in prompting is to state the known constraints directly (two Developers on planned leave, the fixed-date integration), instruct the AI not to assume full capacity or stable scope, and ask it to list any remaining assumptions it is making so the Scrum Team can confirm or correct them.
This is different from simply validating after the fact; the goal is to prevent misalignment by surfacing premises up front.
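Applied to this scenario, the prompt might look like this (illustrative; the date placeholder is deliberate):
Example prompt (illustrative):
Context: Next Sprint, two Developers are on planned leave and there is a fixed-date integration on <date>.
Task: Draft a Sprint forecast summary and key risks from the last five Sprints’ data.
Rule: Do not assume full capacity or stable scope. Before the summary, list every assumption you are making that is not stated above, so the Scrum Team can confirm or correct it.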
Making constraints explicit and having the AI surface assumptions creates a reviewable basis to correct misalignment before using the output.
Topic: AI Theory and Primer
A Scrum Team wants to “make the AI write Product Backlog items in our style.” A Developer suggests fine-tuning a model on the last 18 months of PBIs and related comments. As Scrum Master, you want to steer them toward the right approach (prompting vs fine-tuning) responsibly.
What is the most important question to ask first?
Best answer: A
What this tests: AI Theory and Primer
Explanation: Prompting is usually the first approach because it is reversible, fast to iterate, and does not change the underlying model. Fine-tuning is a heavier intervention that should be justified by evidence that prompting cannot meet the need. Defining measurable success criteria first lets the team run a small evaluation and decide based on results.
Choosing between prompting and fine-tuning starts with clarity on what “good” looks like. Prompting (including few-shot examples, constraints, and iterative refinement) can often produce consistent PBIs without changing the model, making it the safest and quickest path to validate value. Fine-tuning changes the model’s behavior and typically requires a curated dataset, stronger governance, and more effort; it should be considered only when you can demonstrate that prompt-based approaches cannot reliably meet the required quality, consistency, or format.
Ask for measurable success criteria first so the team can run a small prompting experiment (for example, few-shot examples of well-written PBIs plus format constraints), evaluate the outputs against those criteria, and decide on evidence whether fine-tuning is actually needed.
Tool selection or training logistics come after proving the need.
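A prompt-first experiment could be as simple as the sketch below (illustrative; the criteria and placeholders are invented for demonstration):
Success criteria: Drafted PBIs follow our template, include acceptance criteria, and match the style of the examples provided.
Prompt: “Here are three PBIs written in our style: <examples>. Draft a PBI for <feature> with the same style and structure.”
Evaluation: Score a batch of drafts against the criteria; consider fine-tuning only if prompting consistently falls short.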
Clear success criteria enable a quick, low-risk prompting experiment to determine whether fine-tuning is necessary.
Topic: AI Theory and Primer
During a Sprint Retrospective, a Scrum Master uses a generative AI assistant to analyze anonymized notes and produce improvement recommendations.
Exhibit: AI output (excerpt)
1) Reduce WIP by limiting work to 2 items per Developer
2) Skip Daily Scrum when no blockers are reported
3) Move unplanned work to a separate “support backlog”
The Scrum Team wants to act quickly. What is the best next step to apply a human-in-the-loop approach?
Best answer: D
What this tests: AI Theory and Primer
Explanation: Human-in-the-loop means the AI’s output is an input to human judgment, not a decision-maker. The team should review the recommendations, test them against Scrum and the team’s real evidence (metrics, observations, context), and then decide what to try. This preserves accountability and reduces the risk of acting on incorrect or harmful suggestions.
Human-in-the-loop is a responsible AI practice where people stay accountable for decisions and use AI outputs as recommendations or hypotheses. In this scenario, the AI produced suggestions that could conflict with Scrum (for example, changing event usage) or oversimplify a complex system (for example, uniform WIP limits).
A good next step is to validate with the Scrum Team: check each recommendation against Scrum (for example, the Daily Scrum is not skipped just because no blockers are reported), compare it with the team’s own metrics and observations, and select only what the team agrees to try as a small, owned experiment.
Using AI can speed up synthesis, but the team must still inspect, adapt, and own the decision and its consequences.
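One illustrative way to record that review against the exhibit (the labels are assumed, not prescribed):
1) Reduce WIP: Adapt. Trial a team-level WIP limit for one Sprint; the success signal is shorter cycle time.
2) Skip Daily Scrum: Reject. Conflicts with Scrum; the Daily Scrum is not only for raising blockers.
3) Separate “support backlog”: Investigate. Check the impact on Product Backlog transparency before trying it.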
Human-in-the-loop means humans review, verify, and remain accountable for decisions rather than executing AI recommendations automatically.
Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.