PSM-AI: AI Theory and Primer

Try 10 focused PSM-AI questions on AI Theory and Primer, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Exam route: PSM-AI
Topic area: AI Theory and Primer
Blueprint weight: 25%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate AI Theory and Primer for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: AI Theory and Primer

When an AI assistant produces an uncertain or low-confidence response, which term best describes the responsible fallback of involving a person to ask clarifying questions, consider multiple options, and make the final decision (including deferring to a subject matter expert when needed)?

  • A. Hallucination
  • B. Human-in-the-loop
  • C. Retrieval-augmented generation (RAG)
  • D. Automation bias

Best answer: B

What this tests: AI Theory and Primer

Explanation: Human-in-the-loop means a person intentionally reviews, validates, and decides rather than blindly accepting AI output. It is the safe fallback when AI is uncertain because it encourages clarifying questions, weighing alternatives, and escalating to expertise when appropriate.

Human-in-the-loop is a responsible-use pattern where AI assists but a human remains accountable for critical judgments and for verifying outputs before they are used. In Scrum contexts, this is a practical fallback when AI responses are unclear, inconsistent, or lack sufficient context: the Scrum Master or another team member asks clarifying questions, requests alternative options, and—when the topic requires domain authority—defers to the appropriate subject matter expert. This reduces the risk of acting on unverified or misleading AI output and preserves transparency and empiricism by treating AI results as suggestions to validate, not decisions to follow.

It keeps humans accountable for validating and deciding when AI output is uncertain.


Question 2

Topic: AI Theory and Primer

A Scrum Master wants to use a generative AI assistant and types: “Summarize our last Sprint and list the top risks for the next one.” They do not provide the Sprint Goal, what “risk” means in this context, which artifacts or notes the AI may use, or who the summary is for.

Which principle/practice best addresses what is missing in this request?

  • A. Add purpose, audience, inputs, and constraints to the prompt
  • B. Disclose to stakeholders that AI helped create the summary
  • C. Remove sensitive data before sharing anything with the AI
  • D. Have the Scrum Team verify and correct the AI output

Best answer: A

What this tests: AI Theory and Primer

Explanation: The core issue is underspecification: the AI lacks the context needed to produce a useful, accurate Sprint summary and risk list. The best practice is to clarify intent and provide constraints such as audience, definitions, and permitted sources so the model can map the request to the right information and format. This improves relevance and reduces hallucination risk before any later review.

Generative AI is highly sensitive to the information and constraints you provide. When a request lacks key context (what “good” looks like, what inputs are allowed, and how the output will be used), the model must guess, which often produces generic or incorrect results. In this scenario, the most effective first move is to enrich the prompt with:

  • The purpose and target audience
  • Definitions (e.g., what “risk” means)
  • Approved sources (Sprint Goal, Sprint Backlog, metrics, Retro notes) and excluded sources
  • Output constraints (format, length, level of detail)

Privacy, transparency, and validation are still important practices, but they do not fix the immediate limitation: the request doesn’t give the AI enough to work with.

The request is underspecified, so providing goal, audience, authorized sources, and output constraints enables more reliable, relevant results.
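As an illustration of the enrichment described above, the missing context can be assembled into an explicit prompt programmatically. This is a minimal sketch; the field names and example values are hypothetical, not from any particular tool:

```python
# Illustrative sketch: assembling an enriched prompt from purpose, audience,
# definitions, approved sources, and output constraints. All example values
# are hypothetical.

def build_sprint_summary_prompt(purpose, audience, definitions, sources, constraints):
    """Combine the context a generative AI needs into one explicit prompt."""
    lines = [
        f"Purpose: {purpose}",
        f"Audience: {audience}",
        "Definitions:",
        *[f"- {term}: {meaning}" for term, meaning in definitions.items()],
        "Use ONLY these sources:",
        *[f"- {source}" for source in sources],
        f"Output constraints: {constraints}",
        "Task: Summarize our last Sprint and list the top risks for the next one.",
    ]
    return "\n".join(lines)

prompt = build_sprint_summary_prompt(
    purpose="Inform stakeholders of Sprint outcomes",
    audience="Non-technical stakeholders",
    definitions={"risk": "anything that could prevent the next Sprint Goal"},
    sources=["Sprint Goal", "Sprint Backlog", "flow metrics", "Retrospective notes"],
    constraints="Bulleted list, max 200 words",
)
print(prompt)
```

The same underspecified request from the scenario becomes verifiable: anyone reviewing the prompt can see exactly which sources and definitions the AI was given.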


Question 3

Topic: AI Theory and Primer

Midway through a Sprint, Developers start using generative AI to produce most unit tests. Code reviews show some Developers cannot explain the assertions or edge cases, and a recent defect escaped because no one understood an AI-suggested test’s intent.

As Scrum Master, what is the best next step to address the problem while preserving team learning?

  • A. Prohibit AI for tests until everyone can write them unaided
  • B. Have the Product Owner approve prompts and accept passing tests
  • C. Adopt an AI agent to generate and merge tests automatically
  • D. Facilitate a workflow: humans outline intent, AI drafts, then teach-back

Best answer: D

What this tests: AI Theory and Primer

Explanation: The goal is to use AI in a way that accelerates delivery without hollowing out skills. A good next step is to make learning and accountability explicit: Developers define test intent and edge cases, use AI only as a drafting assistant, and then validate and explain the result. This preserves skill growth while still benefiting from AI speed.

AI use can reduce learning when it replaces the thinking work (e.g., choosing assertions and edge cases) and when outputs are accepted without understanding. In Scrum, Developers remain accountable for the quality of the Increment, so AI should be used as an aid, not a substitute for competence.

A practical next step is to agree on a lightweight team workflow, such as:

  • Clarify the learning goal (understand intent and risks before drafting).
  • Have Developers write the test intent/edge cases first.
  • Use AI to draft candidate tests under clear constraints.
  • Validate via review, execution, and a brief teach-back.

This keeps empiricism intact (inspect the actual results) while building capability rather than outsourcing it to the tool.

It keeps accountability with Developers by requiring intent, validating AI output, and making knowledge explicit through review and teach-back.


Question 4

Topic: AI Theory and Primer

A Scrum Master wants to use an AI assistant to turn raw Sprint Review notes into an update. The notes are already classified as internal and approved for AI use. The request to the AI is simply: “Summarize these notes for sharing.” You already know the update must be readable in about 2 minutes and should cover outcomes, key decisions, risks, and next steps.

What is the best clarifying question to ask FIRST to reduce ambiguity in the AI’s output?

  • A. Who is the audience, and should it be bullets, table, or a short paragraph?
  • B. Can the summary include direct quotes from participants?
  • C. What sampling settings should the AI use for creativity?
  • D. Should the summary prioritize velocity trends from the last 3 Sprints?

Best answer: A

What this tests: AI Theory and Primer

Explanation: When the content is approved and the topics are known, the main remaining ambiguity is how the information should be presented. Asking about the intended audience and specifying an output format (bullets/table/short paragraph) provides concrete constraints that strongly shape usefulness and readability. This is a high-leverage clarification before refining wording or detail level.

Selecting an explicit output format is a practical prompting technique to reduce ambiguity because it constrains structure, length, and scannability. In this scenario, data classification and permission to use AI are already handled, and the required content areas are known (outcomes, decisions, risks, next steps). What remains underspecified is how the update should be packaged for the recipients.

A good first clarification is to confirm:

  • The intended audience (e.g., stakeholders vs. Scrum Team)
  • The preferred format (bullet list, table, or short paragraph) and any length/section constraints

Once format and audience are set, you can tune tone and level of detail without rework. The key takeaway is to lock down structure early to make AI output predictable and actionable.

Confirming audience and output format sets clear structure expectations and prevents mismatched, hard-to-use summaries.


Question 5

Topic: AI Theory and Primer

A Product Owner asks you to “use AI to speed up our release decisions.” You see the following prompt drafted for the team’s AI assistant.

Exhibit (prompt snippet):

Role: Release decision-maker
Task: Read the Sprint test summary and the Definition of Done.
Question: Is the Increment releasable? Answer ONLY: YES or NO.
If NO, state who is responsible.

What is the best next action supported by the exhibit?

  • A. Keep the prompt but add “be accurate”
  • B. Rewrite it to request risks, gaps, and options
  • C. Run the prompt and follow the YES/NO answer
  • D. Ask AI to approve the release on behalf of the PO

Best answer: B

What this tests: AI Theory and Primer

Explanation: The exhibit frames AI as the final decision-maker and asks it to assign responsibility, which replaces professional judgment. A safer framing uses AI to summarize evidence, surface uncertainties, and suggest options or follow-up checks. The release/quality decision and accountability remain with the humans (Scrum Team/PO), informed by validated facts.

The prompt is asking AI to make a binary go/no-go decision and to attribute responsibility. That substitutes AI output for professional judgment and can create false certainty, unfair attribution, and weak transparency.

A safer approach is to reframe the request so AI supports decision-making without owning it, for example:

  • Summarize the provided evidence against the Definition of Done.
  • List assumptions, unknowns, and missing verification.
  • Identify risks and suggest options (e.g., additional tests, mitigations, rollback plan).
  • Provide questions for the Developers/PO to answer before deciding.

The key takeaway is to use AI to generate decision inputs, then have accountable people validate and decide.

It keeps humans accountable by using AI to inform judgment (risks/options) rather than making a release decision or assigning blame.


Question 6

Topic: AI Theory and Primer

A Scrum Team’s Sprint Retrospective was dominated by one strong opinion (“the only problem is slow code reviews”). You want to use a generative AI assistant to help you prepare facilitation notes that avoid amplifying groupthink.

Constraints: the output will be shared with the Scrum Team; keep a neutral tone; use anonymized inputs (no names/customer data); produce a structured output; and clearly indicate what requires human validation.

Which prompt is the best to use?

  • A. I will paste the full retro transcript with participant names and customer account details; create a concise action plan and assign each action to the person responsible.
  • B. Using the anonymized retro notes I provide, create facilitation notes for the Scrum Team in a neutral tone. Generate 4–6 alternative explanations for the team’s challenges (including minority/contrarian perspectives) and 3–5 questions to help the team test them. List any assumptions you made and uncertainties. Output as a table with columns: Observation, Alternative hypotheses, Questions to ask, Safe-to-try experiments. Do not name individuals or include confidential details; flag items that require the team to validate.
  • C. Based on typical Scrum Team issues, infer what happened in the Sprint and produce a finalized improvement plan; do not include assumptions or caveats to keep it confident.
  • D. Summarize the retro notes and recommend the single most likely root cause with the top three improvements, written persuasively so the team aligns quickly.

Best answer: B

What this tests: AI Theory and Primer

Explanation: The safest and most useful prompt counteracts groupthink by asking the AI to generate multiple plausible alternatives, including contrarian viewpoints, rather than converging on one answer. It also constrains confidentiality (anonymized inputs) and sets expectations for structured output. Finally, it requires assumptions and uncertainties so the Scrum Team can validate the output before acting on it.

Generative AI can amplify groupthink when the prompt asks for “the” root cause or a single best answer, because the model will often reinforce the most salient or dominant narrative in the input. A better approach is to prompt for divergence: multiple hypotheses, questions that surface different perspectives, and small experiments to test ideas.

A strong prompt in this scenario should:

  • Request alternative interpretations (including minority/contrarian views)
  • Make assumptions/uncertainty explicit and require team validation
  • Protect confidentiality by using anonymized notes
  • Specify a clear, shareable format for facilitation

This keeps accountability with the Scrum Team and supports empiricism by treating outputs as hypotheses to inspect and adapt.

It explicitly drives divergent alternatives and diverse perspectives while enforcing anonymization, structured output, and human validation.


Question 7

Topic: AI Theory and Primer

A Scrum Team has started using generative AI during the Sprint to summarize support tickets and draft Product Backlog item descriptions. Developers are using several different public AI tools, and you notice a shared prompt that includes pasted excerpts from customer support tickets containing customer names and email addresses. There is no organizational AI policy yet, and the Sprint Review is in two days.

What is the best mitigation to apply immediately?

  • A. Introduce a simple AI-use guideline with data rules and an approved tool.
  • B. Stop all AI use until a complete, organization-wide policy is finalized.
  • C. Let everyone keep experimenting; standardize prompts after the Sprint.
  • D. Use AI outputs as the default basis for refinement decisions this Sprint.

Best answer: A

What this tests: AI Theory and Primer

Explanation: The most urgent risk is data leakage from pasting personal customer data into public AI systems. The best immediate response is lightweight governance: clear do/don’t rules for sensitive data, an approved usage path, and expectations for sanitization and transparency. This mitigates harm quickly without halting the team’s ability to deliver and learn.

Early AI adoption often creates multiple risks at once (tool sprawl, inconsistent prompting, and data leakage). In this scenario, the decisive factor is sensitive data being shared (customer names/emails), which can create irreversible exposure if entered into an external system.

An effective immediate mitigation is to establish minimal, practical guardrails the team can follow right away:

  • Define what data is prohibited (e.g., PII, credentials, customer content)
  • Require redaction/sanitization before using any AI
  • Limit usage to an approved tool/path while guidance matures
  • Make AI assistance transparent and keep humans accountable for decisions

This addresses the highest-impact risk first while still enabling empiricism and learning within the Sprint.

It reduces data leakage risk quickly by setting clear boundaries on sensitive data and limiting uncontrolled tool use.
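One way to operationalize the "redact before use" guardrail above is a small sanitization pass run before any text reaches an external AI tool. This is a simplified sketch; the patterns are illustrative examples, not a complete or production-grade PII filter:

```python
import re

# Illustrative sketch of pre-prompt sanitization. The regex below is a
# simplistic example, NOT a complete PII filter.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text, known_names=()):
    """Replace email addresses and known customer names with placeholders
    before the text is shared with any external AI system."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    for name in known_names:
        text = text.replace(name, "[CUSTOMER]")
    return text

ticket = "Alice Smith (alice.smith@example.com) reports login failures."
print(redact(ticket, known_names=["Alice Smith"]))
# Both the name and the email address are replaced by placeholders.
```

A real guideline would pair a filter like this with human review, since regex-based redaction alone misses many forms of sensitive data.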


Question 8

Topic: AI Theory and Primer

A Scrum Master asks an AI assistant to draft a Sprint forecast summary and key risks using the last five Sprints. The AI output assumes the team has full capacity and stable scope, but the upcoming Sprint includes two developers on planned leave and a fixed-date integration constraint. Which practice best keeps the AI output aligned with reality by capturing assumptions and constraints?

  • A. Remove all names and sensitive details before sharing any Sprint data with the AI
  • B. Add the known constraints and context to the prompt and require the AI to list its assumptions and uncertainties for team confirmation
  • C. Accept the draft only after the team compares it to actual capacity and Product Backlog changes
  • D. State in the forecast summary that an AI assistant helped generate the content

Best answer: B

What this tests: AI Theory and Primer

Explanation: To keep AI outputs realistic, the key is to constrain the prompt with the true context and make assumptions explicit. Providing known constraints (like reduced capacity and fixed integration dates) and asking the AI to restate its assumptions creates something the Scrum Team can quickly verify and correct. This reduces “hidden” premises that can mislead planning.

The core practice is assumptions-and-constraints capture: AI often fills gaps with plausible defaults (for example, “full capacity” or “no external deadlines”) unless you explicitly provide constraints and require it to declare assumptions. For Scrum use, this supports empiricism by making the basis for a draft transparent and inspectable before it influences decisions.

A practical way to do this in prompting is:

  • Provide relevant facts and constraints (capacity changes, fixed dates, definitions of “done/ready” if used).
  • Ask the AI to first list assumptions, uncertainties, and needed inputs.
  • Confirm or correct those items with the Scrum Team, then generate the draft.

This is different from simply validating after the fact; the goal is to prevent misalignment by surfacing premises up front.

Making constraints explicit and having the AI surface assumptions creates a reviewable basis to correct misalignment before using the output.
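The assumptions-and-constraints pattern above can be sketched as a two-step prompt: state the known facts up front, then require the AI to surface its assumptions before drafting. The wording and structure here are hypothetical examples:

```python
# Illustrative sketch of "assumptions-and-constraints capture" in a prompt.
# The instruction wording is a hypothetical example.

def forecast_prompt(constraints):
    """Build a prompt that states known constraints as facts and requires the
    AI to list assumptions first, so the Scrum Team can confirm or correct
    them before any draft is produced."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        "Known constraints (treat as facts):\n"
        f"{constraint_lines}\n\n"
        "Step 1: List every assumption, uncertainty, and missing input you "
        "would rely on, then STOP and wait for confirmation.\n"
        "Step 2 (only after confirmation): Draft the Sprint forecast summary "
        "and key risks using the last five Sprints."
    )

print(forecast_prompt([
    "Two Developers on planned leave next Sprint",
    "Fixed-date integration constraint",
]))
```

The explicit "stop for confirmation" step is what distinguishes this from after-the-fact validation: premises are inspected before they shape the draft.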


Question 9

Topic: AI Theory and Primer

A Scrum Team wants to “make the AI write Product Backlog items in our style.” A Developer suggests fine-tuning a model on the last 18 months of PBIs and related comments. As Scrum Master, you want to steer them toward the right approach (prompting vs fine-tuning) responsibly.

What is the most important question to ask first?

  • A. What measurable output criteria will define “works,” so we can test prompting first?
  • B. How many GPUs or compute hours are available for training this Sprint?
  • C. Which generative AI provider has the best fine-tuning features for Scrum artifacts?
  • D. How soon can we start fine-tuning to reduce the time spent writing PBIs?

Best answer: A

What this tests: AI Theory and Primer

Explanation: Prompting is usually the first approach because it is reversible, fast to iterate, and does not change the underlying model. Fine-tuning is a heavier intervention that should be justified by evidence that prompting cannot meet the need. Defining measurable success criteria first lets the team run a small evaluation and decide based on results.

Choosing between prompting and fine-tuning starts with clarity on what “good” looks like. Prompting (including few-shot examples, constraints, and iterative refinement) can often produce consistent PBIs without changing the model, making it the safest and quickest path to validate value. Fine-tuning changes the model’s behavior and typically requires a curated dataset, stronger governance, and more effort; it should be considered only when you can demonstrate that prompt-based approaches cannot reliably meet the required quality, consistency, or format.

Ask for measurable success criteria first so the team can:

  • Define what to evaluate (quality, consistency, error types)
  • Run a small prompting experiment on representative PBIs
  • Decide whether fine-tuning is warranted based on evidence

Tool selection or training logistics come after proving the need.

Clear success criteria enable a quick, low-risk prompting experiment to determine whether fine-tuning is necessary.
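A small prompting experiment of the kind described above can be as simple as scoring AI-drafted PBIs against measurable checks. This sketch uses hypothetical criteria as stand-ins for whatever the team defines as "works":

```python
# Illustrative sketch of a "prompting first" evaluation. The criteria below
# are hypothetical examples of measurable output checks.

def meets_criteria(pbi_text):
    """Score one AI-drafted Product Backlog item against simple criteria."""
    lowered = pbi_text.lower()
    checks = {
        "has_user_story": lowered.startswith("as a"),
        "has_acceptance_criteria": "acceptance criteria" in lowered,
        "within_length": len(pbi_text.split()) <= 120,
    }
    return checks, all(checks.values())

draft = ("As a support agent, I want ticket tags so I can triage faster.\n"
         "Acceptance criteria: tags are searchable and editable.")
checks, passed = meets_criteria(draft)
print(checks, passed)
```

If prompting alone passes criteria like these on a representative sample of PBIs, the evidence for fine-tuning never materializes, and the team avoids the heavier intervention.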


Question 10

Topic: AI Theory and Primer

During a Sprint Retrospective, a Scrum Master uses a generative AI assistant to analyze anonymized notes and produce improvement recommendations.

Exhibit: AI output (excerpt)

1) Reduce WIP by limiting work to 2 items per Developer
2) Skip Daily Scrum when no blockers are reported
3) Move unplanned work to a separate “support backlog”

The Scrum Team wants to act quickly. What is the best next step to apply a human-in-the-loop approach?

  • A. Adopt the recommendations immediately because the input notes were anonymized
  • B. Ask the AI to automatically update the Product Backlog and WIP limits
  • C. Send the recommendations to stakeholders as the team’s final improvement plan
  • D. Treat the output as a draft and validate it with the Scrum Team before deciding actions

Best answer: D

What this tests: AI Theory and Primer

Explanation: Human-in-the-loop means the AI’s output is an input to human judgment, not a decision-maker. The team should review the recommendations, test them against Scrum and the team’s real evidence (metrics, observations, context), and then decide what to try. This preserves accountability and reduces the risk of acting on incorrect or harmful suggestions.

Human-in-the-loop is a responsible AI practice where people stay accountable for decisions and use AI outputs as recommendations or hypotheses. In this scenario, the AI produced suggestions that could conflict with Scrum (for example, changing event usage) or oversimplify a complex system (for example, uniform WIP limits).

A good next step is to validate with the Scrum Team:

  • Review each recommendation and its assumptions
  • Compare with real evidence (flow metrics, Sprint results, team observations)
  • Decide what experiment (if any) to run and update the improvement plan

Using AI can speed up synthesis, but the team must still inspect, adapt, and own the decision and its consequences.

Human-in-the-loop means humans review, verify, and remain accountable for decisions rather than executing AI recommendations automatically.

Continue with full practice

Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026