Practice PMI-ACP with free sample questions, timed mock exams, and detailed explanations in PM Mastery.
PMI-ACP tests whether you can apply agile thinking in real delivery situations, not just recite Scrum vocabulary. If you are searching for PMI-ACP sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on web, then continue on iOS or Android with the same PM Mastery account.
Start a practice session for PMI Agile Certified Practitioner (PMI-ACP) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.
Use on iPhone or Android too: PM Mastery on the App Store or PM Mastery on Google Play. The same PM Mastery account and subscription work across web and mobile.
Free diagnostic: Try the 120-question PMI-ACP full-length practice exam before subscribing.
Official source check: Last checked May 5, 2026 against PMI's public PMI-ACP certification page.
PMI's public page lists 120 questions and 180 minutes for PMI-ACP. Confirm current appointment rules, eligibility, and exam-language availability directly with PMI before booking.
PMI-ACP usually rewards the option that gets faster feedback, protects flow, keeps work visible, and improves value delivery without adding unnecessary process friction.
| Domain | Weight | Target scored items |
|---|---|---|
| Mindset | 28% | 28 |
| Leadership | 25% | 25 |
| Product | 19% | 19 |
| Delivery | 28% | 28 |
Use these filters when two agile answers both sound reasonable. PMI-ACP usually rewards the choice that improves feedback, flow, transparency, empowerment, and value delivery without adding unnecessary process weight.
| Scenario signal | First check | Strong answer usually… | Weak answer usually… |
|---|---|---|---|
| Stakeholders disagree about direction | What feedback, product goal, or value signal is missing? | Makes the work visible, validates assumptions, and uses feedback to reorder or refine. | Escalates immediately or locks scope without learning. |
| The team is blocked or overloaded | What is limiting flow: WIP, dependency, unclear priority, defect load, or interruption? | Removes the impediment, limits WIP, clarifies priority, and protects sustainable pace. | Adds status meetings or asks people to work harder. |
| Quality is slipping | What feedback loop failed: acceptance criteria, testing, review, definition of done, or integration? | Improves the system of work and makes quality visible earlier. | Blames individuals or adds a late inspection gate only. |
| A leader wants more control | What outcome are they trying to protect? | Coaches toward transparency, working agreements, decision rights, and empirical evidence. | Replaces team ownership with command-and-control task assignment. |
| A product idea looks attractive | What evidence proves customer value and priority? | Uses experiments, backlog refinement, slicing, and stakeholder feedback to test value. | Builds the full feature because a senior stakeholder requested it. |
Use this map after each practice run. The fastest improvement usually comes from naming the decision pattern behind a miss, not rereading the same agile terms.
| Domain | What the exam is really testing | What PM Mastery practice should force you to decide | Common wrong-answer trap |
|---|---|---|---|
| Mindset | Whether you can apply agile values under pressure | How to preserve feedback, collaboration, transparency, adaptation, and customer value | Choosing a process-heavy answer because it sounds controlled |
| Leadership | Whether you can serve the team and influence the system | When to coach, remove impediments, protect psychological safety, or enable self-organization | Solving the problem for the team instead of improving team capability |
| Product | Whether backlog and value decisions are evidence-driven | How to slice, prioritize, validate, and refine based on stakeholder and customer learning | Treating all stakeholder requests as equal requirements |
| Delivery | Whether the team can keep work flowing predictably | How to use WIP, boards, reviews, tests, retrospectives, and metrics to improve delivery | Optimizing local utilization while flow and quality degrade |
| Window | What to do | What not to do |
|---|---|---|
| Days 7-5 | Complete a mixed timed set or the full-length diagnostic, then classify misses by mindset, leadership, product, delivery, or timing. | Do not only count the score; write the agile principle or feedback loop you missed. |
| Days 4-3 | Drill the weakest domains, especially leadership and delivery if misses involve team dynamics, flow, quality, or impediments. | Do not reread generic agile vocabulary if the real weakness is choosing the best next action. |
| Days 2-1 | Review recurring traps: command-and-control, late quality gates, stakeholder wish lists, overloaded WIP, and process for its own sake. | Do not start a large new run if fatigue will make scenario reading sloppy. |
| Exam day | Identify who needs feedback, what value is at risk, and what system constraint blocks progress before choosing. | Do not choose the answer that merely adds reporting, meetings, or approvals. |
The goal is not to memorize every agile scenario. The goal is to build transferable decision judgment so a new prompt still leads you toward feedback, flow, transparency, team ownership, and value.
If you can complete several varied timed attempts at 75% or higher, explain why your missed answers violated an agile principle or weakened delivery flow, and consistently avoid command-and-control or process-heavy distractors, it is usually time to sit the exam rather than repeating questions you already recognize.
If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.
These are original PM Mastery practice questions aligned to PMI-ACP agile mindset, leadership, product, delivery, and continuous-improvement decisions. They are not PMI exam items, are not copied from any exam sponsor, and should be used to practice agile decision patterns rather than memorize exact wording. Use them to check your readiness here, then continue in PM Mastery with mixed sets, topic drills, and timed mocks.
Topic: Domain 2: Leadership
A delivery team has been operating in a command-and-control environment where a functional manager assigns tasks and publicly calls out individuals when defects occur. You coached the manager to stop individual blame, and you helped the team establish working agreements and run blameless retrospectives focused on system improvements.
Which evidence best validates that these undermining behaviors are decreasing?
Best answer: B
Explanation: To validate progress in addressing blame and command-and-control, use evidence that reflects team safety and empowerment, not output volume or meeting activity. A short, anonymous pulse on psychological safety and autonomy shows whether people feel safe to speak up, learn from failures, and self-manage, which are the key outcomes of the actions taken.
When reducing command-and-control and blame, the most meaningful validation is whether team members feel safe to raise issues, take ownership, and make local decisions without fear of punishment. Those behaviors enable real agility: transparency, experimentation, and continuous improvement.
An anonymous psychological safety/autonomy pulse (tracked over several iterations) provides direct evidence of the desired cultural shift because it measures the team’s lived experience of safety and empowerment, which are prerequisites for blameless learning and self-management. The key takeaway is to prefer outcome-oriented evidence tied to mindset and behaviors over activity measures or delivery volume.
Topic: Domain 4: Delivery
A product team is split across three time zones and relies on an online Kanban board for coordination. People report surprises during handoffs and can’t tell what is truly in progress.
Exhibit: Board snapshot (today)
Columns: To Do | In Progress (WIP 3) | Review | Done
In Progress cards (5):
- ST-18 "API auth" (owner: ?) last update: 6 days
- ST-22 "UI error states" (owner: Mei) last update: 5 days
- ST-25 "Data mapping" (owner: ?) last update: 4 days
- ST-27 "Email templates" (owner: Sam) last update: 2 days
- ST-29 "Logging" (owner: ?) last update: 7 days
What is the best next action to improve shared visibility for this distributed team?
Best answer: D
Explanation: The exhibit shows the team’s primary visibility tool is stale and ambiguous (missing owners, old updates, and WIP exceeded). For a distributed team, the fastest way to restore shared understanding is to establish and follow lightweight, explicit habits that keep the board reflecting reality. A simple working agreement for ownership and daily updates enables reliable asynchronous handoffs.
Distributed teams depend on an accurate, shared “source of truth” to coordinate asynchronously. The exhibit indicates the board is not functioning as an information radiator: multiple cards lack owners, updates are several days old, and the in-progress column exceeds its WIP limit, so teammates cannot confidently infer current status or impediments.
A lightweight next step is to agree on and practice explicit board-use policies (a working agreement), such as:
- every card shows a named owner
- cards are updated at least daily, or moved/flagged if work has stalled
- the In Progress WIP limit is respected, and blocked items are flagged visibly
This improves shared visibility without adding heavy process or tooling; additional meetings may help later but don’t fix the underlying transparency problem.
Topic: Domain 1: Mindset
A cross-functional team recently had a production incident. In the follow-up meeting, several team members were publicly blamed, and now the team avoids proposing new ideas. They only pick “safe” backlog items and reject experiments, even when outcomes are uncertain.
As the agile practitioner, what should you do to address fear of failure so experimentation and learning can resume? (Select TWO)
Best answers: A, C
Explanation: Fear of failure blocks experimentation because people optimize for self-protection rather than learning. A blameless retrospective with explicit working agreements restores psychological safety by shifting the conversation from who to blame to what to improve. Then, using small, timeboxed safe-to-fail experiments makes learning low-risk and repeatable.
To keep experimentation and learning possible, the team needs psychological safety plus a delivery approach that reduces the perceived personal and product risk of trying something new. A blameless retrospective sets a norm that problems are investigated as system and process opportunities, not personal faults, and working agreements make that norm explicit and enforceable. Safe-to-fail experiments (small scope, timeboxed, with a clear hypothesis and a quick learning review) reduce the cost of being wrong and reframe “failure” as validated learning.
Practical guardrails that help:
- keep experiments small and timeboxed, with an explicit hypothesis and a quick learning review
- investigate problems as system and process opportunities, never as personal faults
- make the working agreements explicit so the blameless norm is easy to reinforce when pressure returns
The goal is to make learning the expected output, not to eliminate all uncertainty.
Topic: Domain 4: Delivery
A product team ships a small increment each iteration. In the last sprint review, stakeholders said the new reporting screen was “not what we expected,” even though the team believes the story is complete. The story had a short description but no written acceptance criteria, and the team’s Definition of Done only lists technical steps (build, code review, deploy).
What is the best next step to reduce ambiguity and anchor feedback for upcoming work?
Best answer: A
Explanation: The fastest way to reduce recurring “not what we expected” feedback is to make expectations explicit before implementation. Acceptance criteria clarify what will be accepted for each item, and a Definition of Done clarifies what “complete” means across the team. Using both during backlog refinement anchors stakeholder feedback to agreed, testable conditions.
When stakeholders react late with “not what we expected,” the gap is usually shared understanding, not delivery speed. Acceptance criteria make each backlog item’s outcomes and boundaries testable, and the Definition of Done sets the team-wide quality/completion baseline. The best next step is to tighten both and immediately apply them in refinement with stakeholders so upcoming stories have clear, reviewable conditions before the team builds.
A practical sequence is:
- write testable acceptance criteria for upcoming stories during backlog refinement, with stakeholders present
- extend the Definition of Done beyond technical steps to include meeting the acceptance criteria
- review each increment against those agreed conditions before calling the story complete
This prevents rework by enabling earlier inspection and feedback against agreed criteria, rather than debating expectations after the fact.
Topic: Domain 4: Delivery
A product team received repeated stakeholder feedback that too many defects are being found during UAT, delaying approvals. In response, the team added automated regression tests and updated their Definition of Done to require tests and peer review before a story can be marked complete.
Which metric or evidence best validates that this process improvement is addressing the stakeholder feedback?
Best answer: B
Explanation: Because the problem came from stakeholder feedback about defects during UAT, the strongest validation is evidence that fewer stakeholder-found (escaped) defects are occurring over time. A trend tied to UAT findings shows whether the process change improved quality in the way stakeholders experience it. This creates a clear feedback loop from stakeholder pain to measurable outcome.
To use stakeholder feedback to drive continuous improvement, validate the change with an outcome measure that reflects the stakeholder’s experience. Here, stakeholders complained about defects discovered in UAT, so the most meaningful evidence is a decreasing trend in defects they find during UAT (often called escaped defects). That metric connects the improvement (stronger DoD and test automation) to the stakeholder-relevant result: smoother UAT and faster approvals.
Prefer evidence that is:
- outcome-oriented rather than activity-based
- tied to the stakeholders’ experience (defects found in UAT, approval delays)
- trended over several iterations rather than read from a single data point
Activity outputs (like test count) can support the change, but they don’t prove the stakeholder problem is improving.
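The "trend, not snapshot" point can be made concrete with a few lines of Python; the per-iteration UAT defect counts below are hypothetical numbers chosen to mirror the scenario, not real data.

```python
# Hypothetical UAT (stakeholder-found) defect counts per iteration,
# before and after the team strengthened its Definition of Done.
uat_defects = {"it-1": 14, "it-2": 12, "it-3": 13,   # before the change
               "it-4": 9,  "it-5": 6,  "it-6": 4}    # after the change

def mean(counts):
    """Average defects per iteration for a list of counts."""
    return sum(counts) / len(counts)

before = [uat_defects[k] for k in ("it-1", "it-2", "it-3")]
after = [uat_defects[k] for k in ("it-4", "it-5", "it-6")]

# A decreasing trend in escaped defects is the stakeholder-relevant signal;
# a rising automated-test count alone would prove nothing about UAT pain.
print(f"before: {mean(before):.1f}/iteration, after: {mean(after):.1f}/iteration")
```

A single good iteration could be noise; comparing the averages over several iterations on each side of the change is what links the improved DoD to the stakeholders’ actual pain.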
Topic: Domain 4: Delivery
Mid-sprint, you notice growing cycle time. During the daily stand-up you check the team’s Kanban board.
Exhibit: Board snapshot
WIP limits: Dev=4 Test=2 Deploy=1
Dev: 4 items (0 blocked)
Test: 3 items (2 blocked: "Test env down")
Deploy: 0 items
Note: blocked in Test for 2 days
What is the best next action to identify and address the emerging risk/impediment early?
Best answer: A
Explanation: The visual signal is clear: Test is over its WIP limit and two items have been blocked for multiple days, which is an impediment and a delivery risk. The best response is to use the team routine (stand-up) to focus on the blockage, swarm on removing it, and keep it visible until it is resolved.
Visual management is meant to surface flow problems early so the team can respond before they become missed-sprint or release risks. Here, the Test column exceeds its WIP limit and has multiple items blocked for two days due to an environment outage; this is both an impediment and a growing risk (queues, increased cycle time, delayed feedback). The best next action is to trigger a focused conversation in the stand-up and immediately swarm to remove the impediment (or escalate to the right owner) while keeping the blocker visible and tracked.
Key moves are:
- use the stand-up to focus the team on the blocked items, not on individual status
- swarm to remove the impediment, or escalate the environment outage to the right owner
- keep the blocker visible and tracked on the board until it is resolved
Changing labels or limits hides the risk instead of identifying and resolving it.
Topic: Domain 4: Delivery
An agile team is building an “automated claim approval” capability. The rules are new and must be validated with the compliance officer and a few claims processors. The team uses 2-week iterations, has a WIP limit of 3 items, and their Definition of Done requires automated tests and audit logging for any releasable increment. The epic is forecast at 6 weeks.
As the agile practitioner, what is the BEST next action to reduce risk and increase learning?
Best answer: C
Explanation: Delivering a small, end-to-end increment is the fastest way to validate uncertain approval rules with compliance and users while keeping the quality bar intact. A thin vertical slice provides usable feedback in days/weeks instead of waiting for the full epic. It also respects the team’s flow policies by focusing on finishing work, not starting more.
The core delivery practice here is incremental delivery: reduce risk and increase learning by shipping or demonstrating small, releasable slices that produce real feedback. With uncertain business rules and compliance constraints, the team should avoid building large batches or component-only work that delays validation.
A good next step is to:
- slice a thin, end-to-end increment of the approval capability that still meets the full Definition of Done (automated tests, audit logging)
- demonstrate it to the compliance officer and claims processors within the iteration
- use their feedback to refine the remaining rules before building more
This approach accelerates feedback without trading away quality or violating WIP/flow agreements.
Topic: Domain 4: Delivery
An agile team reviews flow metrics to improve predictability.
Exhibit: Last 4 weeks (averages)
Lead time (Requested → Done): 18 days
Cycle time (In Progress → Done): 6 days
Time in Requested/Ready before start: 11 days
WIP now: Ready=14, In Progress=5, Test=2
Based on the exhibit, what is the best improvement action to try next?
Best answer: A
Explanation: Cycle time is relatively low at 6 days, but lead time is 18 days because items spend 11 days waiting before work starts. That points to a queueing/pull problem rather than an execution-speed problem. The best next experiment is to reduce waiting by limiting the Ready inventory and replenishing based on capacity.
Lead time includes both waiting and working time, while cycle time focuses on the time from start to finish. Here, cycle time (6 days) is much smaller than lead time (18 days), and the exhibit explicitly shows 11 days spent in Requested/Ready before work starts. That indicates the primary improvement opportunity is reducing queue time and improving flow into “In Progress,” not speeding up development.
A practical experiment is to:
- cap the Ready queue at a small number of items
- replenish it only when the team has capacity to pull new work
- track lead time and time-in-Ready over the next few weeks to confirm queue time drops
The key takeaway is to target the largest component of lead time: pre-start waiting caused by excess inventory.
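The arithmetic behind "target the waiting, not the working" can be sketched directly from the exhibit's numbers; the projected-wait estimate below applies Little's Law (average wait ≈ queue size ÷ throughput) under the simplifying assumption that throughput stays constant when the Ready queue is capped.

```python
# Flow numbers from the exhibit: lead time = waiting + working (plus other queues).
time_in_ready = 11.0   # days queued in Requested/Ready before work starts
cycle_time = 6.0       # days from start (In Progress) to finish
lead_time = 18.0       # days from request to done

# Pre-start waiting dominates lead time, so that is the lever to pull.
wait_share = time_in_ready / lead_time
print(f"{wait_share:.0%} of lead time is pre-start waiting")  # → 61%

# Little's Law: average wait in a queue ≈ items in queue / throughput.
ready_items = 14
throughput_per_day = ready_items / time_in_ready  # implied by the current 11-day wait

# Halving the Ready inventory (capped replenishment) roughly halves the wait,
# assuming the team pulls work at the same rate as before.
projected_wait = (ready_items / 2) / throughput_per_day
print(f"projected wait with Ready capped at 7: {projected_wait:.1f} days")
```

Even this rough model shows why speeding up development (the 6-day cycle time) could never recover most of the 18-day lead time, while shrinking the Ready queue plausibly can.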
Topic: Domain 4: Delivery
A Kanban team runs retrospectives every two weeks, but most action items are forgotten once urgent work arrives. Cycle time variability is increasing, and leadership wants to see measurable improvement within a month. The team cannot add new recurring meetings and wants to keep changes lightweight.
What should the agile practitioner recommend to best build a culture of relentless improvement?
Best answer: C
Explanation: Relentless improvement comes from repeating small, measurable change cycles rather than relying on one-off discussions. By treating improvements as first-class work (visible on a backlog/board), funding them with explicit capacity, and running timeboxed experiments, the team can sustain learning without adding meetings. This also makes progress measurable within the month.
The core concept is establishing a repeatable improvement loop (e.g., PDCA/inspect-and-adapt) that survives day-to-day urgency. When retro action items stay “out of band,” they are the first to be dropped; making them part of the normal flow increases follow-through.
A practical loop that fits the constraints is:
- put the top one or two retrospective actions on the board as first-class, visible work items
- reserve a small, explicit slice of capacity for them each cycle
- run each as a timeboxed experiment with a simple before/after measure (for example, cycle time variability)
This optimizes learning and flow while avoiding heavy upfront redesign or compliance-driven changes that reduce ownership.
Topic: Domain 1: Mindset
A cross-functional team says they want “more open discussions,” but decisions still go unchallenged. You are facilitating the next retrospective.
Exhibit: Retrospective notes (excerpt)
Keep: "Fast delivery when we agree."
Stop: "Debating the lead architect's approach."
Try: "Bring concerns 1:1 after the meeting."
Assumption heard: "Compliance means we must finish all docs before coding."
Outcome concern: "Cycle time is growing; rework is rising."
What is the best next action supported by the exhibit?
Best answer: A
Explanation: The notes show people are avoiding disagreement in meetings and holding an untested compliance assumption that may be driving worse outcomes. The best response is to create a safe, respectful way to challenge assumptions in the open and focus the team on measurable outcomes. A small experiment lets the team learn without making the discussion personal.
Psychological safety enables teams to challenge assumptions and debate ideas without fear of interpersonal consequences. The exhibit signals avoidance (“don’t debate,” “take it 1:1”) and an unvalidated belief about compliance that may be increasing cycle time and rework.
A good next step is to facilitate a respectful, outcome-focused conversation that:
- surfaces the compliance assumption and tests it openly rather than leaving it to 1:1 side channels
- makes it safe to challenge the lead architect’s approach on the merits
- frames a small experiment judged against cycle time and rework, not against individuals
This keeps focus on outcomes while normalizing healthy dissent in the team setting.
Topic: Domain 1: Mindset
A department VP asks you to “increase accountability” by publishing a weekly dashboard showing story points completed per developer. The team is worried the numbers will be used to blame individuals.
What is the FIRST clarifying question you should ask before agreeing on what metrics to radiate?
Best answer: B
Explanation: Start by clarifying the purpose and users of the metric so you can select information that supports learning and better delivery decisions. When the decision and audience are explicit, you can steer toward team/system outcomes and add needed context, which reduces the risk of using metrics to judge individuals. Transparency with intent supports improvement rather than blame.
Radiating metrics is valuable when it creates shared understanding and enables improvement, but metrics get weaponized when they are used to rank people or enforce “accountability” without clear decision intent. Before choosing what to publish, first clarify what decision the metric is supposed to enable and who will consume it. With that purpose defined, you can choose appropriate team or flow measures, add context (work type, constraints, quality), and set working agreements on how the data will and won’t be used. If leadership cannot articulate the decision, any dashboard is likely to become a vanity or compliance report and increase dysfunctional behavior.
Key takeaway: align metrics to decisions and learning, not individual performance judgments.
Topic: Domain 3: Product
A product team is asked to “improve the employee onboarding experience,” but stakeholders cannot agree on what the real pain points are. The team has little direct contact with new hires, and existing feedback is mostly anecdotal.
Which practice is the best match to reduce uncertainty before committing to a solution?
Best answer: A
Explanation: When the team is unsure what problem to solve, the fastest feedback comes from directly learning about users’ needs and pain points. Design thinking techniques (empathy, define, and ideation) help create a shared problem statement and success criteria before choosing features. This reduces the risk of building the wrong solution based on assumptions.
The core decision is whether uncertainty is primarily about the problem/needs or about the solution/market response. Here, stakeholders disagree on the pain points and evidence is anecdotal, so the biggest risk is misunderstanding users. Design thinking fits this stage because it uses direct user discovery and synthesis to clarify:
- who the users are and what they actually experience during onboarding
- which pain points matter most, and why
- a shared problem statement and success criteria the stakeholders can align on
Lean startup is better once you have a clear hypothesis about a solution or value proposition and need to validate it quickly with experiments. The key takeaway: use design thinking to discover and frame the right problem; use lean startup to validate solution hypotheses.
Topic: Domain 2: Leadership
Two senior developers on an agile team repeatedly argue during planning and daily coordination about the “right” technical approach. Other team members have started taking sides, and work is stalling on a critical story needed to meet the iteration goal. As the team facilitator, what is the BEST next step to address this conflict in a timely way and prevent fragmentation and delays?
Best answer: D
Explanation: Addressing the conflict immediately with facilitated, direct dialogue prevents the team from polarizing and keeps work flowing toward the iteration goal. By surfacing underlying interests and constraints (not positions), the team can make a decision and capture agreements for how to collaborate going forward.
In agile teams, unresolved interpersonal or technical conflicts quickly become flow problems: people avoid collaboration, form sub-groups, and decisions stall. The best response is timely facilitation that brings the right people together to clarify the shared objective (the iteration goal), surface interests and constraints, and agree on a decision-making approach and next steps.
A practical facilitation sequence is:
- restate the shared objective (the iteration goal and the critical story)
- have each developer state the interests and constraints behind their position
- agree on a decision-making approach and a decision deadline
- capture working agreements for how future technical disagreements will be handled
This resolves the issue early, restores collaboration, and reduces the chance of ongoing delays versus postponing, escalating prematurely, or using indirect “keep the peace” coaching alone.
Topic: Domain 4: Delivery
Mid-sprint, the team’s Kanban board shows a sharp increase in items in “In Progress,” and several cards are flagged as blocked. Stakeholders are asking you to “fix the WIP problem” immediately.
What should you verify or ask first before deciding what action to take?
Best answer: B
Explanation: Rising WIP and blocked work are transparency signals that flow is constrained somewhere. Before choosing an intervention, first make the work visible at the right level of detail by identifying which items are blocked, how long they’ve been stuck, and the immediate causes. That information directs a focused action (e.g., swarm to unblock, adjust WIP, escalate a dependency) instead of guessing.
In an agile, transparency-first mindset, you respond to flow warnings (blocked cards, rising WIP) by getting fast clarity on the constraint before changing plans. The most useful first check is to pinpoint the blocked work and the blocker details so the team can address the real impediment rather than treating symptoms.
A practical first step is to confirm:
- which items are blocked, and for how long
- the immediate cause of each blocker (dependency, environment, unclear priority)
- whether the blocked work puts the iteration goal or a key commitment at risk
Once the constraint is understood, you can decide an immediate response such as swarming, escalating a dependency, tightening WIP, or clarifying acceptance criteria/definition of done. Prematurely changing scope, staffing, or frameworks skips the necessary diagnosis.
Topic: Domain 3: Product
An agile team is building a new self-service feature for a customer. Sprint Reviews are poorly attended by customer users, and feedback usually arrives weeks later via an account manager, leading to frequent rework.
Which TWO actions should the agile practitioner recommend to improve customer engagement and feedback timeliness? (Select TWO)
Best answers: A, C
Explanation: The core issue is a slow, indirect feedback loop caused by low customer participation and delayed, mediated input. Improve engagement by getting direct access to the right customer users/decision makers and creating frequent opportunities to review working increments. This increases transparency, accelerates learning, and reduces avoidable rework.
Insufficient customer engagement shows up as missed/low-attendance reviews, feedback arriving late through intermediaries, and rising rework because assumptions go untested. The most effective change is to shorten and strengthen the feedback loop by engaging real customer users (or empowered decision makers) directly and frequently.
Practical changes that address the problem in the scenario are:
- getting direct access to the actual customer users or an empowered decision maker, rather than routing feedback through the account manager
- adding frequent, lightweight demos of working increments so feedback arrives within days instead of weeks
Status reporting or filtering feedback through intermediaries increases distance from the customer and does not fix the delayed-learning problem.
Topic: Domain 2: Leadership
In a backlog refinement meeting, two senior engineers are stuck in a positional debate: “We must rewrite the service in a new framework” vs. “We must only patch the current code.” The discussion is getting personal and no one is exploring alternatives. As the agile facilitator, what is the BEST technique to shift the conversation into shared problem solving?
Best answer: C
Explanation: When people argue positions, the fastest path back to collaboration is to separate solutions from needs. Facilitate an interest-focused conversation that makes each person’s underlying concerns explicit, then align the group on a shared problem statement and success criteria. Once the problem is shared, the team can generate and evaluate options together.
The core move is reframing: shift from “my solution vs. your solution” to “what are we trying to achieve and what constraints are we balancing?” Ask each person to state the need behind their preferred approach (e.g., reliability, speed to market, maintainability), then capture common goals and disagreements as a neutral problem statement the whole team owns.
Practical facilitation steps:
- ask each engineer to state the need behind their preferred approach (reliability, speed to market, maintainability)
- capture the common goals and constraints as a neutral problem statement the team owns
- have the whole team generate and evaluate options against shared success criteria
This reduces defensiveness and creates a joint search for the best approach instead of a win/lose debate.
Topic: Domain 2: Leadership
A cross-functional agile team is experiencing recurring integration defects, and recent retrospectives have become circular debates with no follow-through. You are facilitating a 60-minute problem-solving session with the goal of leaving with a clear, realistic action plan the team will own.
Which facilitation approach SHOULD you AVOID?
Best answer: B
Explanation: A problem-solving session that produces a usable action plan depends on shared understanding and team ownership. Having a single leader decide and assign work turns facilitation into command-and-control, which often reduces psychological safety and buy-in. The facilitator’s job is to guide the team to agree on causes and commit to specific next steps.
The core facilitation goal is to help the team converge from a messy problem into a small set of owned, testable actions. Effective facilitation creates shared clarity (what problem are we solving and how will we know it improved) and enables broad participation so the resulting plan is credible and supported. A facilitator should guide structure and flow, not make decisions for the team.
Practical elements that help the session end with a clear action plan include:
- a concise problem statement and a visible success measure agreed up front
- structured root-cause exploration with broad participation, not just the loudest voices
- convergence to a small set of specific actions, each with an owner and a check-in point
Having a lead unilaterally decide actions is the anti-pattern because it undermines ownership and learning.
Topic: Domain 3: Product
A team has spent the last three iterations building an “Advanced Analytics” feature. In the latest review, pilot customers said it’s “nice to have,” only 5% used the current increment, and the sales team reports the top deal blocker is a simple “Export to CSV” capability. The team can only complete one of these in the next iteration.
As the agile practitioner supporting the product owner, what is the best next step to stop investing in low-value work and redirect effort?
Best answer: A
Explanation: Agile value delivery prioritizes outcomes over sunk costs. The team has clear, recent evidence that Analytics is currently low value and that CSV export is the highest-value next bet. The best next step is to adapt by stopping further Analytics investment and reordering the product backlog accordingly.
The core concept is maximizing value by making pivot/stop decisions based on validated learning, not on effort already spent. The review feedback and usage data are strong signals that continuing to invest in “Advanced Analytics” has low near-term return, while “Export to CSV” directly addresses a top customer need and business outcome.
A practical next-step sequence is:
- stop further Analytics work and park the remaining items in the backlog
- reorder the product backlog with the product owner so Export to CSV is next
- share the usage and sales evidence with stakeholders so the pivot is transparent
This redirects capacity quickly while keeping decision-making close to the product backlog rather than delaying for completion or escalating unnecessarily.
Topic: Domain 2: Leadership
An agile team has missed its iteration goal twice. Their workflow board shows growing work in progress and several stories bouncing back from testing due to rework.
In daily standups, two senior developers frequently talk over each other about “the right approach,” while the rest of the team stays quiet. After standup, decisions happen in private chats and two people sometimes implement competing solutions. The product owner says priorities have been stable.
What is the most likely underlying cause?
Best answer: B
Explanation: The symptoms point to interpersonal and decision-making breakdown more than a pure process or quality issue. The loud disagreement in public meetings, silence from others, and private side decisions indicate conflict avoidance and lack of timely facilitation. That fragmentation then manifests as duplicated work, increased WIP, and rework.
The core issue is unmanaged conflict: a visible technical disagreement is not being addressed in a shared, timely way, so it shifts into side channels and creates competing implementations. When conflict isn’t surfaced and resolved (e.g., with clear decision-making, facilitation, and working agreements), teams split into factions, collaboration drops, and delivery suffers through duplication, rework, and missed goals.
A timely approach would include surfacing the technical disagreement in a shared forum, facilitating a decision against agreed criteria, and updating working agreements so decisions are made openly rather than in side channels.
Process and quality improvements can help, but they won’t fix the fragmentation if the underlying conflict remains unresolved.
Topic: Domain 3: Product
During a backlog refinement session, the product owner proposes a story:
“As a customer support agent, I want to see a customer’s current risk score on the profile so I can prioritize my responses.”
Stakeholders add, “It should be accurate and updated in real time,” but no further details are provided. What should you ask first to help refine testable acceptance criteria?
Best answer: C
Explanation: To make acceptance criteria testable, you first clarify ambiguous terms into observable, measurable conditions. “Accurate” and “real time” are undefined and can’t be verified without thresholds. Asking for specific latency and accuracy expectations enables clear criteria and shared understanding before design or estimation discussions.
Acceptance criteria should describe verifiable conditions of satisfaction. When a backlog item includes ambiguous terms (for example, “real time,” “fast,” “user-friendly,” or “accurate”), the most useful first step is to ask for measurable thresholds and boundaries so testers and stakeholders can objectively confirm whether the story is done.
For this story, clarify items such as the maximum acceptable delay before a changed risk score appears (what “real time” means in seconds or minutes), the tolerance and source of truth that define “accurate,” and where on the profile the score is displayed.
Once these are defined, the team can express them as testable acceptance criteria and then proceed to sizing and delivery planning with less rework.
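To illustrate how agreed thresholds turn vague terms into checks a tester can run, here is a minimal Python sketch. The specific values (a 5-minute freshness window, a one-point accuracy tolerance) are hypothetical examples of what stakeholders might agree to, not requirements from the story:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds agreed during refinement:
# "real time" = score refreshed within the last 5 minutes,
# "accurate"  = displayed score within +/-1 point of the risk engine's value.
MAX_STALENESS = timedelta(minutes=5)
MAX_SCORE_DRIFT = 1

def score_is_fresh(last_updated: datetime, now: datetime) -> bool:
    """Testable version of 'updated in real time'."""
    return now - last_updated <= MAX_STALENESS

def score_is_accurate(displayed: int, source_of_truth: int) -> bool:
    """Testable version of 'accurate'."""
    return abs(displayed - source_of_truth) <= MAX_SCORE_DRIFT

# Acceptance checks a tester could run against the profile page:
now = datetime(2026, 5, 5, 12, 0, tzinfo=timezone.utc)
assert score_is_fresh(now - timedelta(minutes=3), now)       # inside the window
assert not score_is_fresh(now - timedelta(minutes=10), now)  # too stale
assert score_is_accurate(72, 71)                             # within tolerance
```

The point is not the particular numbers but that each vague adjective becomes an observable condition with a pass/fail answer.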
Topic: Domain 4: Delivery
A product team has missed the last two iteration goals. Their board shows more items “in progress” each day, and many stories return from testing with rework. Developers want time to refactor and add automated tests, but the product owner keeps prioritizing only new features because “customers can’t see technical work.” The team’s definition of done is “coded and demoed,” with testing and cleanup often deferred.
What is the most likely underlying cause?
Best answer: C
Explanation: The symptoms point to unsustainable delivery: work is started faster than it can be finished, and “done” allows deferring tests and cleanup. When maintainability is excluded from backlog decisions and the definition of done, technical debt grows and shows up as defects, rework, and missed iteration goals. The conflict between the developers and the product owner reflects a misaligned definition of value around product health.
This situation reflects a short-term feature-output bias that underfunds long-term product health. A weak definition of done (“coded and demoed”) and a prioritization approach that rejects refactoring/testing create a system where hidden work accumulates as technical debt and returns later as defects and rework. That rework increases unfinished work on the board, reduces throughput, and makes iteration goals unreliable.
To balance near-term delivery with maintainability, teams typically strengthen the definition of done to include testing and cleanup, make refactoring and debt reduction visible in the backlog, and frame that work in terms of outcomes the product owner values, such as fewer defects and more predictable delivery.
Flow controls or better estimation can help, but they won’t resolve the root cause if the product is allowed to ship with deferred quality work.
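A strengthened definition of done acts like a gate: every agreed quality check must pass before a story counts as done. The sketch below illustrates the idea in Python; the checklist items are assumptions chosen to contrast with the team’s weak “coded and demoed” bar, not a prescribed standard:

```python
# Illustrative "definition of done" gate. The checklist items are
# assumptions for this example; each team agrees on its own list.
DEFINITION_OF_DONE = ("coded", "code_reviewed", "tests_automated", "demoed")

def is_done(story: dict) -> bool:
    """A story counts as done only when every agreed quality check passes."""
    return all(story.get(check, False) for check in DEFINITION_OF_DONE)

weak = {"coded": True, "demoed": True}  # the team's old bar: "coded and demoed"
strong = {"coded": True, "code_reviewed": True,
          "tests_automated": True, "demoed": True}

assert not is_done(weak)    # deferred testing now blocks "done"
assert is_done(strong)
```

Making the gate explicit turns “deferred quality work” from an invisible habit into a visible, negotiable decision.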
Topic: Domain 4: Delivery
Mid-sprint, a stakeholder reviews the latest increment and says, “This isn’t what we wanted; redo it.” The team’s story is marked “Done” on the board, but the feedback is vague and opinions differ about what “done” means.
What should the agile practitioner ask or verify FIRST to anchor the feedback and reduce ambiguity?
Best answer: D
Explanation: The fastest way to turn subjective feedback into actionable decisions is to compare the increment to the team’s explicit, pre-agreed expectations. Verifying the story’s acceptance criteria and the team’s Definition of Done establishes whether the work actually met “done” and what gap (if any) exists. That shared baseline reduces debate and prevents rework driven by shifting interpretation.
When a stakeholder says “not what we wanted,” the immediate risk is ambiguous feedback leading to unplanned churn. In agile delivery, acceptance criteria define what value and behavior the story must satisfy, and the Definition of Done defines the quality/completeness bar for calling work “done.” Verifying what was agreed (and whether it was met) creates a clear reference point to judge whether the work met the agreed bar, distinguish a defect from a new requirement, and decide what, if anything, needs to change.
Only after anchoring the conversation in acceptance criteria and DoD should you discuss schedule trade-offs or process improvements. The key takeaway is to make feedback objective and testable before negotiating scope, timing, or commitments.
Topic: Domain 1: Mindset
Midway through an iteration, a team demos a thin slice of a new workflow to users and learns that a key assumption is wrong. The product owner wants to immediately swap in new high-value stories, but the team worries this will obscure the iteration goal and cause thrashing.
Which agile principle or practice best fits how the team should respond while keeping goals clear?
Best answer: D
Explanation: The situation calls for responding to feedback without losing alignment. Updating priorities using the new information is appropriate, but it should be done by reaffirming or adjusting the iteration goal so the team has a stable north star. This balances adaptability with clarity and reduces churn.
Agile teams embrace change by using fast feedback to adapt plans and priorities, but they do so in a way that preserves transparency and alignment. When new learning invalidates assumptions, the right response is to re-check the goal (the “why” for the near term) and then re-order work to maximize value against that goal. If the goal truly changes, make that explicit and re-plan accordingly; if it stands, use it as a filter to decide what to swap in or defer. The key is intentional re-planning based on evidence, not rigidly freezing a plan or constantly churning without a stable objective.
A good heuristic is: validate learning → confirm/adjust goal → re-prioritize → communicate impact.
Topic: Domain 3: Product
A platform team handles a steady stream of small enhancements and production support requests. Work items vary in size, priorities can change daily, and the team does not plan in fixed-length iterations. Stakeholders want a simple way to validate delivery progress and quickly see where work is getting stuck.
Which visualization artifact best fits this workflow and validates progress?
Best answer: D
Explanation: For continuous, unplanned incoming work, progress is best validated by visualizing flow through defined states and making constraints explicit. A Kanban board with WIP limits shows what is in progress, what is blocked, and where queues are forming, enabling fast conversations about stuck work. This directly supports daily reprioritization without relying on timeboxed plans.
The core decision is whether the team’s workflow is timeboxed (iteration-based) or continuous flow. When priorities shift daily and work arrives unpredictably, a pull-based flow system is typically the best fit. A Kanban board makes progress visible by showing items moving through explicit states (for example, Ready, In Progress, Review, Done) and by using WIP limits to surface bottlenecks early. The board highlights queues, blocked work, and throughput over time, giving stakeholders strong evidence of progress without requiring a fixed sprint commitment. The closest alternative, an iteration task board, is a better fit when work is planned and tracked inside a timebox.
Use this map after the sample questions to connect individual items to agile mindset, value delivery, stakeholder collaboration, team performance, adaptive planning, problem detection, and continuous improvement.
```mermaid
flowchart LR
    S1["Agile project or team scenario"] --> S2
    S2["Clarify value, user, and team context"] --> S3
    S3["Inspect backlog, flow, risk, and feedback"] --> S4
    S4["Choose facilitation, coaching, or adaptation"] --> S5
    S5["Deliver increment and learn"] --> S6
    S6["Improve team system and outcomes"]
```
| Cue | What to remember |
|---|---|
| Mindset | Favor transparency, inspection, adaptation, collaboration, and incremental value. |
| Backlog | Refinement, ordering, slicing, and acceptance criteria guide delivery. |
| Team performance | Servant leadership, facilitation, impediment removal, and working agreements matter. |
| Feedback | Reviews, retrospectives, metrics, and user input guide adaptation. |
| Value | Agile answers prioritize validated value over completing low-value work. |
Use these child pages when you want focused PM Mastery practice before returning to mixed sets and timed mocks.