This free full-length PMI-ACP practice exam includes 120 original PM Mastery questions across the exam domains, with answers and explanations. Work through the set here, then continue your practice in PM Mastery.
The questions are original PM Mastery practice questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.
Count note: this page uses a 120-question full-length practice format for PMI-ACP preparation. Always confirm final exam-day timing, appointment rules, and candidate instructions directly with PMI before your scheduled exam.
Set a 180-minute timer and answer the 120 questions before reading explanations. Track misses by agile decision area: mindset, leadership, product, delivery, or quality/flow.
Use this page as a diagnostic run, not as the only measure of readiness. The most useful result is not just the percentage score; it is the pattern behind the misses.
| Result pattern | What it usually means | Next step |
|---|---|---|
| Strong score, scattered misses | Broad readiness is likely close. | Review explanations, confirm timing, and avoid over-repeating recognized items. |
| Strong score but repeated misses in one domain | The total score may hide a domain weakness. | Drill that domain before another full-length run. |
| Many mindset misses | Core agile values and principles are the gap. | Review agile values, psychological safety, empiricism, collaboration, and adapting to change. |
| Many leadership or product misses | People and value-focus skills are the gap. | Focus on facilitation, stakeholder alignment, product discovery, value, and feedback loops. |
| Many delivery misses | Flow and quality practices are the gap. | Review flow, WIP, quality, Definition of Done, release decisions, and continuous improvement. |
Use this worksheet immediately after the run, before you read too many explanations.
| Field | Record |
|---|---|
| Overall score | ___ / 120 questions |
| Timing result | Finished early / on time / rushed late |
| Highest-miss domain | Mindset / Leadership / Product / Delivery |
| Most expensive mistake type | Command-and-control choice / weak feedback loop / ignored value signal / poor flow decision / missed quality signal / other: ___ |
| Next focused page | Mindset / Leadership / Product / Delivery / another full mixed set |
For concept review before or after this set, use the PMI-ACP guide on PMExams.com.
This static page is useful for one full diagnostic pass. PM Mastery is the better place for repeated practice because it gives you varied attempts and progress history instead of one page you can memorize.
| Need after this diagnostic | Use PM Mastery for… |
|---|---|
| New mixed attempts | Timed mocks and mixed sets that reduce answer-recognition bias. |
| Domain repair | Focused mindset, leadership, product, and delivery drills. |
| Explanation review | Item-level explanations that help you classify mistake patterns. |
| Progress tracking | A single web/mobile account with practice history across sessions. |
| Final readiness checks | Varied timed attempts after weak domains have been repaired. |
For the cleanest diagnostic result, answer the questions under timed conditions before reading the explanations.
| Checkpoint | Approximate time budget | What to do |
|---|---|---|
| Questions 1-40 | 60 minutes | Build a steady rhythm and mark uncertain agile-decision items. |
| Questions 41-80 | 120 minutes cumulative | Watch for fatigue in leadership, product, and delivery trade-offs. |
| Questions 81-120 | 180 minutes cumulative | Finish with enough time to resolve marked items deliberately. |
If you retake this free diagnostic, treat the second attempt as a reasoning check, not as a fresh score. Some stems and answers will be familiar, so the percentage can overstate readiness.
For readiness decisions, give more weight to varied timed attempts in PM Mastery than to repeating one static page. Use this page to diagnose; use the app to build durable speed, coverage, and mixed agile judgment.
| Item | Detail |
|---|---|
| Issuer | PMI |
| Exam route | PMI-ACP |
| Official exam name | PMI Agile Certified Practitioner (PMI-ACP) |
| Full-length set on this page | 120 questions |
| Exam time | 180 minutes |
| Topic areas represented | 4 |
| Topic | Approximate official weight | Questions used |
|---|---|---|
| Mindset | 28% | 34 |
| Leadership | 25% | 30 |
| Product | 19% | 23 |
| Delivery | 28% | 33 |
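The per-topic counts above are consistent with applying the stated weights to 120 questions and rounding so the total still sums exactly to 120. As an illustration (the largest-remainder rounding here is an assumption about how such counts can be derived, not a statement of how this page was built):

```python
import math

# Approximate official topic weights and the 120-question page total.
weights = {"Mindset": 0.28, "Leadership": 0.25, "Product": 0.19, "Delivery": 0.28}

def apportion(weights, total):
    """Largest-remainder rounding so per-topic counts sum exactly to the total."""
    raw = {t: w * total for t, w in weights.items()}
    counts = {t: math.floor(v) for t, v in raw.items()}
    leftover = total - sum(counts.values())
    # Give the remaining questions to the topics with the largest fractional parts.
    for t in sorted(raw, key=lambda t: raw[t] - counts[t], reverse=True)[:leftover]:
        counts[t] += 1
    return counts

print(apportion(weights, 120))
# {'Mindset': 34, 'Leadership': 30, 'Product': 23, 'Delivery': 33}
```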
Topic: Leadership
Midway through a product rebuild, the agile team is optimizing for reducing support tickets and improving stability, while senior stakeholders keep asking in reviews, “When will feature X ship?” and escalate when it isn’t on the next iteration plan. Both groups believe they are “prioritizing what matters.”
What is the best next step to correct this misalignment?
Best answer: D
What this tests: Leadership
Explanation: This situation signals a shared-vision gap: stakeholders and the team are optimizing for different definitions of value. The most effective corrective action is to bring both groups together to co-create a clear product goal and measurable outcomes, then use that agreement to drive backlog and roadmap decisions.
When stakeholders and teams talk past each other about “priority,” the root cause is often misalignment on purpose and how value will be measured. A leader should facilitate an explicit alignment conversation that produces a shared product goal (why), success measures (how we’ll know), and a transparent prioritization approach (what next). That shared direction lets the team balance feature delivery and stability work with stakeholder expectations, and it creates a reference point for trade-offs when new requests arrive.
Practical outputs can include a shared product goal statement, a small set of agreed success measures, and a transparent prioritization approach that both groups can reference when new requests arrive.
The key is co-creation and agreement, not pushing process controls or local optimizations.
A joint vision/outcomes alignment session creates a single definition of value that can drive transparent prioritization and expectations.
Topic: Leadership
A cross-functional team is delivering an internal onboarding platform. The sponsor’s stated objective is to reduce new-hire time-to-productivity, but in recent reviews the team mostly reports story completion and velocity. After three iterations, adoption is up but time-to-productivity has not improved.
As the agile practitioner, which TWO actions best use objectives and success metrics to refocus on outcomes? (Select TWO)
Correct answers: A, B
What this tests: Leadership
Explanation: The team is optimizing for output signals, while the objective is an outcome change. Re-aligning on the objective and defining clear success metrics makes the intended impact explicit and measurable. Using those metrics in regular inspect-and-adapt events keeps prioritization and feedback centered on outcomes rather than throughput.
Outcome focus means turning a stated objective into shared, measurable success criteria and then using those measures to steer delivery. In this scenario, adoption increased but the target outcome (time-to-productivity) did not, which signals that the team may be building the wrong things or missing key impediments.
Two effective actions are reconfirming the objective with the sponsor and agreeing on measurable outcome metrics (such as time-to-productivity), and then inspecting those metrics at regular review and retrospective events to steer backlog decisions.
Optimizing for velocity, freezing scope, or adding heavier controls can increase output but still miss the intended outcome.
Reconfirming the objective and agreeing on outcome metrics makes “success” observable and guides decisions.
Inspecting agreed outcome metrics regularly keeps work focused on impact, not just output.
Topic: Delivery
A product team has frequent delays because work is handed off between UI, API, and database specialists, and stories often sit in a “Blocked—dependency” state waiting on another group. The team reorganizes to deliver vertical slices, swarms to finish work, and brings needed skills into the team.
After four weeks, which evidence best validates that reducing handoffs and dependencies is increasing throughput?
Best answer: A
What this tests: Delivery
Explanation: Flow improvements from fewer handoffs and dependencies should show up as faster, smoother movement of work through the system and a higher completion rate. A cumulative flow diagram provides system-level evidence by visualizing WIP buildup, waiting/blocked states, and the rate of items reaching done over time. This directly validates whether throughput and flow are improving, not just whether activities occurred.
To validate progress on reducing handoffs and dependencies, look for evidence in end-to-end flow outcomes: less time waiting on others, less WIP aging in blocked states, and a steadier (or higher) rate of items completing. A cumulative flow diagram (CFD) is strong evidence because it makes queues visible; when dependencies are reduced, the “Blocked/Waiting” band should shrink and the “Done” band should grow more steeply over time (higher throughput), without uncontrolled WIP growth.
Activity outputs (training, checklists) can support the change but don’t prove that work is actually flowing faster through the delivery system.
A cumulative flow diagram can show reduced waiting/blocked work and increased completion rate, directly reflecting improved flow from fewer handoffs/dependencies.
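The signals described above can be read off a cumulative flow diagram programmatically. A minimal Python sketch, using hypothetical daily snapshots of the blocked count and cumulative done count:

```python
# Hypothetical snapshots: (day, blocked_count, done_cumulative).
snapshots = [
    (1, 5, 2),
    (7, 5, 5),
    (14, 3, 11),
    (21, 2, 19),
    (28, 1, 28),
]

def flow_signals(snapshots):
    """Change in the blocked band and average weekly throughput over the window."""
    first_day, first_blocked, first_done = snapshots[0]
    last_day, last_blocked, last_done = snapshots[-1]
    blocked_change = last_blocked - first_blocked            # negative = shrinking queue
    weeks = (last_day - first_day) / 7
    throughput_per_week = (last_done - first_done) / weeks   # items reaching Done per week
    return blocked_change, throughput_per_week

blocked_change, throughput = flow_signals(snapshots)
print(blocked_change, round(throughput, 1))  # -4 6.7
```

A shrinking blocked band together with rising throughput is exactly the CFD evidence the question is looking for.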
Topic: Leadership
During a sprint planning meeting, the product owner repeatedly says the team is “too slow.” Developers become quiet, avoid eye contact, and give short answers. You notice tension rising and the conversation turning defensive.
As the agile practitioner facilitating, which TWO actions best apply emotional intelligence to recognize the team’s emotions and adjust communication? (Select TWO)
Correct answers: D, F
What this tests: Leadership
Explanation: Emotional intelligence starts with noticing and accurately naming emotional signals, then responding in a way that improves safety and understanding. Pausing to reflect observations and inviting concerns directly addresses the rising defensiveness. Active listening with neutral summaries adapts communication to reduce heat and move the group back to collaborative planning.
In a tense planning conversation, emotional intelligence means recognizing emotional cues (silence, avoidance, defensiveness) and adjusting your facilitation to restore psychological safety and clarity. A simple, effective response is to pause the meeting, share a neutral observation about what you’re noticing, and ask open-ended questions so people can voice concerns without blame. Then use active listening techniques—paraphrasing, validating impact, and summarizing in neutral language—to slow the pace, reduce misinterpretation, and confirm shared understanding.
These actions help the team re-engage and make better planning decisions because the conversation shifts from judgment to problem-solving. Tactics that suppress emotion, force a quick decision, or jump to escalation typically increase defensiveness and reduce openness.
Naming observable cues and inviting input acknowledges emotions and creates space for the team to speak safely.
Paraphrasing, checking understanding, and summarizing agreements adjusts communication to reduce defensiveness and restore collaboration.
Topic: Delivery
Mid-iteration, an external vendor informs the team that the API they integrate with will likely change within the next two weeks. This dependency is on the critical path for a release in three weeks, and the team already has several stories in progress.
What should the agile practitioner do next to best optimize value and flow while managing this newly updated risk?
Best answer: D
What this tests: Delivery
Explanation: New dependency information changes the team’s risk profile, so the best response is to make the risk explicit and adjust work immediately. A timeboxed spike plus pulling in decoupling work reduces uncertainty and preserves delivery flow without overreacting. This approach also enables ongoing monitoring as new information emerges.
In agile delivery, monitoring risks and impediments is continuous, and actions should change when new information increases likelihood/impact. Here, the vendor’s update creates near-term uncertainty on a critical dependency, so the team should quickly inspect-and-adapt the plan: make the risk transparent, learn fast with a timebox, and mitigate by reducing coupling so other work can continue.
A practical response is to make the risk visible to the team and stakeholders, run a short timeboxed spike to learn what the API change will require, and pull forward decoupling work so dependent stories can continue while the uncertainty resolves.
This protects value and flow better than delaying all work or adding broad, low-signal work that doesn’t reduce the uncertainty.
It makes the risk visible and immediately adapts the plan with the smallest timeboxed investment to reduce dependency impact and protect flow.
Topic: Product
A product team is building an internal platform. Executives keep asking the product owner for a weekly written status report.
Constraints: the team has agreed to minimize interrupt-driven status work and protect its WIP limits, stakeholders want progress toward the release goal plus visibility into blocked work, and an electronic board already exists.
What is the BEST next action?
Best answer: C
What this tests: Product
Explanation: Use information radiators to communicate progress through visual signals that are easy to consume and hard to misinterpret. Making the existing board reliably updated and accessible, then adding a simple burnup view, gives stakeholders progress toward the release goal and highlights blocked work. This satisfies transparency needs while respecting the team’s agreement to avoid interrupt-driven status work.
The core concept is replacing status-report “push” communication with information radiators—visual, always-available signals of progress and flow. In this scenario, stakeholders want release progress and visibility into stuck work, and the team wants to minimize interruptions and protect WIP limits.
A good next step is to make the electronic board trustworthy and stakeholder-visible (clear policies for updating, explicit blocked indicators), then add a lightweight visual summary such as a burnup (progress toward the release goal) derived from the board. This lets stakeholders self-serve the current state and enables better conversations focused on unblocking and outcomes rather than narrative reporting. The key is improving the quality and accessibility of the visual signals before adding more reporting ceremonies.
A visible, consistently updated board plus a burnup provides at-a-glance progress and flow signals without interrupting the team.
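The burnup view described above is simple to derive from board data. A minimal sketch with hypothetical weekly snapshots (total scope can grow, which is why a burnup is preferred over a burndown here):

```python
# Hypothetical weekly snapshots from the board: (total scope, done), in items.
snapshots = [(40, 5), (42, 12), (42, 20), (45, 29)]

def burnup_rows(snapshots):
    """Rows for a burnup view: scope, done, and remaining toward the release goal."""
    return [(scope, done, scope - done) for scope, done in snapshots]

for scope, done, remaining in burnup_rows(snapshots):
    print(f"scope={scope} done={done} remaining={remaining}")
```

Because the scope line is plotted separately from the done line, stakeholders can see both progress and scope growth without asking the team for a narrative report.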
Topic: Product
A product team uses a Kanban board with explicit WIP limits agreed in their working agreement. This week, stakeholders are pressing to start a new high-value feature, but the board shows work piling up in Code Review (WIP limit 3; currently 6). Test is at its WIP limit and the team has recently found defects escaping to production.
What is the BEST next action to improve flow and expose the bottleneck while protecting quality?
Best answer: D
What this tests: Product
Explanation: A WIP-limit breach signals a flow constraint at that step, so the best move is to stop starting new work and focus the team on finishing and moving items through the constrained column. Swarming on the overloaded Code Review stage reduces queues, shortens cycle time, and makes the bottleneck and its causes easier to identify. This also supports quality by keeping the team focused on completing work through the quality gates rather than increasing in-flight inventory.
WIP limits are a pull-system control: when a column exceeds its limit, the team should treat it as a signal to stop starting and instead help the constrained activity. With Code Review at 6 against a limit of 3, the immediate bottleneck is review capacity/throughput, which is also likely contributing to downstream testing pressure and escaped defects.
A practical next step is to stop pulling new work into the system, swarm on the Code Review queue until it is back under its limit of 3, and then examine why review capacity became the constraint before committing to the new feature.
Raising limits or starting more work increases queues and delays feedback, masking the bottleneck and often worsening quality.
Enforcing WIP limits by stopping starts and collaborating to reduce the overloaded step improves flow and makes the bottleneck visible without trading off quality.
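The breach signal in this scenario is mechanical: any column over its limit is a stop-starting trigger. A minimal sketch, using a hypothetical board structure:

```python
# Hypothetical board state: column -> (current item count, WIP limit or None).
board = {
    "In Progress": (4, 5),
    "Code Review": (6, 3),
    "Test":        (3, 3),
}

def wip_breaches(board):
    """Columns over their limit are the signal to stop starting and swarm."""
    return [col for col, (count, limit) in board.items()
            if limit is not None and count > limit]

print(wip_breaches(board))  # ['Code Review']
```

Note that Test at 3/3 is at its limit but not over it; only Code Review is flagged, which matches where the team should swarm.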
Topic: Leadership
You are coaching an agile team that is missing its iteration goal. Review the following artifact.
Exhibit: Board snapshot (today)
WIP limit (In Progress): 5
In Progress (5): all assigned to Alex
Code Review (4): 3 cards tagged "Waiting for Alex"
Done (today): 1 card (Alex)
To Do (3): 2 cards unassigned
Team comment: "Alex is the go-to to save the sprint"
What is the best next action to promote collective ownership of outcomes?
Best answer: B
What this tests: Leadership
Explanation: The exhibit shows flow constrained by one person: all WIP is with Alex and reviews are waiting on Alex. The best action is to facilitate team-level ownership by swarming on the highest-value items and sharing review/testing responsibilities so progress is not dependent on a single “hero.”
Collective ownership means the team owns the outcome (meeting the iteration goal with quality), not an individual owning tasks or being the single path to “done.” The board shows an explicit bottleneck: work and reviews are queued behind Alex, while other work remains unassigned. The best next action is to coach the team to change how they collaborate so finishing work becomes a shared activity.
This builds resilience and throughput, whereas relying on a hero reinforces dependency and recurring delays.
It removes the Alex bottleneck by shifting work completion and quality activities to the whole team.
Topic: Product
During backlog refinement, the product owner asks the team to “get sizes on the top items” so they can plan the next iteration. You notice the team is using inconsistent approaches.
Exhibit: Backlog sizing notes (excerpt)
S-21 Reset password email Dev1: 3 pts | Dev2: 8 pts | QA: Small
S-22 Audit log export Dev1: 5 pts | Dev2: Medium
S-23 Migrate auth DB Dev1: Large | Dev2: 13 pts
S-24 New user onboarding Dev1: 2 days
What is the best next action?
Best answer: B
What this tests: Product
Explanation: The sizing notes show inconsistent units (points, t-shirt sizes, days) and disagreement on the same item. Relative estimation works best when the whole team aligns on a common scale anchored by reference stories. Using a collaborative method like planning poker helps the team quickly converge and produce comparable sizes across items.
Relative estimation is a team activity: the goal is consistent sizing across backlog items, not “perfect” precision. The exhibit shows three common breakdowns: mixed units (points, t-shirt sizes, time), lack of a shared baseline (no reference story), and high spread on the same story (3 vs 8). The best next step is to align on one relative approach and calibrate it.
A practical way to do this in refinement is to agree on a single relative scale (such as story points), calibrate it against one or two well-understood reference stories, and then size the top items together using planning poker so that divergent estimates trigger discussion rather than averaging.
This preserves collaborative ownership of estimates and makes the sizes comparable for planning and forecasting.
The exhibit shows mixed units and wide variance, so the team needs a shared relative baseline and a collaborative technique to converge on consistent sizes.
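Once the team is on one scale, wide spreads become easy to detect automatically. A minimal sketch, with hypothetical votes and a hypothetical spread threshold:

```python
# Hypothetical per-developer estimates after the team agreed on one point scale.
estimates = {
    "S-21 Reset password email": [3, 8, 3],
    "S-22 Audit log export": [5, 5, 8],
    "S-23 Migrate auth DB": [13, 13, 13],
}

def needs_discussion(estimates, max_spread=2.0):
    """Flag items whose high/low estimate ratio suggests a shared-understanding gap."""
    return [story for story, votes in estimates.items()
            if max(votes) / min(votes) > max_spread]

print(needs_discussion(estimates))  # ['S-21 Reset password email']
```

In planning poker this is exactly the moment the highest and lowest estimators explain their reasoning before the team re-votes.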
Topic: Leadership
A product team has been “doing agile” for 3 months: daily standups, a visible Kanban board, and newly introduced WIP limits to reduce long delays. Leadership wants to assess whether flow has actually improved enough to justify the next investment (automating deployment for more frequent releases).
Which metric/evidence would BEST validate the team’s progress and readiness for that next improvement?
Best answer: C
What this tests: Leadership
Explanation: To assess agile adoption maturity and pick the next improvement, the best evidence is outcome-based and tied to the constraint you’re addressing. Since the recent change targeted flow via WIP limits, cycle time (and its variability) is the most direct validation of real, sustained flow improvement and readiness to invest in faster release capability.
A maturity check should validate outcomes, not just that new practices exist. In this scenario, the team’s improvement goal is better flow (shorter, more predictable time to finish work) after introducing WIP limits. The most useful evidence is therefore a flow metric that reflects the end-to-end experience of getting work to “done” and whether the system is stabilizing.
A cycle time control chart (or equivalent cycle time trend/distribution) helps you see whether items are finishing faster, whether variability is narrowing, and whether the improvement has held since the WIP limits were introduced.
If flow becomes faster and more predictable, investing next in deployment automation is better justified because the delivery system can take advantage of more frequent releases rather than amplifying existing bottlenecks.
Cycle time and its variability directly demonstrate whether WIP limits improved flow and predictability enough to support more frequent releases.
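The two signals a control chart surfaces, central tendency and spread, can be sketched directly from cycle-time data. A minimal example with hypothetical before/after samples:

```python
from statistics import mean, pstdev

# Hypothetical cycle times (days) before and after introducing WIP limits.
before = [4, 12, 3, 15, 6, 14, 5, 13]
after  = [5, 6, 4, 7, 5, 6, 5, 6]

def summarize(cycle_times):
    """Average and spread; a control chart plots items against these as bands."""
    return round(mean(cycle_times), 1), round(pstdev(cycle_times), 1)

print("before:", summarize(before))  # before: (9.0, 4.6)
print("after:", summarize(after))    # after: (5.5, 0.9)
```

A drop in both the average and the standard deviation, as in this hypothetical data, is the kind of evidence that justifies investing in deployment automation next.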
Topic: Product
A product owner for an internal billing platform has 15 backlog items and three stakeholder groups pushing different “top priorities.” Several items have clear technical dependencies, and the team also wants to surface delivery risk (unknowns) alongside customer value. The product owner wants a transparent way to align stakeholders on what to do next.
Which practice is the BEST match?
Best answer: D
What this tests: Product
Explanation: A weighted scoring/relative ranking session makes prioritization explicit and collaborative by using agreed criteria. Because the decision must consider customer value, delivery risk, and item dependencies, scoring and then ranking items provides a defendable order and enables trade-offs to be discussed transparently. It also supports revisiting priorities as new information emerges.
When stakeholders disagree on priorities, a lightweight, transparent prioritization technique helps create shared understanding and commitment. A weighted scoring model (often followed by relative ranking) lets stakeholders agree on a small set of criteria—here, customer value, delivery risk/uncertainty, and dependencies—apply consistent scoring, and then order items based on the results while honoring prerequisite work. This approach both reveals why an item is “higher” and makes dependency and risk trade-offs visible (e.g., doing enabling work earlier to unlock high-value items or reducing risk before a major release). The key is facilitating stakeholder agreement on criteria and weights, not relying on the product owner’s opinion alone.
It explicitly combines customer value, risk, and dependency constraints to produce an agreed, ordered backlog.
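The mechanics of a weighted scoring session are simple enough to sketch. All item names, criteria, weights, and scores below are hypothetical; the point is that the order falls out of agreed criteria rather than one person's opinion:

```python
# Hypothetical criteria weights agreed with the three stakeholder groups.
weights = {"value": 0.5, "risk_reduction": 0.3, "unblocks_others": 0.2}

# Hypothetical 1-5 scores per backlog item on each criterion.
items = {
    "Invoice export":  {"value": 5, "risk_reduction": 2, "unblocks_others": 1},
    "Auth refactor":   {"value": 2, "risk_reduction": 5, "unblocks_others": 5},
    "Usage dashboard": {"value": 4, "risk_reduction": 1, "unblocks_others": 2},
}

def weighted_rank(items, weights):
    """Score each item against the agreed criteria and return a ranked order."""
    scored = {
        name: sum(weights[c] * s for c, s in scores.items())
        for name, scores in items.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in weighted_rank(items, weights):
    print(f"{name}: {score:.1f}")
```

Note how weighting risk reduction and dependency-unblocking lifts the enabling "Auth refactor" above the highest-raw-value item, making the trade-off visible and defendable.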
Topic: Delivery
A product manager asks the team to “commit now” to delivering a fixed set of features in 4 sprints to meet a marketing date. The team’s last five sprint velocities varied from 25 to 35 points, and several backlog items are still being refined.
As the agile practitioner, what should you do to best balance transparency, flow, and realistic planning while using metrics appropriately?
Best answer: D
What this tests: Delivery
Explanation: Velocity and throughput are historical signals that support forecasting, not promises. With visible variability and ongoing refinement, the best approach is to communicate a forecast range with explicit assumptions and update it as the backlog and delivery data evolve. This preserves trust while protecting flow and quality.
Use velocity/throughput trends to create a transparent forecast, then keep it adaptive as uncertainty resolves. In this situation, the variation (25–35) and incomplete refinement make an exact scope commitment unreliable and likely to drive wasteful behavior.
A practical approach is to present a range-based forecast built from the observed velocity range (25-35 points per sprint), state the assumptions behind it, and commit to re-forecasting as refinement completes and new sprint data arrives.
This optimizes decision-making and flow without incentivizing metric-gaming or quality erosion.
A range-based forecast uses velocity trends as probabilistic input and avoids turning historical metrics into a commitment.
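A simple version of that range forecast can be computed directly from the velocity history. The velocities below are hypothetical values consistent with the 25-35 spread in the scenario:

```python
# Hypothetical last five sprint velocities (story points).
velocities = [25, 30, 35, 28, 32]

def forecast_range(velocities, sprints):
    """Low/high scope forecast from the observed velocity range, not a commitment."""
    return min(velocities) * sprints, max(velocities) * sprints

low, high = forecast_range(velocities, sprints=4)
print(f"Likely range over 4 sprints: {low}-{high} points")  # 100-140 points
```

Presenting 100-140 points as a range, rather than promising a fixed scope, keeps the conversation honest about variability while still giving the product manager something to plan around.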
Topic: Mindset
A cross-functional team is delivering an internal platform. Stakeholders say “surprises keep happening late,” and the team notes recurring defects and rework, but improvements are inconsistent. The agile practitioner wants a lightweight feedback loop that turns observations into actionable change without adding heavy overhead.
Which approach should the agile practitioner NOT recommend?
Best answer: B
What this tests: Mindset
Explanation: A lightweight feedback loop is fast, repeatable, and closes the loop by translating observations into an owned action and follow-up. Retrospectives, small metric reviews, and regular demos enable quick learning and adaptation. A quarterly audit with sign-offs creates a slow, heavyweight cycle that reduces responsiveness and team ownership.
To build transparency and enable continuous improvement, the feedback loop should be short, visible, and action-oriented: observe (data and stakeholder input), interpret together, decide one small improvement, and verify the impact soon. Practices like brief retrospectives with a small number of experiments, lightweight reviews of a few flow/quality indicators, and frequent demos all keep the loop tight and help the team turn insights into changes.
A heavy, infrequent audit process with leadership approvals stretches the cycle time of learning, discourages experimentation, and makes it harder for the team to adapt based on current evidence.
It delays learning and action with a heavyweight, approval-driven cycle rather than rapid, team-owned improvement.
Topic: Mindset
Your team says they are “agile,” but delivery is slowing and blame is increasing. Review the exhibit.
Kanban snapshot (today)
WIP limits: Dev=6, QA=3, Ready=∞
In Dev: 2
Dev Done (awaiting QA): 7
In QA: 3
Blocked (needs dev fixes): 4
Lead time trend: 6d → 11d (past 3 weeks)
Working agreement: "Developers finish a story when code is merged."
What is the best next action to correct what the exhibit suggests?
Best answer: C
What this tests: Mindset
Explanation: The board shows work piling up after development and bouncing back as fixes, while lead time worsens. The working agreement defines “done” as a developer activity, signaling role confusion and a handoff-driven flow. The best correction is to reset expectations so the whole team owns completing items to a single Definition of Done.
The core issue is collaboration breakdown caused by role-based handoffs: items are being labeled “done” at code merge, then queued for QA and frequently returned as defects, which increases waiting and rework (7 awaiting QA, 4 blocked, lead time rising). In an agile, cross-functional team, “done” should mean potentially shippable, not “done with my part.”
A practical correction is to re-align on a shared Definition of Done and working agreements so developers and testers collaborate to finish work end-to-end (e.g., pairing, swarming to pull items out of QA/blocked before starting new work). This reduces queues and finger-pointing by making completion a team responsibility.
The key takeaway is to remove the handoff gate by clarifying shared ownership and finishing flow.
The exhibit shows a role-based handoff (“done” at merge) creating queues, rework, and longer lead time, so the team needs shared ownership to finish work end-to-end.
Topic: Product
A product owner for an internal analytics platform reports “great progress” based on velocity and number of stories closed. However, customer support tickets show users still export data to spreadsheets, and leadership wants evidence that the last release improved customer outcomes.
Which metric is the BEST match to demonstrate customer value from the release?
Best answer: A
What this tests: Product
Explanation: To reflect customer value, use an outcome/adoption metric that indicates users are actually benefiting from the released capability. Tracking weekly active users of the new dashboard workflow directly measures whether behavior changed in the intended direction. Activity and internal quality metrics can be useful, but they don’t demonstrate customer outcomes by themselves.
Value-delivery metrics should connect delivered work to customer outcomes (e.g., adoption, retention, task success, reduced time-to-value, reduced support contacts). In this scenario, leadership is asking whether the release changed user behavior away from spreadsheets and toward the intended workflow. A metric that measures real usage of the new capability by the target users is a strong proxy for customer value because it reflects whether the increment is being adopted.
Internal activity metrics (velocity, stories completed) measure output, not value, and can improve even when customers see no benefit. Quality/process metrics (like test execution) are important enablers but still don’t prove the release improved outcomes; pair them with adoption/outcome measures when reporting value.
Adoption/usage of the specific capability shows whether the release is creating real customer value.
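Weekly active users of the new capability is straightforward to compute from an event log. A minimal sketch with hypothetical user IDs and dates:

```python
from datetime import date

# Hypothetical event log: (user_id, event_date) for the new dashboard workflow.
events = [
    ("u1", date(2024, 3, 4)), ("u2", date(2024, 3, 5)),
    ("u1", date(2024, 3, 6)), ("u3", date(2024, 3, 12)),
    ("u1", date(2024, 3, 13)), ("u2", date(2024, 3, 14)),
]

def weekly_active_users(events):
    """Count distinct users per ISO week for the measured capability."""
    weeks = {}
    for user, day in events:
        key = day.isocalendar()[:2]            # (ISO year, ISO week)
        weeks.setdefault(key, set()).add(user)
    return {key: len(users) for key, users in sorted(weeks.items())}

print(weekly_active_users(events))  # {(2024, 10): 2, (2024, 11): 3}
```

Counting distinct users (not raw events) matters: one power user generating many events would otherwise look like broad adoption.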
Topic: Product
Which agile term best describes a technique used to manage dependencies that could prevent delivering a usable increment by explicitly identifying, visualizing, and regularly updating cross-team/external dependency relationships and needed handoffs?
Best answer: B
What this tests: Product
Explanation: Dependency mapping is used to surface and track internal and external dependencies that could block completion of an increment. By making the relationships and handoffs visible, teams can take timely mitigation actions such as re-sequencing work, negotiating commitments, or reducing coupling. This directly supports delivering a potentially shippable increment despite dependency risk.
Dependency mapping is a lightweight way to identify and visualize who/what a backlog item depends on (other teams, vendors, shared components, environments) and what those parties need to provide. Keeping these dependencies visible and current enables proactive mitigation to protect delivery of the increment, such as adjusting sequencing, setting integration checkpoints, or redesigning work to reduce coupling.
The key idea is not just listing blockers, but making dependency relationships explicit so they can be managed before they become late-stage surprises that threaten the increment.
It makes dependency relationships explicit so teams can plan mitigation (e.g., sequencing, integration, or decoupling) to protect the increment.
Topic: Leadership
An agile team has missed its last three iteration goals because stories frequently carry over unfinished. In the retrospective, several causes are suggested (unclear acceptance criteria, too much work in progress, and late test involvement), but the same problem keeps returning.
As the agile practitioner, what is the best next step to identify the root cause the team can act on?
Best answer: B
What this tests: Leadership
Explanation: A recurring delivery symptom needs a lightweight root-cause technique that the team can apply immediately. Using 5 Whys on a concrete, recent example avoids speculation and helps the team converge on a single underlying cause they can address with an experiment. This supports inspection and adaptation within the retrospective cycle.
When a delivery problem recurs and the team has multiple competing theories, the next step is to perform focused root-cause analysis on a specific instance rather than jumping to broad fixes. 5 Whys is a simple facilitation technique that helps a team move from a visible symptom (carryover) to an underlying cause they can change (e.g., testing starts after development, stories too large, WIP too high).
A practical sequence is to pick one concrete, recent carried-over story, ask "why" repeatedly as a team until an actionable underlying cause emerges, and then design a small experiment to address that cause in the next iteration.
The key takeaway is to diagnose first with an RCA technique, then change the system with a targeted experiment instead of escalating or applying a blanket process change.
5 Whys helps the team drill down from symptoms to an actionable root cause for a specific recurring instance.
Topic: Mindset
A cross-functional team’s last two sprints ended with several carryover stories and rising escaped defects. In the retrospective, the discussion becomes a list of complaints about “too many interruptions,” and the same issues have appeared for three retrospectives. The team has 45 minutes left, and people are leaving without agreeing on owners for action items.
As the agile practitioner facilitating, what is the BEST next action to ensure the retrospective produces actionable improvements and clear ownership?
Best answer: B
What this tests: Mindset
Explanation: The key facilitation need is to converge from discussion to commitment. In limited time, the most effective move is to help the team pick one improvement they can control, make the action specific and testable, and explicitly assign ownership and timing. This creates follow-through and breaks the pattern of repeating the same retro topics without change.
Effective retrospectives balance exploration with convergence so the team leaves with a small set of improvements they will actually implement. When the conversation is repeating and devolves into venting, the facilitator should timebox the discussion and drive the group to select a single, high-impact experiment they can run in the next iteration (for example, a working agreement to limit interruptions or add quality checks). Make the action concrete (what will change, how success will be measured, and by when) and assign one accountable owner (with helpers if needed). This preserves psychological safety while ensuring clear ownership and enabling inspection and adaptation at the next retrospective.
Timeboxing to one high-impact action and making it SMART with a named owner converts discussion into an owned, executable improvement.
Topic: Delivery
After a beta release, users report that the signup flow is confusing and drop-off is high. Several executives also request unrelated “nice-to-have” features.
The product goal for the next 6 weeks is to reduce signup drop-off. The product owner wants evidence that the team is incorporating feedback into the backlog and plans without losing focus.
Which artifact/metric would best validate this?
Best answer: B
What this tests: Delivery
Explanation: The strongest validation combines feedback intake with transparent decisions about what changed in the backlog and why. Mapping feedback to specific backlog updates and explicitly tying prioritization to the product goal demonstrates learning and adaptation while preserving focus on the intended outcome.
To incorporate feedback without losing focus, the team needs transparent evidence that feedback is being turned into actionable backlog changes and that those changes are being filtered through the current product goal. A simple way to validate this is traceability from sprint review/user feedback to concrete backlog updates (new stories, revised acceptance criteria, re-ordering) along with lightweight rationale/decision notes showing why items were accepted, split, or deferred.
This validates both that feedback is genuinely being incorporated (responsiveness) and that backlog changes remain filtered through the product goal (focus).
Pure activity counts or volume-based measures can increase while focus and value delivery degrade.
It shows feedback was evaluated, translated into backlog updates, and intentionally aligned to the signup drop-off goal.
Topic: Mindset
A product team is exploring a new “smart suggestions” feature, but user value and potential unintended effects are highly uncertain. Stakeholders want evidence quickly, and the team wants a safe-to-fail approach that limits downside while still producing learning.
Which TWO actions best fit a safe-to-fail experiment strategy? (Select TWO)
Correct answers: B, D
What this tests: Mindset
Explanation: When uncertainty is high, safe-to-fail experiments are designed to maximize learning while tightly limiting impact. The best choices create explicit learning goals and decision boundaries, then test with the smallest reversible change and controlled exposure. This produces evidence quickly without turning the unknowns into large, hard-to-undo commitments.
A safe-to-fail experiment is a small, timeboxed probe into uncertainty with guardrails that prevent outsized harm. In this scenario, the team needs fast evidence and must avoid broad exposure to unintended effects, so the experiment should be framed as a hypothesis test and executed with tight limits.
Practical safe-to-fail design elements include an explicit hypothesis with success and stop criteria, the smallest reversible change that can test it, limited exposure (for example, a small user cohort behind a feature flag), and a tight timebox with a planned review of results.
Planning and controls support learning; large releases and fixed commitments amplify risk and reduce adaptability. The key takeaway is to bound risk and make learning the primary deliverable.
Explicit hypotheses and bounded decision criteria keep the experiment focused on learning and limit uncontrolled risk.
A small blast radius, tight timebox, and reversibility make the experiment safe-to-fail while generating real feedback.
Topic: Leadership
A cross-functional agile team is two days from an iteration review when a major customer reports a high-severity issue in production. An executive asks you (the agile leader) to “guarantee a fix by the review” and wants hourly status updates. The team is tense and worried about blame.
Which action should you AVOID to lead by example with agile behaviors under this pressure?
Best answer: C
What this tests: Leadership
Explanation: Under pressure, agile leaders model calm, transparency, and servant leadership while enabling the team to self-organize. The behavior to avoid is reverting to command-and-control by making unilateral commitments and assigning work to individuals. That approach increases fear and reduces the team’s ability to inspect, adapt, and deliver the best outcome quickly.
Leading by example in agile means demonstrating the same behaviors you want the team to use when stress is high: transparency, collaboration, and adaptability. In this situation, the leader should help create clarity (what’s the goal, what’s the most valuable next step), enable fast replanning, and communicate honestly with stakeholders using evidence and uncertainty ranges.
A clear anti-pattern is switching to “hero management”: committing to outcomes without team input and directing individual tasks. That models blame avoidance and control, which typically reduces psychological safety and slows learning and delivery when change is happening fast.
Making unilateral commitments and assigning work top-down models command-and-control behavior and undermines team ownership and adaptability.
Topic: Mindset
A product team is building a new customer onboarding flow. After a few stakeholder conversations, the team wants to spend the next two sprints designing a “future-proof” architecture and optimizing performance for an expected 10x volume increase, even though no users have tried the flow yet.
As the agile practitioner, which action should you AVOID to shorten feedback loops and reduce over-analysis?
Best answer: A
What this tests: Mindset
Explanation: Shortening feedback loops means prioritizing validated learning over speculative optimization. When users have not yet tried the flow, investing heavily in detailed design and performance tuning for projected scale delays discovering whether the solution works at all. The better approach is to timebox learning and deliver a small, testable increment.
The core mindset is to maximize learning as early as possible by shortening feedback loops. In this scenario, the biggest risk is not scale—it’s whether customers can complete onboarding and find value. Premature optimization (optimizing for a predicted future state before validating the current problem/solution) increases delay, raises the cost of change, and can lock in incorrect assumptions.
Practical ways to reduce over-analysis while still being responsible include timeboxing design spikes, releasing a thin, testable slice to a small group of users, and defining explicit, evidence-based triggers for when to invest in scale and hardening.
Optimize and harden only after evidence shows the flow is valuable and constraints are real, not hypothetical.
This is premature optimization and big up-front design that delays learning from real user feedback.
Topic: Product
A product team released a new “saved payment method” capability to 10% of customers. The intended outcome is to reduce checkout abandonment and increase completed purchases without increasing payment-related defects.
Which metric/evidence best validates whether the increment is achieving the intended outcome?
Best answer: B
What this tests: Product
Explanation: Outcome validation requires evidence that the increment changes real user behavior in the desired direction, not just that work was completed. An A/B test comparing checkout completion and tracking a payment-defect guardrail ties the release to both value delivery and quality. This provides the strongest evidence that the increment is producing the intended results in production.
To validate that an increment is achieving intended outcomes, use evidence that links the change to customer behavior and business results, ideally with a comparison to a baseline or control. Here, the hypothesis is improved conversion (less abandonment) while maintaining payment quality. A controlled experiment (A/B test) with product analytics measures whether users exposed to the feature complete checkout at a higher rate than similar users who are not exposed, and pairing it with a defect/incident rate verifies the quality constraint.
Activity outputs (finishing stories) and proxy counts (page views) can be useful for tracking work and reach, but they do not demonstrate improved outcomes. Acceptance sign-off indicates the increment meets specified requirements, not that it delivers measurable value in real use.
It directly measures the targeted outcome (conversion) and the key quality guardrail (payment defects) using real user behavior.
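As a rough illustration, the validation described above can be sketched in a few lines: compare checkout completion between the exposed group and a control group, and check a payment-defect guardrail alongside the lift. All names and numbers here are invented for illustration, not real product data.

```python
# Hypothetical A/B validation sketch: compare checkout completion between
# the 10% exposed group and a similar control group, with a defect guardrail.
# All figures below are invented sample data.

def conversion_rate(completed, started):
    """Fraction of started checkouts that were completed."""
    return completed / started

exposed = {"started": 4000, "completed": 2600, "payment_defects": 5}
control = {"started": 4000, "completed": 2320, "payment_defects": 5}

# Primary outcome: difference in completion rate (the targeted behavior change)
lift = (conversion_rate(exposed["completed"], exposed["started"])
        - conversion_rate(control["completed"], control["started"]))

# Guardrail: payment-defect rate must not rise materially vs. control
# (the 20% tolerance is an assumed policy, not a standard value)
defect_rate_exposed = exposed["payment_defects"] / exposed["started"]
defect_rate_control = control["payment_defects"] / control["started"]
guardrail_ok = defect_rate_exposed <= defect_rate_control * 1.2

print(f"conversion lift: {lift:.1%}, guardrail ok: {guardrail_ok}")
```

The point of pairing the two checks is that a conversion lift alone can mask a quality regression; the increment only "passes" if the outcome improves while the guardrail holds.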
Topic: Delivery
During sprint reviews, stakeholders give conflicting feedback on completed stories, saying the team “missed expectations” even though all tasks on the board are done. The team reports frequent rework because “done” means different things to different people.
Which practice is the BEST match to anchor feedback and reduce ambiguity going forward?
Best answer: A
What this tests: Delivery
Explanation: When feedback conflicts because expectations are implicit, the fastest way to reduce ambiguity is to make “what success looks like” explicit. Story-level acceptance criteria clarify the conditions of satisfaction, while a team-wide Definition of Done clarifies the quality/completeness bar. Using both as the basis for review and acceptance anchors feedback to objective, agreed-upon checks rather than opinions.
Use acceptance criteria and the Definition of Done as complementary “contracts” for delivery and feedback. Acceptance criteria describe the observable conditions that must be true for a specific story (what to validate), while the Definition of Done sets the shared minimum quality/completeness standard for all work (how complete/quality-checked it must be). In the scenario, tasks being done is not the same as the product increment meeting agreed expectations.
A practical approach is to write acceptance criteria with stakeholders during refinement, then review and accept work against those criteria and the Definition of Done at the sprint review.
This reduces rework by preventing “moving goalposts” after development.
Acceptance criteria and a shared Definition of Done make expectations explicit and provide an objective basis for review feedback and completion checks.
Topic: Delivery
Mid-iteration, the team is blocked because the shared test environment keeps failing. Several possible fixes are proposed (temporary local environment, pairing with ops to stabilize, or switching to a different story). As the agile practitioner, you want to engage the whole team to quickly choose the best course of action and make the decision transparent. Which technique best matches this need?
Best answer: B
What this tests: Delivery
Explanation: Use a lightweight collaborative decision technique that lets the team propose alternatives and converge quickly. Brainstorming followed by dot voting creates fast alignment, makes the choice visible, and preserves team ownership of how to address the impediment.
When an impediment has multiple viable responses, the agile practitioner’s role is to facilitate team ownership of the decision, not to decide unilaterally. A short, structured collaboration like brainstorming to surface options and dot voting to converge enables quick selection while keeping the process transparent and inclusive. This approach also reduces debate cycles because the team can timebox the discussion, select an option, and then inspect results.
A practical flow is to timebox a short brainstorm to surface options, dot vote to converge, commit to the selected option for a defined period, and then inspect the results.
Root-cause analysis and flow policies are useful, but they don’t directly help the team choose among immediate response options in the moment.
It engages the whole team to generate options and rapidly converge on a visible decision.
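The dot-voting convergence step can be sketched as a simple tally: each member distributes a fixed number of dots across the proposed options, and the option with the most dots is selected. The options mirror the scenario; the vote counts are invented.

```python
# Hypothetical dot-voting tally for the three proposed responses.
# Each team member places dots on the options they prefer; votes are made up.
from collections import Counter

votes = [
    "pair with ops to stabilize", "pair with ops to stabilize",
    "temporary local environment", "temporary local environment",
    "pair with ops to stabilize", "switch to a different story",
    "pair with ops to stabilize", "temporary local environment",
    "pair with ops to stabilize",
]

tally = Counter(votes)
winner, dots = tally.most_common(1)[0]
print(f"selected: {winner} ({dots} dots)")
```

Posting the tally where the team can see it is what makes the decision transparent: everyone can verify how the choice was reached and revisit it if the inspected results disappoint.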
Topic: Leadership
An agile team regularly identifies process improvements in retrospectives, but most action items are forgotten once delivery work starts. The team wants improvement work to be transparent and intentionally chosen, without relying on the Scrum Master to “chase” it.
Which practice is the BEST match to meet this goal?
Best answer: A
What this tests: Leadership
Explanation: Continuous improvement sticks when improvement work is treated as real work: visible, ordered, and intentionally selected. A prioritized improvement backlog (or Kaizen backlog) creates transparency and a simple decision mechanism to choose the next improvement. Reserving capacity and pulling items into planning turns retro insights into delivered changes.
To foster continuous improvement, teams need a lightweight way to make improvement work as visible and manageable as product work. A visible improvement backlog (often sourced from retro action items and operational pain points) allows the team to capture improvement ideas in one ordered place, choose the next improvement intentionally, and pull items into iteration planning alongside delivery work.
This directly addresses the stem’s problem—items are identified but not executed—by creating transparency and an explicit prioritization/pull mechanism, rather than relying on reminders or one-off discussions. The key takeaway is to operationalize improvement as prioritized backlog work, not as “extra” tasks.
A dedicated improvement backlog makes improvement work explicit, ordered, and selectable alongside delivery work so it actually gets done.
Topic: Delivery
Which Lean/Agile technique visualizes the end-to-end steps from customer request to delivery to reveal queues, handoffs, and wait time?
Best answer: A
What this tests: Delivery
Explanation: Value stream mapping is used to see the complete flow of work from start to finish, including processing time and wait time. By making queues, delays, and handoffs visible across the whole system, it helps teams identify where waste accumulates and where to improve flow.
Value stream mapping (VSM) is a Lean visualization that captures the entire path of delivering value, typically from demand/intake through analysis, build, test, and release. The map emphasizes where time is spent working versus waiting, making queues and delays between steps obvious (often the largest source of lead time). Teams use VSM to identify waste (e.g., waiting, handoffs, overprocessing) and target improvements that shorten lead time and smooth flow. A cumulative flow diagram can expose bottlenecks over time, but it doesn’t explicitly map the end-to-end process steps and handoffs the way VSM does.
It maps the full workflow across steps and roles and highlights delays (wait time) and queues between activities.
Topic: Leadership
A delivery team has been operating in a command-and-control environment where a functional manager assigns tasks and publicly calls out individuals when defects occur. You coached the manager to stop individual blame, and you helped the team establish working agreements and run blameless retrospectives focused on system improvements.
Which evidence best validates that these undermining behaviors are decreasing?
Best answer: D
What this tests: Leadership
Explanation: To validate progress in addressing blame and command-and-control, use evidence that reflects team safety and empowerment, not output volume or meeting activity. A short, anonymous pulse on psychological safety and autonomy shows whether people feel safe to speak up, learn from failures, and self-manage—key outcomes of the actions taken.
When reducing command-and-control and blame, the most meaningful validation is whether team members feel safe to raise issues, take ownership, and make local decisions without fear of punishment. Those behaviors enable real agility: transparency, experimentation, and continuous improvement.
An anonymous psychological safety/autonomy pulse (tracked over several iterations) provides direct evidence of the desired cultural shift because it measures the team’s lived experience of safety and empowerment, which are prerequisites for blameless learning and self-management. The key takeaway is to prefer outcome-oriented evidence tied to mindset and behaviors over activity measures or delivery volume.
Improving psychological safety and autonomy directly indicates reduced blame and command-and-control behaviors.
Topic: Mindset
A product team is asked to “improve the employee onboarding experience,” but stakeholders cannot agree on what the real pain points are. The team has little direct contact with new hires, and existing feedback is mostly anecdotal.
Which practice is the best match to reduce uncertainty before committing to a solution?
Best answer: D
What this tests: Mindset
Explanation: When the team is unsure what problem to solve, the fastest feedback comes from directly learning about users’ needs and pain points. Design thinking techniques (empathy, define, and ideation) help create a shared problem statement and success criteria before choosing features. This reduces the risk of building the wrong solution based on assumptions.
The core decision is whether uncertainty is primarily about the problem/needs or about the solution/market response. Here, stakeholders disagree on the pain points and evidence is anecdotal, so the biggest risk is misunderstanding users. Design thinking fits this stage because it uses direct user discovery and synthesis to clarify who the users are, what their real pain points are, and what success would look like in a shared problem statement.
Lean startup is better once you have a clear hypothesis about a solution or value proposition and need to validate it quickly with experiments. The key takeaway: use design thinking to discover and frame the right problem; use lean startup to validate solution hypotheses.
The main uncertainty is the underlying user problem, so empathy interviews and problem framing are needed first.
Topic: Leadership
An agile team is starting work on a new customer-notification service. A similar service failed last year due to missed rate-limiting and logging requirements, causing a production incident.
Constraints: the team works under WIP limits to protect flow, and a working demo is expected in two weeks.
The team is about to implement an approach that looks similar to last year’s failed design. What is the BEST next action for the agile practitioner?
Best answer: A
What this tests: Leadership
Explanation: When a similar initiative has already failed, the fastest way to avoid repeating the mistake is to reuse organizational knowledge assets such as postmortems and ADRs. A short, focused review (and quick validation with people involved previously) surfaces non-obvious constraints like rate limits and logging before the team invests more work. This protects flow/WIP and reduces rework risk while still supporting the near-term demo.
The core concept is leveraging organizational knowledge assets to reduce avoidable risk and rework. Here, the organization already has high-signal assets (incident postmortems and ADRs) that document what went wrong and which design decisions were made. A best next action is to timebox a targeted review of those artifacts and quickly validate assumptions with the people who owned the prior service, then translate findings into today’s work (e.g., acceptance criteria, nonfunctional requirements, and updates to the team’s working agreements/DoD). This fits the constraints: it is lightweight (supports WIP limits), prevents repeating a known failure (quality), and reduces the chance of late discovery that would jeopardize the demo. In contrast, actions that delay learning or create broad new process are mistimed for an imminent delivery goal.
It quickly leverages existing organizational knowledge assets to prevent repeating a known failure while protecting WIP and the 2-week demo.
Topic: Leadership
A cross-functional agile team’s last two iterations missed their sprint goals due to frequent mid-sprint work interruptions and unclear handoffs. Tension is rising between development and operations.
You are facilitating a 60-minute problem-solving session and need it to end with a clear, owned action plan the team can start next iteration. What should you do to best optimize the outcome within the time constraint?
Best answer: C
What this tests: Leadership
Explanation: A problem-solving session is successful when it produces actionable, team-owned next steps. Using a lightweight root-cause approach aligns the group on what’s actually driving interruptions and handoff issues, then converges on a small number of experiments. Adding owners, due dates, and a check point turns ideas into an executable action plan within 60 minutes.
To facilitate problem resolution and end with a clear action plan, optimize for alignment plus actionability. Start by framing the problem and desired outcome, then use a quick root-cause technique (for example, 5 Whys or a cause-and-effect brainstorm) to avoid solving the wrong problem. Converge with a fast prioritization method and translate the selected improvements into experiments that are small enough to run next iteration.
A practical flow in 60 minutes is to frame the problem and desired outcome, run a quick root-cause pass (for example, 5 Whys), prioritize the resulting causes, and convert the top one or two into small experiments with named owners, due dates, and a check-in point.
This preserves flow and learning while producing concrete, accountable next steps rather than mandates or escalation.
It creates shared understanding of the problem and converts it into a small set of testable actions with clear ownership and follow-up.
Topic: Delivery
An agile team’s last two retrospectives identified recurring production defects caused by incomplete test data setup. The team piloted a new checklist for one iteration and defects dropped noticeably without increasing cycle time. However, two new team members joined this week, and the same setup step was skipped again.
As the agile practitioner, what is the BEST next action to help sustain the improvement while keeping delivery flow stable?
Best answer: D
What this tests: Delivery
Explanation: A successful experiment from a retrospective should be sustained by making it explicit in the team’s agreed way of working. Updating the working agreement and Definition of Done ensures the checklist is consistently applied, especially with new team members joining. This locks in the quality improvement while minimizing disruption to flow.
Continuous improvement isn’t complete when an experiment works once; it becomes sustainable when the team “bakes it in” to how work is done. Here, the checklist demonstrably reduced defects without harming cycle time, but the improvement regressed after onboarding new members—an indicator the practice is not yet part of the team’s shared system.
The best next action is to update the working agreement and/or Definition of Done so that the test-data setup checklist becomes an explicit, required step before work counts as done, new members learn it as part of onboarding, and the team can inspect adherence in future retrospectives.
Accountability conversations or escalation may still happen later, but they don’t reliably preserve the improvement across time and team changes.
By institutionalizing the proven practice in explicit team agreements/DoD, the team preserves quality gains across membership changes without adding disruptive process overhead.
Topic: Delivery
A Kanban team has missed its iteration goal for three iterations. WIP keeps growing, and tension is rising between developers and testers.
Metrics from the last 4 weeks show a cumulative flow diagram with a steadily widening "In Test" band, most stories arriving in testing during the last two days of each iteration, and a rising number of items returned from test when acceptance criteria fail.
What is the most likely underlying cause?
Best answer: B
What this tests: Delivery
Explanation: The widening “In Test” band indicates a bottleneck at testing, not development. The fact that most items arrive to testing at the end of the iteration shows batching and handoffs, which creates queues, delays feedback, and increases rework when acceptance criteria fail. Addressing the batching/flow policy is more causal than the symptoms it produces.
Use flow metrics to distinguish symptoms from causes. A widening cumulative-flow band in “In Test” means demand is arriving faster than testing can finish, so the system constraint is in testing. The additional clue that most stories enter testing in the last two days indicates late, batch handoffs (rather than continuous testing), which predictably creates queues, delays feedback, and increases returns when acceptance criteria fail.
A practical feedback-loop response is to reduce batching and manage flow across the whole team (e.g., pull smaller slices earlier into test, swarm on the constraint, and strengthen “ready/done” policies so acceptance criteria are met before/while coding). The key takeaway is to fix the upstream behavior creating the test queue, not to relabel the queue itself as the cause.
The flow metrics point to late, batched arrivals into testing that overwhelm the constraint and drive returns/rework.
Topic: Mindset
A developer repeatedly merges code without the agreed peer review, causing defects. The agile coach schedules a short 1:1 and says: “Yesterday afternoon (situation), you merged to main without a review (behavior). It increased rework and put the team under pressure (impact). What was happening, and how can we prevent this next time?”
Which technique best matches this way of giving feedback to preserve trust and encourage learning?
Best answer: A
What this tests: Mindset
Explanation: This is the Situation–Behavior–Impact (SBI) model: it anchors feedback in observable facts and consequences instead of judgments. By asking for the other person’s perspective and focusing on prevention, it supports psychological safety and turns a mistake into learning.
Feedback that preserves trust is specific, behavior-focused, and learning-oriented. The SBI model does this by separating observations from interpretations: describe the situation, name the observable behavior, and explain the impact on outcomes or people. Adding an open question (“What was happening?”) shifts the interaction from blame to shared understanding, which supports psychological safety and makes it easier to agree on an experiment (e.g., reinforcing the working agreement or improving the review workflow). The key is that the feedback targets changeable actions and invites collaboration, rather than attacking intent or character.
It uses specific context, observable behavior, and impact, then invites dialogue to learn and improve.
Topic: Product
Your team is preparing for an iteration review with customer representatives. You have 10 minutes to demonstrate work and get feedback. This iteration produced a thin, working end-to-end slice of a key user journey, backend-only improvements with no user-visible change, and several partially finished stories.
To maximize feedback and learning, what should you demonstrate?
Best answer: C
What this tests: Product
Explanation: To maximize learning in a review, demonstrate working product behavior that stakeholders can react to. A thin, end-to-end slice enables concrete feedback on usability and value, and it tests assumptions with minimal additional investment. Showing incomplete or non-user-visible work usually produces less actionable feedback.
The review is primarily a feedback and learning event, so the best demo choice is the increment that stakeholders can actually experience and evaluate. A thin, working end-to-end slice provides observable outcomes (what users can do, where they struggle, what they value), which drives specific feedback and better next-backlog decisions. In contrast, backend-only improvements and progress artifacts are harder for customers to validate and often shift the session into status reporting.
A practical rule is: when time is limited, demo the smallest increment that produces the most actionable feedback.
A small, working slice lets users try it and provide actionable feedback on real behavior and assumptions.
Topic: Leadership
Three agile teams are building different services for the same customer portal. In the last release, teams duplicated work and implemented conflicting UI patterns because product and technical decisions stayed inside each team.
As the agile lead, which TWO actions best encourage cross-team sharing of technical and product knowledge? (Select TWO)
Correct answers: A, B
What this tests: Leadership
Explanation: Use lightweight, recurring cross-team touchpoints that make work and decisions visible and easy to discuss. Communities of practice enable peer-to-peer learning and shared standards without centralizing control. A joint iteration showcase creates a regular feedback loop for product direction and technical approaches across teams.
To encourage cross-team sharing, prefer pull-based, regular collaboration mechanisms that make knowledge visible and easy to exchange. Communities of practice (guilds) create a standing forum for practitioners to share patterns, review emerging standards, and spread learning without turning it into a top-down mandate. A joint iteration showcase (shared review/demo) is a practical “knowledge radiator” that exposes what each team built, why decisions were made, and what was learned—supporting both product alignment and technical coherence.
These approaches increase transparency and learning while preserving team autonomy. Centralizing approvals, creating separate “owners” of standards, or rewarding hero roles tends to reduce broad sharing and reinforces silos.
Communities of practice create a lightweight, recurring forum to share patterns, decisions, and learning across teams.
A shared showcase makes work and decisions transparent across teams and promotes product and technical alignment through feedback.
Topic: Leadership
An agile team is building an improved onboarding flow for a SaaS product. The stated purpose is to help new customers reach their “first successful setup” faster because many abandon the product during the first hour. Team morale is slipping because daily work feels like a list of tasks.
Which metric or evidence best validates that the team’s day-to-day progress is delivering the intended customer outcome?
Best answer: B
What this tests: Leadership
Explanation: To keep engagement high, the team needs evidence that links daily work to the purpose customers care about. A customer outcome metric like time-to-first-success validates whether changes are improving real user results, not just completing activity. This creates a clear line of sight from tasks to impact.
The strongest way to connect daily work to purpose is to inspect an outcome-oriented measure that reflects customer behavior or success in the journey you are trying to improve. In this scenario, the goal is faster “first successful setup,” so tracking time-to-first-success (and its trend versus baseline) validates whether the team’s increments are actually reducing friction for new customers.
Output/activity measures (tasks done, stories accepted, velocity) can be useful for planning and transparency, but they don’t validate customer value delivery. When engagement is slipping, emphasizing outcome evidence reinforces shared vision and shows how engineering work translates into customer success.
It directly measures whether onboarding changes improve the customer outcome the team is trying to achieve.
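A minimal way to track the outcome metric above is to compare the median time-to-first-success before and after the onboarding changes. The samples below are invented for illustration.

```python
# Hypothetical time-to-first-success check: compare median minutes to
# "first successful setup" before and after the onboarding changes.
# Both samples are invented illustration data, not real measurements.
import statistics

baseline_minutes = [55, 62, 48, 70, 65]  # pre-change cohort
current_minutes = [35, 41, 38, 52, 33]   # post-change cohort

improvement = (statistics.median(baseline_minutes)
               - statistics.median(current_minutes))
print(f"median time-to-first-success improved by {improvement} minutes")
```

Tracking the trend against the baseline, rather than a single snapshot, is what gives the team a credible line of sight from increments to the customer outcome.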
Topic: Delivery
A Kanban-based product team is trying to reduce waste and improve flow. Their cumulative flow diagram shows a growing queue in “Code Review,” and cycle time has become unpredictable. The team agrees to use continuous improvement to keep identifying and removing waste.
Which action should the team NOT take?
Best answer: D
What this tests: Delivery
Explanation: Continuous improvement focuses on repeatedly finding waste, running small experiments, and inspecting results. When a queue is growing, adding more work to “stay busy” usually increases waiting and context switching, making flow less predictable. The best moves are to visualize the system, limit work in process, and inspect outcomes to guide the next improvement.
Continuous improvement (Kaizen) in agile delivery means treating waste removal as an ongoing loop: make work visible, identify constraints/queues, try a small change, and inspect the impact with lightweight metrics. In the scenario, the growing “Code Review” queue is a bottleneck signal, so the team should reduce waiting and improve flow through experiments such as WIP limits and clearer policies.
Increasing WIP to keep everyone utilized is a common anti-pattern: it optimizes for local efficiency rather than end-to-end flow, which typically increases waiting time and cycle time variability. The key takeaway is to prioritize flow and learning over utilization.
Maximizing utilization by adding more WIP typically increases queues and delays, worsening flow waste.
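The relationship between WIP and delay can be made concrete with Little's Law (average cycle time ≈ average WIP ÷ average throughput). The sketch below uses invented numbers to show why doubling WIP against a fixed bottleneck doubles cycle time:

```python
# Little's Law sketch: average cycle time ≈ WIP / throughput.
# Numbers are invented; throughput is held constant to model the
# code-review bottleneck that extra WIP does nothing to relieve.

def avg_cycle_time(wip, throughput_per_day):
    """Average days an item spends in the system (Little's Law)."""
    return wip / throughput_per_day

throughput = 2.0  # items finished per day, capped by the review bottleneck

before = avg_cycle_time(wip=8, throughput_per_day=throughput)
after = avg_cycle_time(wip=16, throughput_per_day=throughput)

print(f"cycle time at WIP 8: {before:.1f} days; at WIP 16: {after:.1f} days")
```

Since throughput is set by the constraint, the only lever that shortens cycle time here is lowering WIP or relieving the bottleneck itself, which is exactly why "stay busy" policies backfire.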
Topic: Mindset
A product team releases a small onboarding improvement and collects beta-user feedback. Stakeholders agree the team is learning quickly, but they are unsure whether that learning is actually changing what gets built and how the team works.
Which evidence best validates that learning and feedback are being incorporated into both the product backlog and team practices?
Best answer: D
What this tests: Mindset
Explanation: The strongest validation is a visible, traceable connection from real feedback to concrete backlog updates and to explicit changes in how the team works. A backlog change log tied to customer feedback, along with updates to the definition of done, demonstrates that learning is driving prioritization and evolving team practices. This is stronger than measuring activity or collecting ideas without showing adoption.
To demonstrate that learning is being incorporated, look for evidence of closed feedback loops: insights lead to specific backlog changes (new items, revised acceptance criteria, reprioritization) and to explicit practice changes the team actually adopts (e.g., updates to working agreements or the definition of done). Artifacts that connect “what we learned” to “what we changed” validate an agile, inspect-and-adapt mindset.
Strong evidence typically includes:
- a backlog change log that links specific customer feedback to new, revised, or reprioritized items
- acceptance criteria updated to reflect what was learned
- explicit, adopted changes to working agreements or the definition of done
Activity counts and planning completion can coexist with ignoring feedback; the key is traceability from learning to changes in both product and process.
It shows a traceable path from feedback to reordered backlog work and explicit practice changes (DoD).
Topic: Delivery
A Kanban team’s cycle time has doubled, and many items sit in “Blocked” waiting for environment access or reviews. In a retrospective, the team rates possible waste-reduction actions using a simple impact/effort check and agrees to start with High impact + Low effort items first.
| Action | Impact | Effort |
|---|---|---|
| Add a WIP limit and swarm to finish work before starting new | High | Low |
| Add a daily 10-minute blocker-busting huddle with DevOps and an explicit escalation policy | High | Low |
| Build a comprehensive automated end-to-end regression suite | High | High |
| Replace the deployment pipeline tool with a new platform | High | High |
| Create a detailed weekly status report template for stakeholders | Low | Medium |
| Split the team into analysis, dev, and test groups | Medium | Medium |
Which TWO actions should the team do first?
Correct answers: A, F
What this tests: Delivery
Explanation: When using an impact/effort approach to reduce waste, start with changes that create meaningful improvement with minimal cost and disruption. Here, the team already rated the options and agreed on the selection rule: High impact and Low effort. The two actions that meet that criterion are the best first steps.
Impact/effort prioritization is a practical way to remove waste by delivering benefits quickly while limiting risk and change overhead. In the scenario, the team’s main wastes are waiting (blocked items) and excess work-in-process (starting too much and finishing too little), and they’ve agreed to start with High-impact/Low-effort improvements.
Choosing the two “quick wins” helps the team:
- reduce waiting and blocked time quickly
- finish in-progress work instead of starting more
- validate improvements fast before committing to larger, higher-effort changes
Higher-effort initiatives can still be valuable, but they are typically sequenced after quick wins because they take longer to validate and delay near-term flow improvements.
It is explicitly rated High impact and Low effort, making it a clear first “quick win” to improve flow.
It directly targets waiting/blocked time and is rated High impact with Low effort, so it should be prioritized first.
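The High-impact/Low-effort selection rule described above can be sketched as a simple sort. This is an illustrative sketch only: the action names are abbreviated from the exhibit, and the numeric weights are assumptions, not part of any official technique.

```python
# Illustrative impact/effort ranking: highest impact first, then lowest effort.
IMPACT = {"High": 3, "Medium": 2, "Low": 1}   # illustrative weights
EFFORT = {"Low": 1, "Medium": 2, "High": 3}

actions = [  # (action, impact, effort) abbreviated from the exhibit
    ("Add WIP limit and swarm to finish", "High", "Low"),
    ("Daily blocker-busting huddle + escalation policy", "High", "Low"),
    ("Comprehensive automated E2E regression suite", "High", "High"),
    ("Replace deployment pipeline tool", "High", "High"),
    ("Weekly status report template", "Low", "Medium"),
    ("Split team into analysis/dev/test groups", "Medium", "Medium"),
]

ranked = sorted(actions, key=lambda a: (-IMPACT[a[1]], EFFORT[a[2]]))
quick_wins = [name for name, impact, effort in ranked
              if impact == "High" and effort == "Low"]
print(quick_wins)  # the two High-impact/Low-effort actions rank first
```

Because Python's sort is stable, ties keep their original order, so the two quick wins surface at the top in the order the team listed them.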
Topic: Mindset
Six agile teams are building a single product with frequent cross-team dependencies. Teams are in three time zones, and recent integration defects were traced to “unknown” dependency handoffs. Leadership asks for evidence each week that inter-team coordination is improving and that the next integrated increment is becoming more release-ready.
Which metric/evidence/artifact best validates progress in this context?
Best answer: B
What this tests: Mindset
Explanation: With multiple teams and recurring integration issues, the strongest validation is evidence that dependencies are visible and moving toward completion. A cross-team dependency board makes handoffs explicit and highlights blocked or aging dependencies that threaten integration readiness. This directly supports frequent integration and reduces “unknown dependency” surprises.
When the coordination problem is missed handoffs across time zones, you need evidence that dependencies are being managed as a flow item, not just discussed. A shared dependency board (often with owners, expected dates, and blocked/aging signals) creates a single, continuously updated view of inter-team commitments and bottlenecks. That makes it easy to validate whether coordination is improving by checking if dependencies are being identified earlier, resolved sooner, and not aging into integration time.
The key is choosing evidence that reflects cross-team readiness for an integrated increment, rather than internal team activity or participation in a meeting/community.
It provides transparent, current evidence that cross-team dependencies are identified, flowing, and not stuck.
Topic: Mindset
A cross-functional agile team is delivering an internal platform. Stakeholders say they lack visibility and ask the team to produce a detailed weekly status deck. The team is already spending noticeable time compiling updates and wants to keep transparency high with minimal extra overhead.
Which TWO actions should the agile practitioner recommend? (Select TWO)
Correct answers: B, C
What this tests: Mindset
Explanation: Use lightweight, pull-based transparency mechanisms that are updated as part of doing the work. Making work visible on a shared board and using automated dashboards from existing tools provide timely visibility while avoiding manual status compilation. These approaches preserve team capacity and encourage stakeholder self-service.
The goal is high transparency with low overhead by making information available “by default” rather than creating separate reporting work. Information radiators (like a visible workflow board) let stakeholders see what’s in progress, what’s blocked, and what’s done at any time. Automated dashboards pull data from existing sources (board, CI, defect tracker) to provide objective signals about flow and quality without asking the team to manually repackage updates.
Practical characteristics of low-overhead transparency:
- updated as a by-product of doing the work, not as a separate reporting task
- pull-based, so stakeholders can self-serve at any time
- automated where possible, drawing objective flow and quality signals from existing tools
The key takeaway is to increase visibility by improving the system of work, not by adding reporting layers.
A visible, real-time board is an information radiator that increases transparency with near-zero reporting work.
Automating data capture and displaying trends provides transparency without manual status compilation.
Topic: Leadership
Mid-sprint, two senior developers are openly arguing in chat about whether to refactor a core component now or defer it. The disagreement is spreading: other teammates are taking sides, code reviews are stalled, and two stories are at risk of missing the sprint goal. You are the agile team facilitator.
What should you do next to best prevent team fragmentation and delivery delays?
Best answer: A
What this tests: Leadership
Explanation: The best response is to address the conflict immediately in a structured, timeboxed way so the team can regain alignment and restore flow toward the sprint goal. Early facilitation prevents side-taking, reduces rework, and turns disagreement into a decision and a clear plan. This directly reduces the risk of delays caused by stalled reviews and blocked collaboration.
When conflict begins to impact collaboration (side-taking, stalled reviews, blocked work), the facilitator should intervene promptly to prevent fragmentation and delays. The aim is not to “win” the argument, but to create a safe, structured conversation that reconnects people to the shared goal and produces a decision the team can act on.
A practical next step is to run an immediate, timeboxed alignment session to:
- let each developer state their position and the risks they see
- reconnect the discussion to the sprint goal and the work currently at risk
- reach a decision (or agree on a short spike) with a clear plan the team can act on
Delaying, escalating, or letting parallel approaches proceed typically increases queueing, integration risk, and social division—exactly what is already threatening delivery.
A quick, facilitated conversation channels the conflict into a shared decision and actionable plan before it spreads and blocks flow.
Topic: Delivery
An agile team has missed the last two iteration goals. WIP is growing, and about 40% of completed stories require rework after the review. Stakeholders and the Product Owner are in conflict: customer feedback and usage data show users struggle with onboarding and want it simplified, but the backlog remains dominated by a large “personalization” epic planned months ago.
What is the most likely underlying cause?
Best answer: C
What this tests: Delivery
Explanation: The strongest clue is that validated feedback and usage data point to a different customer need, yet the team continues to execute an old epic. When learning contradicts the current plan, the right move is to adjust priorities and potentially re-scope or pivot. Keeping the same plan forces repeated rework and undermines iteration goals.
In agile delivery, early feedback and usage data are signals to adapt product direction, not just refine implementation details. Here, customers are clearly indicating that onboarding improvements are the most valuable next step, but the backlog is still dominated by a months-old personalization epic. That mismatch typically creates thrash: work is completed, then reworked or deprioritized after reviews, goals are missed, and conflict rises because the team is executing a plan that no longer fits the evidence.
The underlying cause is a prioritization and product-direction problem: the team is not using feedback to re-order the backlog (and, if needed, re-scope or pivot) toward the highest current value.
Customer feedback indicates a different value path, but priorities remain anchored to an old plan, creating rework and missed goals.
Topic: Mindset
You are coaching a team that maintains an internal data platform. Stakeholders submit enhancement requests and production issues throughout the month. Leadership asks you to recommend an iteration-based approach (timeboxed planning and commitments) or a flow-based approach (pull system with WIP limits) for the next quarter.
Before recommending an approach, what should you verify first?
Best answer: D
What this tests: Mindset
Explanation: To decide between iteration-based and flow-based ways of working, first understand demand characteristics. High variability and frequent unplanned arrivals tend to favor a continuous-flow pull system, while more stable, batchable demand can support timeboxed iterations. Without data on arrival rate, urgency, and size variability, any recommendation is guesswork.
The key decision point for flow-based versus iteration-based approaches is the nature of demand and variability in the work system. If work arrives unpredictably (interrupt-driven issues, frequent priority changes, wide variation in size/urgency), a flow-based approach helps optimize throughput and responsiveness using pull policies and WIP limits. If demand is relatively stable and can be planned into coherent batches, iteration-based delivery can work well by enabling timeboxed planning, goal focus, and regular review/retro cycles.
A practical first check is to look at:
- the arrival rate and predictability of requests (planned versus interrupt-driven)
- variability in item size and urgency
- whether work can reasonably be batched into coherent iteration goals
Other factors matter later, but demand variability is the first input to selecting the delivery cadence model.
Choosing flow vs iterations depends first on how predictable and variable demand is and whether work can be planned in batches or must be handled continuously.
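One lightweight way to make the demand check concrete is to measure variability in request inter-arrival times. The sketch below uses purely illustrative data; the threshold interpretation is a rule of thumb, not a PMI-defined metric.

```python
# Hypothetical sketch: quantify demand variability from request arrival days.
# A high coefficient of variation (CV) in inter-arrival gaps suggests bursty,
# interrupt-driven demand that tends to favor a flow-based (pull) system.
from statistics import mean, pstdev

arrival_days = [0, 1, 1, 2, 6, 6, 7, 13, 14, 14]  # illustrative data
gaps = [b - a for a, b in zip(arrival_days, arrival_days[1:])]

cv = pstdev(gaps) / mean(gaps)
print(f"inter-arrival CV: {cv:.2f}")  # CV near or above 1 hints at bursty demand
```

Stable, plannable demand tends to produce a low CV; interrupt-driven work produces a high one, which is the signal the recommendation should rest on.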
Topic: Product
Mid-iteration, an agile team fears they will miss the iteration goal of delivering a usable “Checkout MVP” increment. They review the board excerpt below.
Exhibit: Iteration board (excerpt)
Goal: Checkout MVP (shippable)
PB-21: Validate promo code (Depends: Pricing API v2) Status: In Progress (Blocked)
PB-22: Show order summary (No dependency) Status: To Do
PB-23: Error handling + telemetry (No dependency) Status: To Do
Dependency note: Pricing API v2 from Team B ETA = next iteration
What is the best next action to mitigate the dependency risk and protect delivery of an increment this iteration?
Best answer: B
What this tests: Product
Explanation: The exhibit shows a critical story blocked by an external dependency that will not be available until the next iteration, directly threatening a shippable increment. The best mitigation is to actively manage the dependency by aligning with the other team on an interim interface/contract and using a stub to decouple work. This preserves the possibility of delivering an integrated, usable slice now while the upstream work continues.
A dependency that lands after the iteration (Pricing API v2 ETA = next iteration) is a clear delivery risk for the current increment, especially when the blocked item is part of the iteration goal. The most effective mitigation is to reduce or eliminate the dependency’s impact on the current iteration by coordinating quickly and creating a workable integration path.
A practical approach is:
- agree with Team B on an interim interface contract for Pricing API v2
- build PB-21 against a stub or mock that honors that contract
- continue the dependency-free items (PB-22, PB-23) and plan a thin integration pass when the real API arrives
Waiting preserves the blocker; simply starting more work increases work-in-progress without resolving the integration risk. The key takeaway is to proactively decouple and align across teams to protect a potentially shippable increment.
It reduces reliance on the delayed external deliverable while enabling a potentially shippable checkout slice this iteration.
Topic: Mindset
A new cross-functional agile team has been formed to build an internal customer portal. After the first iteration, discussions are tense and meetings are dominated by two senior engineers; quieter members rarely speak up. The product owner is worried about “team chemistry” and asks you, as the agile practitioner, to help the team collaborate more effectively.
Which action should you NOT take?
Best answer: C
What this tests: Mindset
Explanation: The situation is typical early team dynamics where the team needs safety, shared norms, and learning loops to improve collaboration. Support actions should increase transparency, equal participation, and the team’s ability to self-manage. Unilaterally assigning work and authority may quiet conflict short term but undermines self-organization and delays team development.
In early team formation, friction and uneven participation are common signals that the team needs facilitation and coaching—not external control. Effective support focuses on helping the team create shared norms, practice respectful conflict, and inspect-and-adapt their collaboration.
Privately assigning tasks and decision rights shifts ownership away from the team and reinforces a dependency on a coordinator. That may reduce visible disagreement temporarily, but it avoids the real issues (psychological safety, participation balance, decision-making clarity) and can worsen engagement from quieter members.
The key takeaway is to enable the team to develop its own agreements and feedback loops rather than managing around the conflict.
This bypasses self-organization and prevents the team from developing healthy norms and shared ownership.
Topic: Delivery
A cross-functional agile team is mid-iteration on a customer-facing release. Over the last two iterations, escaped defects increased, and the team has a working agreement to reserve about 15% of capacity for improvement work (automation and reducing tech debt).
Now a key stakeholder asks the team to “focus only on features” to meet a marketing date, while the team’s Kanban board is already at its WIP limit.
What is the BEST next action?
Best answer: C
What this tests: Delivery
Explanation: Protecting time for continuous improvement requires making improvement items visible and balancing them against delivery work. By agreeing on an explicit capacity allocation and trading scope with the product owner, the team can meet stakeholder needs without sacrificing quality. This also respects WIP limits and the team’s working agreement, preserving sustainable flow.
The core concept is balancing delivery and improvement by explicitly allocating capacity and making improvement work part of the same value-and-flow conversation as features. With rising escaped defects, stopping improvement work increases rework and risks slowing delivery overall. The best next action is to bring the constraint trade-off to the product owner (and stakeholder as needed): keep improvement work visible in the backlog/board, reserve a small, agreed capacity slice for it, and adjust feature scope to fit the marketing date.
A practical approach is:
- keep improvement items visible in the backlog and on the board
- reserve the agreed capacity slice (about 15%) for improvement work
- negotiate feature scope with the product owner so the marketing date is met within the remaining capacity
This preserves flow and quality while still addressing the stakeholder’s date constraint.
Explicitly reserving and protecting improvement capacity while adjusting scope maintains quality and flow without breaking WIP or team agreements.
Topic: Leadership
Midway through iteration planning, the product owner introduces a newly urgent compliance requirement. Two developers become quiet, avoid eye contact, and stop contributing; another developer starts speaking sharply and blaming “last-minute changes.” As the agile practitioner facilitating the session, what is the best next step?
Best answer: B
What this tests: Leadership
Explanation: Emotional intelligence starts with noticing and naming emotional cues, then creating a safe moment for the team to express concerns. A short, facilitated check-in helps you adjust tone, clarify intent, and re-establish shared understanding before making planning decisions. This keeps the conversation productive and supports team empowerment rather than avoidance or escalation.
The core move is to recognize emotional signals in real time (withdrawal, blame, sharp tone) and respond with empathy and facilitation so the team can stay engaged. Pausing briefly to reflect what you observe (without judgment) and inviting each person to share concerns creates psychological safety and prevents unspoken frustration from driving poor planning decisions.
A practical sequence is:
- pause and neutrally describe what you observe (withdrawal, sharp tone)
- invite each person to share their concerns about the late change
- clarify the intent and constraints behind the requirement, then resume planning with shared understanding
Deferring the issue or forcing justification tends to increase defensiveness; escalating removes ownership from the team when facilitation can resolve it in the moment.
Naming observed emotions and facilitating a short check-in surfaces concerns so you can adapt your communication and proceed collaboratively.
Topic: Delivery
Your team uses a Kanban-style board and is running a continuous improvement cycle to reduce delivery time. Review this flow snapshot and choose the best next action.
Flow snapshot (last 2 weeks)
| Column | Avg items in column | Avg wait time |
|---|---|---|
| In Dev | 3 | 0.5 day |
| Code Review | 12 | 4.0 days |
| Test | 4 | 1.0 day |
Notes: 30% of items are returned from Code Review for rework
Best answer: A
What this tests: Delivery
Explanation: The data shows the biggest delay is a queue in Code Review, plus significant rework coming out of that step. The most effective continuous-improvement action is to run an experiment that directly reduces the queue and restores flow. A WIP limit forces focus on finishing and exposes the system constraint so the team can remove waste iteratively.
Continuous improvement in delivery focuses on identifying the largest sources of waste (for example, waiting, queues, and rework) and running small experiments to reduce them. In the snapshot, Code Review has disproportionately high WIP (12 items) and the longest wait time (4.0 days), indicating a bottleneck and significant waiting waste. The 30% return rate also suggests quality issues or unclear review policies causing rework.
A practical next experiment is to limit WIP into Code Review and shift team behavior from “start more” to “finish work,” then inspect whether lead time and rework improve. If needed, the team can follow up with tighter review policies, pairing, or automation as subsequent iterations.
Optimizing utilization or batching typically increases queues and makes the bottleneck worse.
The exhibit shows a review bottleneck (high queue and wait time), so limiting WIP and swarming reduces waiting waste and improves flow.
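The relationship between queue size and wait time in the snapshot follows Little's Law (assuming a reasonably stable system). The throughput figure below is an assumption chosen to be consistent with the exhibit, not data from it.

```python
# Little's Law: average wait time = average WIP / average throughput.
def avg_wait_days(avg_wip: float, throughput_per_day: float) -> float:
    """Estimate the average time an item spends waiting in a column."""
    return avg_wip / throughput_per_day

# Code Review: 12 items queued; assume ~3 items/day leave the column,
# consistent with the 4.0-day average wait shown in the exhibit.
print(avg_wait_days(12, 3))  # -> 4.0

# Capping the queue at 6 items at the same throughput halves the wait.
print(avg_wait_days(6, 3))   # -> 2.0
```

This is why a WIP limit, rather than higher utilization, is the lever that shortens the queue: wait time scales directly with queue size when throughput is fixed.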
Topic: Product
A product team is refining the backlog with two stakeholder groups. The team has a working agreement to keep the top of the backlog dependency-free so they can start immediately.
Three items are candidates for the next iteration:
As the agile practitioner, what is the BEST next action to prioritize these items with stakeholders?
Best answer: C
What this tests: Product
Explanation: The best move is to align stakeholders on a transparent ordering using value, risk, and dependencies as explicit criteria. Putting a dependency-blocked item at the top violates the team’s flow agreement and is likely to cause thrashing. Ordering the security risk reduction and the dependency-free high-value work ahead enables immediate start while managing risk.
Backlog ordering is a collaborative decision that balances value, risk reduction, and dependencies while preserving flow. Here, the team explicitly wants the top of the backlog to be dependency-free, so an item blocked by an unscheduled vendor change should not be placed at the top even if its value is high. A facilitated discussion (e.g., lightweight relative ranking) helps stakeholders see the trade-offs and agree on sequencing: address meaningful risk (the security item) and deliver dependency-free value while planning the dependency item for when the vendor change is confirmed. Uncertainty can justify earlier discovery, but it should be weighed alongside risk exposure and delivery constraints rather than used as a single deciding rule.
It makes trade-offs explicit, reduces risk early, and respects the team agreement by not putting a blocked dependency at the top.
Topic: Mindset
You are supporting an agile team that is getting frequent requests for “more status.” The team already maintains the following:
Team radiators (updated daily)
- Kanban board: WIP 7 (limit 8); blockers flagged
- Sprint goal: Enable SSO for admin portal
- Release burn-up: 62/80 points done
New stakeholder requests this week
- Ops manager: weekly 2-page status report
- PMO: weekly slide deck + RAID log
- Sales: mid-sprint progress email
What is the best next action to keep transparency high without creating excessive reporting overhead?
Best answer: B
What this tests: Mindset
Explanation: The exhibit shows strong existing transparency via updated radiators, but stakeholders are requesting overlapping push reports. The best response is to streamline to a single, shared, pull-based view (and an agreed cadence) that meets information needs with minimal extra work. This keeps transparency high while reducing duplication and context switching.
High transparency in agile is achieved through visible, frequently updated artifacts that stakeholders can use to self-serve. The exhibit already provides a board with blockers, a clear sprint goal, and a release burn-up—yet multiple stakeholders are asking for different status formats, which creates waste and distracts the team.
A good next step is to facilitate an alignment on:
- which questions each stakeholder actually needs answered
- a single, shared, pull-based view (board, sprint goal, burn-up) as the source of truth
- an agreed cadence for any conversations the radiators cannot cover
This keeps the information flow consistent and reduces redundant, push-based reporting.
It preserves transparency by creating a single, pull-based information radiator instead of multiple redundant reports.
Topic: Leadership
Mid-iteration, the development team reports an “urgent blocker” and asks you (the agile lead) to “just get it fixed.” A senior stakeholder is pressuring for a same-day resolution, but you have little detail beyond “we can’t proceed.”
What is the best question to ask first before taking action?
Best answer: B
What this tests: Leadership
Explanation: Start by clarifying what is actually blocked and what the team has already attempted, so the next step is targeted and supports team learning. This keeps the team engaged in problem-solving rather than having the agile lead bypass ownership by immediately escalating or assigning solutions. Once the impediment is understood, you can facilitate swarming, escalation, or scope decisions as appropriate.
When an urgent blocker is reported with insufficient detail, the agile leader’s first move is facilitation: make the impediment explicit and help the team inspect what’s happening. Asking what the team has tried and what help they need preserves team ownership and creates a learning loop (so the team improves its ability to remove similar blockers in the future).
A good first question covers three things: what exactly is blocked, what the team has already tried, and what specific help they need.
Escalation, reassignment, or scope changes can be valid later, but acting before understanding the impediment risks solving the wrong problem and weakening the team’s accountability.
It clarifies the impediment while keeping ownership with the team and enabling a focused, collaborative next step.
Topic: Delivery
After several escaped defects, an agile team agreed in the retrospective to prevent incomplete work from being marked done. They decided to add peer review and automated regression tests as explicit quality steps and want to sustain this improvement, not just try it once.
Which metric/evidence/artifact best validates that the improvement is being sustained in delivery?
Best answer: D
What this tests: Delivery
Explanation: Sustained improvement is validated when the team updates a durable policy (such as the definition of done) and consistently uses it to control what can be called “Done.” Compliance evidence at the work-item level demonstrates the team is actually applying the change in day-to-day delivery, not merely discussing or training on it.
To sustain a continuous improvement, the team should make the new behavior “the way we work” by updating a persistent agreement (working agreements, team policies, or definition of done) and then using it as a gate for completion. The strongest validation is not that the team talked about the change or completed activities, but that delivery evidence shows the policy is being followed repeatedly.
Good validation typically includes:
- an updated definition of done that names peer review and automated regression tests as completion gates
- work-item-level evidence (review records, passing regression runs) that completed items met the updated policy
- trends across iterations showing the policy is applied consistently, not just once
This directly ties the improvement to routine execution and provides inspectable proof over time.
A definition of done update plus evidence that completed items consistently met it best shows the improvement is embedded and sustained.
Topic: Delivery
Mid-iteration, the team reports a blocker: the shared test environment is down, so they cannot run the final acceptance tests for a high-priority story. A stakeholder asks the agile practitioner to “just mark it done and we’ll test after release.”
Before the team selects a course of action, what should the agile practitioner ask/verify first?
Best answer: C
What this tests: Delivery
Explanation: Start by clarifying what “done” means for the blocked work—specifically which acceptance criteria and Definition of Done items depend on the test environment. Without that information, the team cannot responsibly choose between options like swarming on a workaround, splitting the story, or reordering work. This keeps decisions grounded in quality and shared agreements rather than pressure.
When a blocker appears, the team’s decision should be anchored to the delivery agreements that define releasable value. Verifying which acceptance criteria and Definition of Done checks are currently impossible makes the constraint explicit and prevents premature decisions (like declaring work complete without required validation).
Once the unmet criteria are clear, the team can collaboratively evaluate practical options such as:
- swarming on a workaround (for example, an alternate environment) to complete the blocked checks
- splitting the story so the verifiable part can finish this iteration
- reordering work so unblocked items progress while the environment is restored
The key is to clarify required completion conditions before debating approvals, overtime, or external promises.
Clarifying the unmet DoD/acceptance criteria and constraints establishes what “done” requires before choosing any workaround or resequencing.
Topic: Delivery
A product team uses a Kanban board. Over the last two weeks, average cycle time increased from 6 to 11 days and defects are being found late. Today the board shows 12 items in progress across Development and Testing, with several blocked by reviews. The team’s working agreement says “finish work before starting new work,” but stakeholders are pushing to start two more high-value items immediately.
What is the BEST next action for the agile practitioner to improve flow?
Best answer: B
What this tests: Delivery
Explanation: The core issue is too many items in progress, which creates queues, delays feedback, and pushes testing/quality checks late. The best next step is to make WIP limits explicit and enforce a pull approach so the team stops starting and starts finishing. Swarming to complete and unblock the most valuable work quickly restores flow and creates a credible throughput signal for stakeholder conversations.
WIP limits are a primary lever for improving flow because they reduce queue length, multitasking, and hidden handoffs that inflate cycle time. In this scenario, the team is violating its own “finish before start” agreement, and the large amount of in-progress and blocked work is a clear flow constraint.
A strong next step is to:
- make the WIP limits explicit on the board and enforce pull (“stop starting, start finishing”)
- swarm on the most valuable blocked and in-progress items to finish and unblock them
- use the resulting throughput data as a transparent basis for negotiating when new items can start
Starting more work (even if “high value”) typically increases cycle time and delays feedback; finishing creates capacity and improves predictability.
Limiting WIP and focusing on finishing reduces multitasking and queues, improving flow and quality while providing a transparent basis to negotiate new starts.
Topic: Leadership
A cross-functional team has had the same conflict in three consecutive iterations: developers consider a story “done” when code is merged, while testers consider it “done” only after exploratory testing and release notes are completed. The disagreement keeps resurfacing in daily coordination and is creating handoff delays.
As the agile practitioner, you want to reduce this recurring conflict by establishing working agreements. What should you verify or ask first before deciding what agreement to introduce?
Best answer: D
What this tests: Leadership
Explanation: Working agreements are most effective when they target a clearly identified, recurring friction point. Here, the recurring conflict is about completion criteria and handoffs, so the first step is to uncover the team’s current Definition of Done and the specific mismatched expectations between roles. With that clarity, the team can co-create an explicit agreement that prevents the same argument from repeating.
To reduce recurring conflict sources, first make the conflict “observable” by clarifying what people believe is true today. In this scenario, the team is repeatedly arguing about when work is complete, which is a classic signal of an unclear or inconsistently applied Definition of Done (and/or missing working agreements about testing, documentation, and handoffs).
Start by asking the team to describe their current “done” criteria and where the mismatch shows up (e.g., merge vs. validated vs. documented). Then facilitate the team to create or refine a shared DoD/working agreement that makes expectations explicit and actionable (including who does what, when, and what evidence is required). The key is to diagnose the specific recurring trigger before jumping to scope changes, blame, or tooling.
Clarifying the current shared (or conflicting) Definition of Done surfaces the specific ambiguity a working agreement should resolve.
Topic: Mindset
A product team is exploring a new machine-learning feature. Uncertainty is high: they do not know whether users will trust the recommendations or whether the model can meet latency needs. The product owner wants evidence within two weeks before committing more budget.
Which strategy is the best safe-to-fail experiment?
Best answer: D
What this tests: Mindset
Explanation: When uncertainty is high, the best approach is a safe-to-fail experiment that is small, reversible, and timeboxed. Building a thin prototype and testing the riskiest assumptions with real users creates fast learning while limiting cost and impact. The two-week constraint reinforces the need for an evidence-driven, low-commitment probe.
A safe-to-fail experiment is designed to maximize learning while minimizing the cost of being wrong. In high uncertainty, prioritize validating the riskiest assumptions first (desirability, feasibility, viability) using the smallest test that can produce credible evidence. Timeboxing keeps the investment controlled, and using real users (or realistic conditions) improves signal quality.
A strong safe-to-fail experiment is typically small in scope, timeboxed, reversible if it fails, and aimed at the riskiest assumption, with a clear signal agreed in advance for what counts as success or failure.
The key takeaway is to learn early with constrained bets rather than committing to irreversible delivery work before evidence exists.
A timeboxed prototype with user validation limits downside while quickly generating evidence to decide whether to pivot or proceed.
Topic: Product
Which term describes a decision to stop further investment in a low-value product approach (based on learning) and redirect effort to a different approach to better achieve the desired outcome?
Best answer: C
What this tests: Product
Explanation: This situation calls for a pivot: using feedback and evidence to deliberately change direction rather than continuing to fund work that is not producing value. The focus is on redirecting effort to a different approach to improve outcomes, not merely limiting time or work in progress.
In agile product management, the core decision when evidence shows an approach is not delivering sufficient value is to pivot: intentionally change the strategy, feature approach, or target solution based on validated learning. Pivoting helps avoid the sunk-cost trap by treating prior investment as learning and reallocating capacity toward options with better expected outcomes. It is a value-delivery decision driven by feedback, experiments, and outcome measures (e.g., adoption, cycle time impact, customer satisfaction).
A timebox constrains how long you try something, an MVP defines the smallest slice to learn, and a WIP limit controls flow, but none of those terms specifically describe the decision to change direction because the current work is low value.
A pivot is an intentional change in direction based on validated learning to pursue higher value.
Topic: Delivery
A product manager asks an agile team to “share metrics with leadership weekly.” The team is worried leaders will compare teams and use the numbers for performance evaluation. You want to keep transparency while preventing the metrics from being weaponized.
Which agile practice best matches this situation?
Best answer: B
What this tests: Delivery
Explanation: Use transparent, visible metrics as an information radiator, but frame them with context and trends so they drive learning and improvement. Team-owned metrics reviewed in conversation help leadership interpret signals correctly and reduce misuse. This preserves transparency without turning numbers into targets.
The core concept is to radiate information while keeping metrics in service of inspection, adaptation, and continuous improvement—not as a tool for judging people or comparing teams. An information radiator (physical or digital) makes key flow and outcome signals visible, but it should emphasize trends over time, include context (e.g., work type changes, incidents, discovery), and be paired with regular conversations about what the team will try next. This keeps transparency high and shifts the focus from “who is better” to “what are we learning and improving.” A key takeaway is that hiding metrics reduces trust, while decontextualized or individualized metrics invite gaming and weaponization.
Radiating shared, contextualized trends supports transparency and reinforces using metrics for learning rather than comparison.
Topic: Delivery
A team using a Kanban board reports that average cycle time has doubled over the last two weeks. A manager asks you to “speed things up” by pushing more work into development to raise throughput.
Before recommending any change, what should you verify or observe first?
Best answer: D
What this tests: Delivery
Explanation: A metric shift like longer cycle time is only actionable when paired with flow context. Observing where WIP is building up and which items are blocked helps identify the actual bottleneck (e.g., review/QA/approval) before changing policies or pushing more work. This avoids worsening the system by increasing WIP in the wrong place.
Agile metrics describe outcomes, but they don’t explain causes by themselves. When cycle time increases, the first step is to look at the flow system for context—where WIP is accumulating and whether work is blocked—so you can locate the constraint driving the delay. Pushing more work into development without this check often increases WIP, creates longer queues, and further degrades cycle time.
A practical first check is to walk the board with the team: note which workflow states hold the most items, how long blocked items have been waiting and why, and whether the delay concentrates at a specific stage such as review, QA, or approval.
Once you know the constraint and blocker causes, you can choose an improvement that targets that stage rather than adding more work-in-progress.
You need flow context (WIP buildup and blockers) to interpret the metric and target the actual constraint.
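The “check flow context first” step can be sketched as a small script over a board snapshot. The item IDs, states, ages, and blocked flags below are invented for illustration:

```python
from collections import Counter

# Hypothetical snapshot of the Kanban board: each tuple is
# (item id, current state, days in state, blocked?).
board = [
    ("A1", "dev", 2, False),
    ("A2", "review", 6, True),
    ("A3", "review", 5, False),
    ("A4", "review", 7, True),
    ("A5", "qa", 4, True),
    ("A6", "dev", 1, False),
]

# WIP per state shows where work is accumulating.
wip_by_state = Counter(state for _, state, _, _ in board)

# The oldest blocked items point at the constraint to investigate first.
blocked = sorted((days, item, state) for item, state, days, b in board if b)

print(wip_by_state.most_common(1))  # state with the most WIP
print(blocked[-1])                  # longest-blocked item
```

Even this crude view shows review holding the most WIP and the oldest blockers, which is the evidence needed before deciding whether pushing more work into development would help or hurt.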
Topic: Delivery
A product team delivers small user stories weekly, but work frequently stalls after development is “done.” The team hands stories to a separate QA group and then to a security reviewer; the average cycle time is 12 days, with about 7 days spent waiting in queues. Stakeholders want faster throughput, but quality and security checks cannot be reduced, and the team is already exceeding its WIP agreement.
What is the BEST next action to increase throughput?
Best answer: A
What this tests: Delivery
Explanation: The biggest delay is queue time created by handoffs to specialized groups. The best next action is to bring the needed skills into the work as it happens (swarming/pairing) and redefine “done” to include those checks. This reduces dependencies while still meeting quality and security constraints and reinforcing WIP discipline.
When most cycle time is waiting, throughput improves fastest by reducing handoffs and dependencies, not by starting more work. In this scenario, stories leave the team and sit in QA and security queues, creating delays and additional coordination overhead. A practical next step is to collaborate with QA and security to complete stories in a single, end-to-end flow (for example, swarming or pairing on the highest-value items) and to update the Definition of Done so testing and required security activities are completed before a story is considered finished.
This approach shortens queue time by finishing stories in one end-to-end flow, keeps the required quality and security checks intact, and reinforces the team’s WIP agreement instead of violating it.
The key takeaway is to optimize for finishing work with fewer transitions, not maximizing utilization of one function.
It reduces handoffs and external queues by completing work with the needed skills in one flow while maintaining quality and honoring WIP limits.
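The scenario’s numbers make the diagnosis concrete: flow efficiency (active time divided by total cycle time) is only about 42%, so most of each story’s life is spent waiting rather than being worked:

```python
# Flow efficiency = active time / total cycle time.
# Numbers from the scenario: 12-day average cycle time, ~7 days in queues.
cycle_time_days = 12
queue_time_days = 7
active_time_days = cycle_time_days - queue_time_days  # 5 days of real work

flow_efficiency = active_time_days / cycle_time_days
print(f"{flow_efficiency:.0%}")  # 42%
```

With well over half the cycle time spent in queues, attacking the handoffs has far more leverage than making the active work faster.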
Topic: Leadership
A product team and a compliance group repeatedly clash during sprint reviews. Stories are often marked “done” by the team but later rejected by compliance, triggering escalations and blame. You facilitated a session to clarify decision rights, align incentives around shared outcomes, and update the Definition of Done.
Which metric/evidence would BEST validate that the conflict is being resolved at a systemic level?
Best answer: B
What this tests: Leadership
Explanation: Systemic conflict tied to role clarity and incentives should improve the work system, not just meeting attendance. A sustained drop in compliance-driven rework and shorter time from “done” to accepted indicates clearer ownership, better upstream alignment, and smoother flow across the boundary. That evidence connects directly to outcomes the groups were fighting about.
When conflict repeats at handoffs (e.g., “done” vs. “accepted”), it often signals a system problem: unclear decision rights, misaligned incentives, or mismatched working agreements (DoD/acceptance expectations). The best validation is outcome evidence that the system is improving across the boundary, such as reduced rework and faster acceptance.
Look for trends that demonstrate fewer compliance-driven rejections and less rework, a shorter interval from “done” to accepted, and fewer escalations across the team–compliance boundary.
Activity counts or newly created documents can support the change, but they do not validate that the underlying incentives and responsibilities are now producing better delivery outcomes.
It shows whether clarified roles and aligned incentives are reducing friction, rework, and handoff delays.
Topic: Product
An agile team is starting iteration planning for a customer-facing release. Two stakeholders are pushing competing backlog items: one wants new onboarding features, the other wants performance improvements. The product owner asks you to help the team set an iteration goal that will guide tradeoffs if they cannot finish everything.
What should you verify or ask first before proposing an iteration goal?
Best answer: C
What this tests: Product
Explanation: A goal that guides tradeoffs needs a shared definition of what “success” looks like for the iteration or release. Clarifying the intended customer/business outcome and the acceptance expectations creates a decision rule for prioritizing and de-scoping work. Capacity and politics matter later, but they don’t define the goal itself.
A clear iteration or release goal is an outcome-focused statement that helps the team decide what to do when scope, time, or capacity conflicts arise. In an underspecified situation with competing backlog items, the first thing to clarify is the desired user/business outcome and the criteria for accepting that outcome (what “done and valuable” means for this timebox). Once that is explicit, the team can evaluate onboarding vs. performance work by asking which items most directly achieve the goal and meet acceptance expectations, and then make tradeoffs transparently.
The key takeaway is to clarify value and acceptance before discussing capacity, authority, or tactics.
An iteration goal must be anchored in the desired outcome and clear acceptance to guide tradeoffs among competing work.
Topic: Leadership
An agile team delivering an internal API has missed its iteration goal for three iterations. Their board shows WIP climbing from 8 to 15 items, several stories stay blocked for days, and the same integration defect has been reopened twice (rework). Tension between Development and Operations is rising, and daily standups often end with “we’re waiting for the Scrum Master to talk to Ops.”
The team already has a working agreement with Ops (shared channel + swarm protocol), but when urgent blockers occur the Scrum Master bypasses it, calls an Ops director, negotiates an ad-hoc fix, and then assigns tasks to the team.
What is the most likely underlying cause?
Best answer: A
What this tests: Leadership
Explanation: The strongest clue is that the team already has an agreed path with Ops, yet urgent blockers are handled outside the team and then handed down as instructions. This pattern creates dependency (“waiting for the Scrum Master”) and reduces shared ownership and root-cause learning, which in turn contributes to repeated rework, growing WIP, and conflict.
The underlying issue is not simply that blockers exist; it’s how they are being handled. When a leader repeatedly bypasses team agreements and takes over impediment resolution, the team learns to wait instead of swarming, collaborating with partners, and capturing learning (e.g., causes, prevention, and working agreement updates). That “hero” pattern centralizes decisions, reduces transparency, and reinforces conflict because the people doing the work aren’t jointly solving the problem with Ops.
A better stance is to facilitate rapid, team-owned resolution: reinforce the agreed shared channel and swarm protocol with Ops, let the team negotiate and implement fixes directly, and capture causes and prevention steps so the working agreement improves over time.
Removing impediments should increase team capability, not replace it.
By bypassing the agreed flow and directing fixes, the Scrum Master prevents team-owned problem solving and reinforces waiting behavior.
Topic: Product
A product team has missed its last three iteration goals. Their board shows many items stuck in “Dev done, waiting for test,” and WIP is steadily increasing even though the team is following posted WIP limits. Rework is common because integration issues and missing acceptance details are found late, and developers and testers are blaming each other.
In backlog refinement, the PO brings items like “Build reporting module,” and the team breaks them into UI/API/DB pieces that move between specialists.
What is the most likely underlying cause?
Best answer: C
What this tests: Product
Explanation: The team is decomposing backlog items into component work that must traverse multiple specialists before any integrated, testable value exists. That structure amplifies handoffs and waiting, so WIP piles up in downstream states and defects/acceptance gaps are discovered late. The persistent missed goals and rework point to poor slicing rather than a lack of effort.
The core issue is how the backlog is being refined and split. When work is sliced by technical layers (UI/API/DB), each “piece” is not independently valuable or releasable, so items accumulate in queues between specialists and feedback arrives late (after integration). That delay increases rework and drives role-based conflict because downstream functions (like testing) become a bottleneck.
Better splitting targets thin slices that can flow across the workflow and produce a small, testable outcome, for example a narrow user scenario that touches UI, API, and database together and can be integrated and accepted on its own.
If limits and a reasonable done policy exist but flow still stalls, the backlog item structure is often the root cause.
Component-based slicing creates handoffs and queues, delaying feedback and increasing late rework and WIP.
Topic: Product
Mid-iteration, an agile team is building “Advanced Export,” a feature requested months ago. After releasing a thin slice to a pilot group, analytics show near-zero usage and support reports say users prefer the existing export.
Constraints:
What is the BEST next action?
Best answer: D
What this tests: Product
Explanation: The data indicates the export feature is low value, while the compliance story has a time-critical need. The best move is to stop investing in the low-value work and immediately redirect effort by updating the plan with the product owner. This also reduces WIP, improving flow and helping quality recover.
Value delivery is about maximizing outcomes, not finishing work simply because it was started. When evidence shows a feature is unlikely to deliver meaningful value, continuing is waste and often crowds out higher-value items—especially when WIP is constrained and quality is declining.
In this situation, the agile next step is to partner with the product owner to stop further investment in the low-value export work, update the backlog and release plan, and pull the time-critical compliance story next.
The key takeaway is to treat work as optional until it proves value, and to redirect quickly when the cost of delay for other items is higher.
It avoids sunk-cost bias by canceling low-value work, freeing WIP and capacity to focus on the higher-value compliance item while improving flow and quality.
Topic: Leadership
In a team meeting, the facilitator asks everyone to show a hand signal from 0–5 to indicate confidence in a proposed approach. Several people show 1–2 and avoid eye contact, so the facilitator pauses to invite concerns before proceeding.
Which agile facilitation technique is being used?
Best answer: A
What this tests: Leadership
Explanation: The scenario describes a structured consensus check using non-verbal hand signals to reveal hesitation and enable a safe pause. This is Fist of Five, where the facilitator uses low scores and body-language cues to surface risks, misunderstandings, or objections before committing to a decision.
Fist of Five is a quick facilitation technique for checking alignment and confidence using a visible, non-verbal 0–5 vote (often 0 = blocking concerns, 5 = strong support). The key facilitation response is to treat low numbers and other hesitation cues (e.g., avoiding eye contact, closed posture) as signals to slow down and ask what would increase confidence.
Typical facilitator moves include asking the low votes what concerns they see, inviting suggestions for what would raise their confidence, and re-checking alignment after the discussion.
This keeps decision-making transparent while encouraging quieter voices to be heard without putting them on the spot first.
It uses a non-verbal 0–5 hand signal to quickly gauge consensus and prompts discussion from low scores.
Topic: Delivery
A product team is preparing a customer release in six weeks. Stakeholders want frequent visibility into whether the release is truly on track, but the team previously experienced metrics being used to compare individual developers, which reduced collaboration.
Which metric/evidence should the agile practitioner radiate to best validate progress and readiness while minimizing the risk of metric weaponization?
Best answer: C
What this tests: Delivery
Explanation: Radiate a metric that reflects validated outcomes, not activity, and that is meaningful at the team/system level. A release burnup anchored to accepted (Done) work shows real progress toward the release and naturally incorporates scope change. Because it is team-level and trend-based, it reduces incentives to game or use it to rank individuals.
To validate progress and readiness in an agile context, favor evidence that represents completed, verified outcomes (e.g., accepted work meeting the Definition of Done) and communicate it as a trend with context. This supports transparency while discouraging gaming and blame.
A release burnup chart uses only accepted (Done) items, making it a stronger signal than “busy” indicators like hours or items started. It also makes scope change visible, helping stakeholders understand trade-offs without turning the metric into a proxy for individual performance. The key is to radiate it as a shared information radiator for decision-making, not as a tool for ranking people.
Activity and utilization measures are easy to inflate and often drive the wrong behaviors.
A release burnup based on accepted work provides outcome-based, team-level progress toward the release and is less likely to be weaponized against individuals.
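As a sketch, the data behind a release burnup is just two series per reporting period, accepted work and total scope; the numbers below are invented for illustration, and the naive linear projection is only a rough readiness signal, not a commitment:

```python
# Hypothetical weekly snapshots for a release burnup:
# accepted = story points meeting the Definition of Done,
# scope    = total points currently in the release.
weeks = [1, 2, 3, 4]
accepted = [8, 15, 24, 31]
scope = [60, 60, 66, 66]  # the week-3 scope change stays visible

# The gap between the lines is remaining work; a naive linear projection
# hints whether the accepted-work trend reaches scope by the release date.
rate = (accepted[-1] - accepted[0]) / (weeks[-1] - weeks[0])
remaining = scope[-1] - accepted[-1]
weeks_to_finish = remaining / rate
print(round(weeks_to_finish, 1))
```

Because both the trend and any scope change are visible at team level, the chart invites a conversation about trade-offs rather than a ranking of individuals.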
Topic: Delivery
Midway through an iteration, your team demos a working slice of a new “bulk upload” feature to 12 pilot users. Feedback shows 9 users would not use it unless it supports spreadsheet templates and validation rules; the current backlog item does not include those needs.
Constraints:
What is the BEST next action?
Best answer: C
What this tests: Delivery
Explanation: Early feedback indicates the current implementation is unlikely to deliver usable value, so the team should pivot by adjusting scope and priorities. Splitting the work into thin, testable slices lets the team incorporate the highest-value learning next while keeping WIP limits and the Definition of Done intact. This preserves flow and quality and makes the next increment more likely to be adopted.
When users say they won’t use the feature without specific capabilities, that’s a strong signal to pivot or re-scope rather than simply “finishing” the original plan. The best response is to reshape the backlog into smaller slices that reflect the validated needs (templates/validation), then reorder to deliver the most valuable, testable slice next.
A practical sequence is to capture the validated needs (spreadsheet templates, validation rules) as new acceptance criteria, split the item into thin, testable slices, reorder the backlog with the product owner, and pull the highest-value slice next within the WIP limit.
This approach respects the team’s WIP limit and Definition of Done while aligning delivery with real user value; trade show pressure is managed by resetting expectations based on evidence instead of pushing low-value scope through.
It uses the feedback to adjust priorities and scope while protecting flow (WIP) and quality (DoD) by deferring unfinished work to the next plan.
Topic: Delivery
A product team using a Kanban board has complained that work is “stuck” and releases are slipping. Over the last month, average cycle time increased from 6 to 11 days while the number of items in progress stayed high (typically 15–18). The team also reports frequent waiting on reviews and handoffs, but defect rates have not increased.
Which improvement should the agile practitioner implement next for the best impact?
Best answer: D
What this tests: Delivery
Explanation: The metrics point to a flow constraint: cycle time is worsening while WIP remains high and work is waiting on handoffs. The most direct next improvement is to limit work in progress and make pulling/finishing work explicit, which reduces context switching and exposes bottlenecks for focused resolution.
When cycle time rises and WIP stays high, the system is overloaded: too much work is started relative to the team’s capacity to finish, and waiting states (reviews/handoffs) amplify delays. The lean/agile improvement with the fastest expected impact is to implement explicit WIP limits per workflow state and a clear pull policy (and then swarm to finish) so the team stops starting and starts finishing. This quickly improves flow by reducing multitasking, shortening queues, and making the true constraint visible (often review capacity), enabling targeted follow-up improvements. Changes focused primarily on estimation cadence or end-of-process testing don’t directly address the observed queuing and waiting that are driving cycle time.
High WIP with rising cycle time signals a flow problem best addressed by limiting WIP and pulling work to completion.
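Little’s Law makes the relationship quantitative: in a stable system, average cycle time ≈ average WIP ÷ average throughput, so at constant throughput, cutting WIP cuts cycle time roughly in proportion. The numbers below are illustrative, using the midpoint of the scenario’s 15–18 item range:

```python
# Little's Law (steady state): avg cycle time = avg WIP / avg throughput.
wip = 16.0         # items in progress (midpoint of 15-18)
cycle_time = 11.0  # current average days per item
throughput = wip / cycle_time  # implied finish rate, items per day

# If throughput stays the same, halving WIP roughly halves cycle time.
cycle_time_after = (wip / 2) / throughput
print(round(cycle_time_after, 1))  # 5.5 days
```

The real system is messier than the steady-state formula, but the direction of the effect is why “stop starting, start finishing” is the highest-leverage move here.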
Topic: Mindset
A cross-functional team says retrospectives feel “unsafe,” and a few people dominate the conversation. You’re facilitating the next retrospective and want broad dialogue and shared understanding.
Exhibit: Retro notes (last session)
Start: “Let’s hear from others too.”
Stop: “Interrupting”
Stop: “Debating solutions before agreeing on the problem”
More: “Quiet time to think before speaking”
More: “Clarify what we mean by ‘done’”
What is the best facilitation approach for the next retrospective?
Best answer: A
What this tests: Mindset
Explanation: The exhibit shows people need space to think, fewer interruptions, and agreement on the problem before jumping to solutions. A structured facilitation technique that starts with individual reflection and then progressively broadens discussion helps quieter voices contribute safely. This increases dialogue quality and leads to shared understanding the team can act on.
This situation calls for facilitation that reduces social risk and creates equal airtime so the team can align on what the real problem is. Techniques that begin with silent individual thinking and then move through small-group sharing to whole-group synthesis encourage participation from quieter members, prevent immediate debate, and help the group converge on common themes.
A practical flow is silent individual reflection first, then sharing in pairs or small groups, and finally whole-group synthesis of common themes, with agreement on the problem reached before any solutions are debated.
Compared with quick voting or having a single person decide, this approach prioritizes dialogue and shared understanding as the foundation for improvement.
It creates psychological safety through individual reflection and structured, inclusive dialogue that builds shared understanding before solutions.
Topic: Mindset
Midway through a 2-week iteration, the team has three stories in progress. One story is 90% complete and still supports the current release objective. Two other stories are only started.
Yesterday, customer feedback indicated one of the started stories is no longer valuable, and a new compliance requirement was discovered that could block release if not addressed soon.
Which TWO actions should the agile practitioner recommend? (Select TWO)
Correct answers: B, C
What this tests: Mindset
Explanation: When value and risk change, the team should quickly re-plan rather than blindly follow the original plan. At the same time, finishing a nearly done item that still aligns to goals prevents throwing away invested effort and delivers value sooner. Reassessing WIP and the backlog with the product owner enables an intentional pivot toward the compliance risk.
The core decision is balancing economic waste from abandoning WIP against the cost of delay and risk exposure from continuing the wrong work. In agile, new information is a trigger to adapt plans, but adaptation doesn’t mean stopping everything.
A practical response is to finish the nearly done story that still supports the release objective, stop the started story that feedback showed is no longer valuable, and re-plan with the product owner so the new compliance requirement is addressed early.
The key is making WIP decisions using current value and risk, not the original iteration plan.
Finishing a nearly done, still-valuable item avoids waste and realizes value quickly.
A rapid value-and-risk review enables stopping low-value WIP and prioritizing compliance risk early.
Topic: Leadership
An agile coach joins a product team that is starting a new internal developer platform. Stakeholders disagree: Operations wants fewer incidents, Security wants stronger controls, and Application teams want faster deployments. The sponsor asks you to “write a purpose and vision statement” today so everyone can align.
What should you ask or verify FIRST before drafting the statement?
Best answer: B
What this tests: Leadership
Explanation: To align stakeholders, a purpose and vision statement should describe who the product serves, the problem it addresses, and the outcomes it seeks to achieve. Clarifying the primary user(s) and success measures creates a shared “north star” that can reconcile competing stakeholder preferences. Only then can solution choices and priorities be evaluated against that direction.
A shared purpose and vision is an alignment tool, so it should be grounded in customer/user value and the outcomes the organization expects. In this scenario, stakeholders are proposing competing solution preferences (reliability, controls, speed) without agreement on the product’s primary beneficiaries and what success looks like. Asking first about the key users, their pain points, and the measurable outcomes enables you to craft a concise statement that unifies stakeholders around intent rather than features.
A practical first clarification is to ask who the platform’s primary users are, what problems it must solve for them, and which measurable outcomes will define success.
Once that is clear, constraints, scope, and delivery decisions can be aligned to the vision instead of driving it.
A clear purpose and vision must anchor on who it serves and the intended outcomes so stakeholders can align on a shared direction.
Topic: Leadership
Stakeholders say they are surprised by changes late in development because the team mostly reports progress as “percent complete” in email. As the agile lead, you want a practice that reinforces transparency, enables frequent inspection, and supports rapid adaptation of priorities.
Which is the BEST match?
Best answer: D
What this tests: Leadership
Explanation: Frequent reviews/demos of a working increment create shared visibility into real progress and outcomes, not proxy reporting. They provide a natural cadence for stakeholders to inspect what was built and for the team to adapt plans by reordering and refining the backlog based on feedback. This directly reinforces transparency, inspection, and adaptation as a closed feedback loop.
To reinforce transparency, inspection, and adaptation, use a mechanism that shows actual product behavior and immediately turns learning into changed plans. A review/demo of a working increment is an information-rich event: stakeholders can see what’s truly done, ask questions, and validate assumptions. The team then adapts by updating the backlog (priority, scope, acceptance criteria) and upcoming iteration goals based on what was learned.
Key signals of the right practice in this scenario are a regular cadence, stakeholders inspecting working software rather than status reports, and backlog priorities visibly changing in response to what is learned.
Metrics and internal syncs can support the loop, but they don’t replace stakeholder inspection of real increments.
Showing working product regularly creates transparent evidence for inspection and enables immediate backlog adaptation based on stakeholder feedback.
Topic: Leadership
Mid-sprint, two developers argue loudly in front of the team about a refactoring approach. One feels their concerns are being dismissed; the other says the debate is “wasting time” and wants to move on. The conflict is slowing work and making others disengage. As the agile practitioner, what should you do first to de-escalate and keep collaboration productive?
Best answer: C
What this tests: Leadership
Explanation: The best first move is to use empathy to reduce tension before trying to solve the technical disagreement. A brief pause, acknowledgment of emotions, and structured listening helps both people feel heard and lowers defensiveness. That restores a safe, collaborative space so the team can make a decision without further disruption.
When tension is rising, optimizing for flow and decision quality requires restoring psychological safety first. An empathic facilitation step (acknowledging what you observe, inviting each person to share concerns, reflecting back what you heard, and re-centering on the sprint goal) de-escalates the emotional charge so the team can return to productive collaboration.
A practical sequence is to pause the discussion, name the tension you observe, invite each developer to state their concern while the other listens, reflect back what was heard, and then re-center the group on the sprint goal and a decision path.
Jumping straight to decisions or deferring the issue keeps the underlying frustration intact and often increases conflict later.
Empathy and active listening reduce defensiveness quickly, restoring psychological safety so the team can collaborate on a decision.
Topic: Mindset
A delivery team uses a simple Kanban board, but stakeholders report that requests “sit for days” and the team frequently starts new work before finishing older items. In the last review, no one could quickly explain where work is getting stuck or what to address first.
Which TWO visuals should the agile practitioner introduce to improve transparency for timely decisions? (Select TWO)
Correct answers: A, D
What this tests: Mindset
Explanation: To support timely decisions, the team needs visibility into where work is accumulating and which items are stalled. Visualizing blocked work (with aging and reasons) enables immediate action to remove impediments. Visualizing flow with a cumulative flow diagram highlights bottlenecks and imbalance across workflow states so the team can manage WIP and improve throughput.
Transparency for fast decisions comes from making the work system visible: what items exist, where they are in the workflow, how much WIP is in each state, and what is blocked. A cumulative flow diagram (CFD) reveals flow over time by showing WIP buildup by state, making bottlenecks and widening queues obvious. Adding explicit blocked markers with reason and blocked age prevents “hidden” waiting, helps swarm on impediments, and supports policies like not pulling more work while blocked items accumulate.
These visuals focus attention on managing flow (reduce WIP, unblock, balance capacity) rather than reporting progress or utilization.
Making blocked work explicit (and aging) enables fast unblocking decisions and prevents hidden queues.
A cumulative flow diagram visualizes flow and bottlenecks by showing where WIP is accumulating across states.
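A CFD can be derived directly from a daily state log: for each day, count the items at or beyond each workflow state and stack the counts as bands. A minimal sketch with invented items and states:

```python
from collections import Counter

# Hypothetical daily state log: {day: {item: state}}.
states = ["todo", "doing", "review", "done"]
log = {
    1: {"A": "todo", "B": "todo", "C": "doing"},
    2: {"A": "doing", "B": "doing", "C": "review"},
    3: {"A": "doing", "B": "review", "C": "review"},  # review band widening
}

def band_counts(day_snapshot):
    """Count items at or beyond each state (the CFD's stacked bands)."""
    c = Counter(day_snapshot.values())
    return [sum(c[s] for s in states[i:]) for i in range(len(states))]

for day in sorted(log):
    print(day, band_counts(log[day]))
```

A band that keeps widening over successive days (here, review) is the visual signature of a queue forming, which is exactly the bottleneck signal the explanation describes.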
Topic: Mindset
A product has three agile teams (API, UI, and QA) delivering weekly. Each team reports higher throughput after “optimizing” its own work, but stakeholders still complain that customer-requested features take weeks to reach production due to handoffs and rework.
Which evidence best validates real progress using a systems-thinking perspective?
Best answer: C
What this tests: Mindset
Explanation: Systems thinking validates improvement at the whole-product level, not within individual components. Measuring end-to-end lead time from request to production directly reflects whether handoffs, queues, and rework across teams are improving. This evidence aligns with stakeholder concerns about time to deliver customer value.
Systems thinking shifts decision-making from optimizing isolated parts (individual team output) to optimizing the performance of the whole delivery system (value stream). In this scenario, stakeholders care about how long it takes for a request to become working software in production, and the delays are caused by cross-team handoffs and rework. An end-to-end flow metric like lead time validates whether the system is improving where it matters: faster, smoother delivery of customer value across API, UI, and QA.
A practical approach is to map the value stream across the three teams, measure lead time from customer request to production, and make handoff queues and rework visible so improvements can target the whole flow rather than any single team.
Component-level output can rise while overall delivery stays slow, which is the classic local-optimization trap.
It measures the whole system’s flow and reveals whether local optimizations improve overall value delivery.
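Measuring the end-to-end metric is straightforward once request and production-deployment dates are captured per feature; this sketch uses invented dates:

```python
from datetime import date

# Hypothetical request log: (requested, deployed-to-production) per feature.
# Lead time spans the whole value stream, not any one team's stage.
features = [
    (date(2024, 3, 1), date(2024, 3, 22)),
    (date(2024, 3, 4), date(2024, 4, 2)),
    (date(2024, 3, 10), date(2024, 3, 28)),
]

lead_times = [(done - requested).days for requested, done in features]
avg_lead_time = sum(lead_times) / len(lead_times)
print(lead_times, round(avg_lead_time, 1))
```

Tracking this one number over time shows whether local team optimizations actually shorten delivery of customer value, or merely move the queue.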
Topic: Product
An agile team is building a self-service onboarding portal for new enterprise customers. Sales wants “more features,” Support wants “fewer tickets,” and the product sponsor says the portal must “deliver value quickly.”
As the agile practitioner, which TWO actions best define product value in terms of outcomes and success criteria?
Correct answers: D, E
What this tests: Product
Explanation: In agile product work, value is defined by the outcomes achieved for users and the business, not by output or activity. The strongest definition pairs clear outcome statements with measurable success criteria, then uses empirical review of those metrics to confirm whether increments are actually producing the intended results.
Value for a product is best expressed as observable outcomes (what changes for customers/users or the business) plus explicit success criteria (how you will know the change happened). In the scenario, stakeholders are describing outputs (“features”) and hoped-for benefits (“fewer tickets”), so the practitioner should align them on outcomes and make success measurable.
Practical steps include writing outcome statements with the stakeholders (for example, reduced onboarding support tickets), agreeing on measurable success criteria for each outcome, and reviewing the metrics together after each increment.
This keeps prioritization and feedback focused on results rather than shipping scope or maximizing internal activity measures.
Value is clarified by agreeing on desired outcomes and how success will be measured.
Success criteria become actionable when targets are set and validated with data over time.
Topic: Delivery
An agile team is starting iteration planning. The product owner wants a high-priority “new payment gateway” story started this iteration to support a release forecast due in 2 weeks. The team has never integrated with this gateway and is unsure about API rate limits and security token handling. The team has a WIP limit of 3 and a Definition of Done that requires automated integration tests.
What is the BEST next action to reduce this risk while protecting flow and quality?
Best answer: D
What this tests: Delivery
Explanation: A timeboxed spike (experiment) is the fastest way to convert unknowns into actionable knowledge when integration, security, and testability are unclear. By defining a learning objective (e.g., authenticate, make a sample transaction, prove an automated test path) and sharing results, the team can refine acceptance criteria and estimates without overcommitting WIP. This preserves flow and supports the near-term release forecast with better data.
When a backlog item carries significant technical or requirements uncertainty, the agile approach is to run a small, timeboxed spike/experiment focused on learning, not delivery. In this scenario, unknown API limits and token handling directly affect feasibility, test strategy, and effort, and the team’s DoD requires automated integration tests. A spike with a clear hypothesis and exit criteria (what you must learn/prove) can quickly surface constraints, inform design/testing choices, and enable accurate refinement and re-forecasting for the 2-week release window.
Typical spike outcomes include a working authentication path, a successful sample transaction, documented API rate limits, and a proven approach for the automated integration tests the DoD requires.
The key takeaway is to timebox learning to reduce uncertainty before committing to a large slice of delivery work.
A short, goal-driven experiment reduces technical uncertainty quickly so the team can re-estimate and plan the real work without breaking WIP or DoD.
Topic: Leadership
Several stakeholders say they “don’t trust what the team says” because decisions happen in side conversations and improvement items from retrospectives never get finished. As the agile lead, you introduce a visible decision log, make team agreements explicit, and review owners/status of retro action items every iteration until they are done.
Which agile practice or technique does this best map to?
Best answer: D
What this tests: Leadership
Explanation: This approach builds trust by increasing transparency (visible decisions and agreements), clarity (explicit expectations), and reliability (tracking action items to completion). The key is consistent follow-through so stakeholders see commitments being honored and learning turning into action.
Trust is strengthened when teams make expectations and decisions explicit and then consistently deliver on what they said they would do. A visible decision log reduces “hidden” agreements, and explicit working agreements clarify how the team will operate. Reviewing retrospective action items each iteration with clear ownership and status creates a closed loop, showing stakeholders that improvement commitments are real and that the team is dependable.
A reliability pattern like this works because it reduces ambiguity and surprises, and it creates predictable behavior stakeholders can count on over time.
Making agreements and decisions transparent and consistently closing the loop builds credibility and trust.
Topic: Mindset
A cross-functional team building an internal platform hears frequent complaints from stakeholders: “We can’t tell what’s in progress, what’s done, or what’s blocked.” The team currently sends a weekly email status summary, and impediments are often discovered late.
Which information radiator best supports making status, progress, and impediments visible day-to-day?
Best answer: A
What this tests: Mindset
Explanation: Use an information radiator that makes work and blockers observable at a glance, without needing a meeting or a report. A visible workflow board (task board/Kanban-style) shows what is in progress, what is done, and what is blocked in near real time. This directly increases transparency and enables faster swarm/decision-making around impediments.
Information radiators are lightweight, highly visible artifacts that make delivery status and problems obvious to the team and stakeholders. In this scenario, the key gap is delayed discovery of impediments and unclear day-to-day progress, so the best fit is a visible workflow board that reflects the current state of work.
A good workflow board makes the key signals explicit at a glance: what is in progress, what is done, what is blocked, and where work is accumulating.
Reports, traceability artifacts, and schedule baselines can be useful, but they are not optimized for continuous, real-time transparency about flow and impediments.
A workflow board radiates real-time status and highlights impediments by making blocked work explicit.
Topic: Product
You are facilitating backlog refinement for a product team. They have one 90-minute session before iteration planning, and the Product Owner wants “rough sizes” on many new items to support ordering.
Exhibit: Backlog snippet (new items)
Items to size today: 32
Detail level: mostly “As a…, I want…”; acceptance criteria TBD
Known work: 3 items similar to recently delivered stories
Team note: “Planning poker on 30+ items will take all day.”
Goal: relative sizing to enable prioritization
Which estimation facilitation approach should you use next?
Best answer: C
What this tests: Product
Explanation: The exhibit shows many new, low-detail backlog items and limited time, with a goal of rough relative sizing for prioritization. Affinity estimation (often with triangulation using a few known reference stories) is designed for quickly grouping and sizing a large set of items without deep discussion on each one.
Use planning poker when the team needs a shared understanding and consensus on a smaller set of reasonably well-understood items, because the discussion is the value and it takes time. The exhibit instead points to many items with acceptance criteria still TBD and a time-boxed need for “rough sizes” to support ordering. Affinity estimation fits this situation because it enables fast, relative sizing by sorting items into size buckets and then calibrating with a few reference stories the team already knows.
A practical flow is to sort the items quickly into relative size buckets, calibrate the buckets against the three recently delivered reference stories, and flag only the outliers for deeper discussion later.
Key takeaway: match the facilitation technique to the purpose—speedy coarse sizing across many items vs. deeper shared understanding for fewer items.
Affinity estimation quickly creates relative sizes across many low-detail items using known reference stories.
Topic: Mindset
An agile team has missed its last two iteration goals. Work in progress keeps growing, defects are increasing, and there is frequent tension between developers and testers about “who caused” issues.
In daily syncs, most team members stay quiet and wait for the tech lead to speak. When a defect escapes, the manager asks in a group chat for the name of the person responsible. Several risks raised earlier have stopped being mentioned.
What is the most likely underlying cause?
Best answer: D
What this tests: Mindset
Explanation: The repeated silence, reliance on a single voice, and public blame signals indicate low psychological safety. When people fear negative consequences for surfacing problems, they withhold risks and impediments until late, which drives growing WIP, rework, and missed iteration goals. The delivery symptoms are consistent with a safety problem rather than a purely process or planning issue.
Psychological safety issues show up as inhibited communication: people stop raising risks, avoid sharing bad news, and let one or two “safe” voices speak for the group. In this scenario, public calls for “who is responsible” and recurring blame language create fear, which reduces early escalation and cross-role collaboration.
When impediments and quality concerns are hidden until late, the team accumulates partially done work (higher WIP), discovers defects later (more rework), and misses iteration goals. Process tweaks (like WIP limits or tighter quality rules) can help, but they won’t stick if people don’t feel safe to surface problems and challenge decisions. The first need is to restore a safe environment for transparency and learning.
Fear of blame suppresses speaking up, so problems surface late as rework, growing WIP, and missed goals.
Topic: Delivery
A Kanban team is delivering a customer-facing API. Over the last month, several items sat in “Code Review” for days, creating a delivery risk and frequent carryover.
The team added an explicit WIP limit for “Code Review,” swarms to clear blocked work daily, and asks the reviewer to pull the next item immediately when capacity frees up. After two weeks, the agile practitioner wants evidence to decide whether these actions are reducing the risk and whether further adjustment is needed.
Which metric or artifact best validates improvement in this context?
Best answer: C
What this tests: Delivery
Explanation: To monitor an impediment and adjust actions, you need evidence that the constraint is actually improving the system’s ability to deliver. A cycle-time trend that includes blocked time provides direct, outcome-based feedback on whether work is flowing faster through the “Code Review” step and whether the delivery risk is being reduced.
When a delivery risk is caused by an impediment in the workflow, the most useful validation is a flow metric that reflects end-to-end outcomes, not activity volume. Here, the bottleneck is time in “Code Review,” so the team should inspect whether items are spending less elapsed time in the system and less time blocked/queued.
A control chart (cycle time over time) is strong evidence because it shows the elapsed time each item spends in the system, including blocked time, and reveals whether the trend is actually improving after the changes.
Measures of effort, attendance, or internal activity can look “busy” without proving that delivery risk and flow are improving.
Cycle time (including blocked time) directly shows whether the review bottleneck is shrinking and flow risk is decreasing.
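As an illustration of why blocked time belongs in the measure, here is a small sketch that summarizes cycle time and the share of it spent blocked. The before/after records are hypothetical, not real team data:

```python
# Hypothetical per-item records (days), illustrative only.
# Each record: (item, total_cycle_time_days, days_spent_blocked)
before = [("A", 9, 4), ("B", 11, 6), ("C", 8, 3)]
after  = [("D", 5, 1), ("E", 6, 2), ("F", 4, 1)]

def summarize(records):
    cycle = [c for _, c, _ in records]
    blocked = [b for _, _, b in records]
    return {
        "avg_cycle_time": sum(cycle) / len(cycle),
        # fraction of total elapsed time spent blocked/queued
        "blocked_share": sum(blocked) / sum(cycle),
    }

print(summarize(before))  # high average, large blocked share
print(summarize(after))   # both should drop if the WIP limit and swarming work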
Topic: Leadership
During a retrospective, developers say QA is “blocking releases with nitpicky defects,” while QA says developers “throw work over the wall unfinished.” The functional manager asks you to step in and “settle the argument” so delivery can speed up.
What should you ask or verify FIRST before deciding how to address the conflict?
Best answer: B
What this tests: Leadership
Explanation: This conflict pattern often signals a systemic issue rather than a one-off interpersonal disagreement. Before facilitating solutions, confirm whether role clarity and performance incentives are creating competing goals (for example, one group rewarded for speed and another for defect prevention). That information determines whether you need to adjust policies and working agreements, not just mediate behavior.
When a conflict repeatedly shows up as “they block us” vs. “they dump incomplete work,” it’s frequently caused by system design: unclear boundaries/ownership (who defines and verifies “done”) and misaligned incentives (how people are measured and rewarded). Asking first about roles and incentives helps you determine whether the team is optimized as a whole or split into competing sub-optimizations.
Once understood, you can facilitate updates such as shared working agreements on who defines and verifies "done," clarified handoff ownership, and incentives realigned to whole-team outcomes rather than role-level speed or defect counts.
Flow metrics and customer impact can be useful later, but they don’t explain why the groups are incentivized to blame each other.
Misaligned role clarity or incentives can drive recurring blame, so validate these before proposing fixes.
Topic: Delivery
Midway through an iteration, your team demos a thin slice of a new onboarding flow to a pilot customer. The customer says step 2 is confusing and asks for inline guidance, and they also request a less important UI theme change. The team’s WIP limit is already reached and the iteration goal is “reduce onboarding drop-off.” The product owner asks what to do next.
What is the BEST next action?
Best answer: A
What this tests: Delivery
Explanation: The best response is to translate the customer’s feedback into clear backlog items and re-prioritize based on value toward the iteration goal. Because WIP is already at the limit, the team should negotiate scope by swapping work rather than starting additional items. This keeps flow stable while maximizing learning and outcome impact from the pilot feedback.
Customer feedback is most valuable when it is quickly turned into specific backlog items and used to adjust priorities based on outcomes. Here, the pilot feedback directly affects the iteration goal (drop-off), so it should be made actionable (e.g., a story for inline guidance with acceptance criteria) and prioritized above lower-value cosmetic requests. Since the team is already at its WIP limit, the right move is not to start more work, but to renegotiate what is in progress or planned for the iteration (swap, split, or stop lower-value work) while keeping quality and agreements intact.
A key takeaway is to refine and reorder the backlog using feedback, then manage flow via WIP-aware tradeoffs rather than adding work on top of existing commitments.
It converts feedback into actionable, prioritized backlog items while respecting WIP and focusing on the iteration goal.
Topic: Product
A team is building an employee self-service portal. After three iterations, each increment has been deployed, but analytics show <5% adoption and HR call volume has not decreased (the agreed outcome).
Constraints: the team must deliver usable improvements before a fixed enrollment deadline.
What is the BEST next action?
Best answer: D
What this tests: Product
Explanation: The increment is being delivered but not creating the intended outcomes, which is a clear signal that it is not delivering value. The best next step is to re-establish a fast feedback loop with real end users, learn what is blocking adoption, and then adapt backlog priorities to maximize value within the enrollment deadline.
When increments are “done” but outcomes (adoption, reduced calls, improved task completion) do not move, the problem is usually value discovery/fit rather than delivery volume. The fastest corrective action is to validate the current increment with the target users, identify what prevents them from succeeding, and then adjust the backlog (and acceptance criteria) to optimize for the outcome that stakeholders care about. This also supports flow: by learning what to change next, the team can stop starting low-confidence work and focus on finishing the highest-value improvements within WIP limits. The key takeaway is to use evidence-based feedback to adapt the next increment, not to push more scope or weaken quality controls.
Because the increment is not producing the agreed outcomes, immediate user validation guides backlog changes toward value before starting more work.
Topic: Leadership
A product team’s payments service has had several production incidents caused by rushed hotfixes. Stakeholders are pressuring for a release freeze, but the product owner insists on continuing to deliver high-value features.
As the agile practitioner, you facilitate agreement on a resolution strategy: smaller, safer releases with rapid rollback, so the team can keep delivering while reducing disruption. Which metric/evidence best validates that this strategy is working?
Best answer: D
What this tests: Leadership
Explanation: To validate a strategy aimed at maximizing value with minimal disruption, use outcome-based evidence tied to release safety and recovery. Tracking change failure rate and mean time to restore (MTTR) shows whether smaller releases and rollback are actually reducing customer impact while still allowing frequent delivery.
When resolving a delivery-vs-stability conflict, the goal is to keep value flowing while reducing the disruption customers feel. The best validation evidence is an outcome metric directly connected to the risk you’re mitigating (production incidents) and the capability you’re trying to preserve (ongoing delivery). Change failure rate shows whether deployments are becoming safer, and MTTR shows whether any failures are being contained and recovered quickly—together validating that the resolution strategy is delivering value with less disruption.
Prefer evidence that is outcome-based, tied directly to customer-experienced impact, and observable as a trend over successive releases.
Activity outputs can support improvement work, but they don’t validate that customers experience fewer disruptions.
These outcome-focused measures show whether safer releases reduce customer-impacting disruption while enabling continued delivery.
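A minimal sketch of computing both measures from a deployment log. The ids, incident flags, and restore times below are invented for illustration:

```python
# Hypothetical deployment log, illustrative only. Each entry:
# (deploy_id, caused_incident, minutes_to_restore or None)
deploys = [
    ("d1", False, None),
    ("d2", True, 45),
    ("d3", False, None),
    ("d4", True, 15),
    ("d5", False, None),
]

restore_times = [m for _, failed, m in deploys if failed]

# Change failure rate: fraction of deploys that caused a production incident.
change_failure_rate = len(restore_times) / len(deploys)

# MTTR: mean time to restore, over failed deploys only.
mttr_minutes = sum(restore_times) / len(restore_times)

print(change_failure_rate)  # 0.4
print(mttr_minutes)         # 30.0
```

Trending both numbers down while deployment frequency holds steady is the evidence that smaller releases plus rollback are reducing customer-felt disruption without freezing delivery.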
Topic: Mindset
An agile team uses a Kanban board and cumulative flow diagram. For the last three iterations, the team size and planned scope have stayed about the same, yet they have missed the iteration goal twice.
Signals on the board show WIP rising from 6 to 14 items and blocked items rising from 1 to 7. Many blocked items are waiting on clarifications or handoffs between developers and testers, and the Daily Scrum often turns into arguments about whether an item is “done enough” to move forward. Team members frequently pull new work when their current item is blocked because they “don’t want to be idle.”
What is the most likely underlying cause the agile practitioner should address first?
Best answer: A
What this tests: Mindset
Explanation: Rising WIP plus growing blockers and repeated “done” disputes point to missing explicit working agreements that govern flow and quality. When teams lack WIP limits and shared exit criteria, they tend to start new items instead of swarming to unblock and finish, which increases handoffs, rework, and missed goals.
These transparency signals (increasing WIP, more blocked work, recurring rework, and conflict at handoffs) most strongly indicate weak or missing explicit policies for how work flows. If the team hasn’t agreed on WIP limits and clear “done/ready” criteria, individuals will default to maximizing utilization (pulling new work when blocked), which inflates WIP and amplifies coordination and quality friction.
An appropriate immediate response is to facilitate an explicit policy reset focused on flow: agree on WIP limits, a shared Definition of Done and "ready" criteria, and a rule to swarm on blocked items before pulling new work.
This addresses the system behavior driving the visible symptoms rather than treating the symptoms themselves.
Without explicit WIP limits and a shared definition of done, the team optimizes for staying busy, creating excess WIP, more handoff friction, and rework.
Topic: Delivery
A team uses a Kanban-style board for production support. Over the last two weeks, their cumulative flow shows WIP rising from 8 to 18 items, and several tickets have been “In Progress” for 10+ days. Stakeholders are complaining about unpredictable delivery.
Which TWO actions should the agile practitioner recommend to correct the overload and restore flow? (Select TWO)
Correct answers: B, D
What this tests: Delivery
Explanation: Rising WIP and aging items indicate the system is overloaded and work is being started faster than it is finished. The best correction is to reduce WIP by finishing started work and to enforce WIP limits so the team can sustain a pull-based flow. These actions directly address the root cause rather than masking it with new lanes or statuses.
Aging work items plus rising WIP are classic flow signals of overload: too many items are in progress, queues grow, and cycle time becomes unpredictable. In a pull system, the fastest way to recover is to stop starting and start finishing, especially the oldest or blocked items, to reduce queues and unblock flow. Then, make the policy explicit by enforcing WIP limits that match real capacity so the team cannot continually overcommit by pulling more work than it can complete.
Key takeaway: reduce WIP now and prevent it from creeping back up with clear WIP-limit policies.
Reducing WIP immediately by finishing work-in-progress shortens cycle time and relieves overload.
WIP limits prevent starting too much work, making overload visible and stabilizing flow.
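A sketch of how "stop starting, start finishing" can be made operational, assuming a hypothetical ticket list and a 10-day aging policy (both are assumptions, not scenario data):

```python
# Hypothetical in-progress tickets with their age in days, illustrative only.
wip = [("T-101", 12), ("T-102", 3), ("T-103", 10), ("T-104", 1), ("T-105", 14)]

AGING_THRESHOLD_DAYS = 10  # assumed policy: flag items in progress 10+ days

# "Stop starting, start finishing": surface the oldest items and work them first.
aging = sorted(
    (t for t in wip if t[1] >= AGING_THRESHOLD_DAYS),
    key=lambda t: t[1],
    reverse=True,
)
print([name for name, _ in aging])  # ['T-105', 'T-101', 'T-103']
```

Reviewing this list daily, while refusing new pulls until the queue shrinks below the WIP limit, attacks the overload directly instead of masking it with new board columns.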
Topic: Delivery
Mid-iteration, an agile team learns that a third-party service they integrate with will deprecate an endpoint in two weeks. The team is already seeing intermittent failures in the test environment, and the Sprint Goal depends on this integration.
As the agile practitioner, which action SHOULD you AVOID when managing this risk and impediment?
Best answer: A
What this tests: Delivery
Explanation: When new information changes the likelihood or impact of a risk, the team should respond quickly by making it visible and adapting actions. The scenario shows an active impediment affecting delivery, so postponing discussion until the retrospective is an anti-pattern. Effective risk management in agile emphasizes rapid inspection, transparency, and timely adaptation to protect outcomes.
Agile risk and impediment management relies on short feedback loops: as soon as new information increases risk exposure, the team inspects the situation and adapts. Here, endpoint deprecation plus current failures signals both a present impediment and a near-term risk to the Sprint Goal, so the right response is to surface it, collaborate on options, and adjust work.
Appropriate immediate responses include raising the issue with the product owner right away, making the risk visible on the board, and collaborating on options such as migrating to the replacement endpoint, re-scoping, or adjusting the Sprint Goal.
Waiting for a later ceremony defers learning and action, increasing failure probability and reducing available options.
Delaying action ignores new information and increases the chance of missing the Sprint Goal.
Topic: Product
An agile team is preparing for sprint planning in 2 days. The top backlog item is written as a single story: “Enable customer onboarding,” and it includes UI changes, API work, data migration, and integration with a new identity provider. The team says they cannot size it and is worried about unknowns in the identity-provider integration.
What is the best next step?
Best answer: C
What this tests: Product
Explanation: The item is too large and mixes multiple concerns, making it hard to estimate and plan. The best next step is to refine it by splitting into thin slices of user value while also carving out technical enabling or discovery work to address the highest-risk integration unknowns before committing. This improves forecastability and preserves the ability to deliver value incrementally.
Backlog refinement is the right moment to split oversized items so the team can forecast and the product can deliver value incrementally. When a single story combines multiple workflows (UI, API, migration, integration) and contains significant uncertainty, split it into thin vertical slices that each deliver user-observable value, and separate out technical enabling/discovery work (for example, a spike or an integration enabler) to reduce risk early.
A practical sequence is to split the story into thin vertical slices that each deliver user-observable value, carve out a spike for the identity-provider unknowns, and then size and order the resulting items before sprint planning.
Deferring splitting until after commitment or escalating prematurely reduces transparency and increases planning risk.
Splitting by user value and isolating the risky/technical enabling work creates smaller, estimable items and surfaces uncertainty before sprint planning.
Topic: Delivery
After a pilot release, two key stakeholders give conflicting feedback. Sales wants a new discount workflow to close deals; Support wants stability work to reduce ticket volume. The team has capacity for only one change next iteration and must meet a fixed regulatory reporting deadline.
Which agile facilitation practice best matches how the agile practitioner should handle this conflict?
Best answer: C
What this tests: Delivery
Explanation: When feedback conflicts, agile decision-making should be anchored to shared goals and informed by evidence. By explicitly surfacing constraints (capacity and regulatory deadline) and reviewing pilot data, the group can make a transparent trade-off and preserve focus on delivering the most valuable outcome next.
The core concept is evidence-based, goal-driven prioritization: when stakeholders disagree, the team and product ownership avoid opinion battles by using objective criteria. In this scenario, the practitioner should facilitate alignment on (1) the product/iteration goal, (2) what the pilot data and feedback indicate about outcomes (revenue impact, ticket drivers, risk), and (3) constraints such as capacity and the regulatory deadline. With these inputs visible, stakeholders can make an explicit trade-off and select the next increment that best advances the goal while respecting constraints.
A decision made this way is transparent and repeatable, and it supports early feedback by treating it as data for learning rather than a mandate to satisfy every request immediately.
It aligns stakeholders on objective criteria (goal, evidence, constraints) to choose the highest-value next step.
Topic: Product
A product team is under pressure to deliver new features, but over the last two releases it has become harder to make changes and hotfixes are more frequent. The team started investing in automated tests and refactoring each iteration while still releasing features.
Which evidence best validates they are balancing short-term delivery with long-term product maintainability?
Best answer: C
What this tests: Product
Explanation: To validate balance, you need evidence that delivery remains fast while the product stays stable as changes are made. A trend of lead time for change paired with change failure rate demonstrates whether maintainability investments are improving the ability to deliver quickly without increasing failures. This directly reflects long-term product health without relying on activity or output counts.
Balancing short-term delivery with long-term product health requires evidence that the system remains easy and safe to change while still delivering value. Metrics that combine speed and stability are strong validation because maintainability shows up as faster, more predictable change delivery with fewer failures.
A practical validation approach is to inspect trends in lead time for change and change failure rate over multiple releases/iterations.
If lead time improves or stays healthy while change failure rate decreases or stays low, the team is improving maintainability without sacrificing delivery. Pure output or activity measures can increase even when the codebase is getting harder to change.
Together these show delivery speed and whether changes remain safe and maintainable over time.
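A small sketch of such a trend check, comparing the first and second halves of a release history. The six releases and their values are invented for illustration:

```python
# Hypothetical per-release measurements, illustrative only.
# (release, median_lead_time_for_change_days, change_failure_rate)
releases = [
    ("r1", 9.0, 0.30), ("r2", 8.5, 0.25), ("r3", 8.0, 0.22),
    ("r4", 7.0, 0.15), ("r5", 6.5, 0.12), ("r6", 6.0, 0.10),
]

def avg(vals):
    return sum(vals) / len(vals)

first_half, second_half = releases[:3], releases[3:]

# Speed: are changes reaching production faster than before?
speed_improving = avg([lt for _, lt, _ in second_half]) < avg([lt for _, lt, _ in first_half])

# Stability: are failures flat or falling while speed improves?
stability_improving = avg([c for _, _, c in second_half]) <= avg([c for _, _, c in first_half])

balanced = speed_improving and stability_improving
print(balanced)  # True: faster changes without more failures
```

The pairing matters: either number alone can be gamed (ship less to look stable, or ship recklessly to look fast), but improving both together is hard to fake.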
Topic: Leadership
A delivery team supports an internal platform. Recently, production issues are frequently “saved” by one senior engineer who fixes problems alone while the rest of the team waits for direction. Cycle time and morale are suffering, and leadership wants to promote collective ownership of outcomes instead of individual heroics.
Which practice is the BEST match for this need?
Best answer: D
What this tests: Leadership
Explanation: Swarming is designed to move work out of individual silos by having multiple team members collaborate on the most important item until it is finished. This reduces reliance on a single “hero,” spreads context, and reinforces that the team—not an individual—owns outcomes.
Collective ownership is strengthened when the system makes “one person saves the day” unnecessary. Swarming is a simple, visible policy: when a critical issue (or top backlog item) is pulled, the team collaborates to get it to “Done” rather than starting parallel work and waiting on a specialist. This accelerates learning, reduces bottlenecks created by expertise silos, and builds shared accountability for results.
Practically, the team can agree to swarm on critical production issues as a group, pair during fixes to spread context, and hold off on pulling parallel work until the top item reaches "Done."
Flow controls or rotating ownership can help, but they don’t directly replace heroics with shared responsibility the way swarming does.
Swarming shifts responsibility from an individual to the whole team by finishing critical work together and sharing knowledge.
Topic: Mindset
A manager asks your team to “pick an agile approach” for a new delivery group that will handle customer-reported production issues and small enhancements. Leadership is debating Scrum because they want predictability, while engineers prefer Kanban due to frequent interruptions.
Before recommending Scrum, Kanban, or a hybrid, what should you verify first?
Best answer: B
What this tests: Mindset
Explanation: Start by understanding the nature of the work: how much is unplanned, how variable it is, and what response-time expectations exist. Those constraints determine whether a pull-based flow system (often better for unpredictable interrupt work), timeboxed iterations (often better for more planned work), or a hybrid is most appropriate.
Selecting Scrum, Kanban, or a hybrid should be based on work type, volatility, and constraints. In this scenario, the biggest uncertainty is how unpredictable the incoming work is and whether there are explicit response-time expectations (for example, urgent production issues) that will disrupt timeboxed planning.
A good first verification is to understand how much of the work arrives unplanned, how variable the demand is, and what response-time expectations apply to urgent production issues.
With that information, you can reason toward Kanban for high-interrupt, continuous flow; Scrum for more stable, planable batches; or a hybrid to separate classes of work. Tooling or estimation details come later.
Work type volatility and response-time constraints drive whether flow-based Kanban, timeboxed Scrum, or a hybrid best fits.
Topic: Product
A team releases a new increment to customers every two weeks. The last three increments met the iteration goals and passed all acceptance tests, but usage analytics show the new features are rarely used and customer support is still dominated by the original pain points. Stakeholders are asking the team to “just deliver more” next iteration.
As the agile practitioner, what is the best next step?
Best answer: C
What this tests: Product
Explanation: The signals show a value problem (low adoption and unresolved customer pain), not a delivery problem. The most effective next step is to inspect actual outcomes with stakeholders using feedback and analytics, then adapt by re-prioritizing and refining the backlog before making new commitments. This restores a feedback-driven path to value.
Agile increments deliver value when they change customer outcomes (adoption, satisfaction, reduced pain), not just when they meet acceptance criteria. Here, the team is producing “potentially shippable” work, but evidence shows the product is not improving the customer experience.
The best next step is to run an explicit inspect-and-adapt loop on value: review the analytics and customer feedback with stakeholders, identify why the new features go unused, and re-prioritize the backlog toward the original pain points before committing to new work.
Committing to more work or escalating governance before inspecting outcomes risks optimizing output while continuing to miss value.
The increment is “done” but not valuable, so the next step is to inspect real outcomes and adapt the backlog based on validated learning before committing to more work.
Topic: Delivery
Your team uses a Kanban system and is asked to improve delivery speed by visualizing end-to-end flow and addressing delays.
Exhibit: Board snapshot (today)
| Stage | WIP (limit) | Avg age (days) |
|---|---|---|
| Ready | 14 (20) | 3 |
| Dev | 6 (6) | 2 |
| Code Review | 4 (4) | 2 |
| Test | 11 (5) | 7 |
| Deploy | 1 (2) | 1 |
Based on the exhibit, what is the best next action to reduce queues and delays?
Best answer: B
What this tests: Delivery
Explanation: The exhibit shows the biggest queue and longest aging work in Test, indicating a downstream bottleneck that is slowing end-to-end flow. The best action is to stop feeding the bottleneck and swarm to clear it, then learn what is causing the slow Test stage. This directly reduces lead time by removing the largest delay.
Visualizing end-to-end flow means looking across all workflow states for where work accumulates (queues) and where items sit the longest (aging). In the exhibit, Dev and Code Review are at their WIP limits with low average age, but Test is more than double its WIP limit and has the highest average age. That combination signals Test is constraining flow and causing delays for everything upstream.
A practical flow-based response is to stop feeding the bottleneck, hold upstream stages at their WIP limits, and swarm to clear the Test queue while investigating why that stage is slow. Raising WIP limits or pushing more work into the system typically increases wait time rather than improving flow.
Test is the clear bottleneck with WIP far over its limit and the highest item age, so focusing capacity there will reduce end-to-end delay.
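The bottleneck reasoning above can be sketched in a few lines. The board data comes from the exhibit; the scoring rule (WIP-to-limit ratio first, then average age) is an illustrative assumption, not a PMI-prescribed formula.

```python
# Sketch: spotting the bottleneck in the board snapshot above.
# Scoring by (WIP / limit, avg age) is an illustrative heuristic.

stages = [
    # (stage, wip, wip_limit, avg_age_days)
    ("Ready", 14, 20, 3),
    ("Dev", 6, 6, 2),
    ("Code Review", 4, 4, 2),
    ("Test", 11, 5, 7),
    ("Deploy", 1, 2, 1),
]

def bottleneck(board):
    """Return the stage with the worst combination of queue overload and aging."""
    return max(board, key=lambda s: (s[1] / s[2], s[3]))

stage, wip, limit, age = bottleneck(stages)
print(f"Bottleneck: {stage} (WIP {wip} vs limit {limit}, avg age {age} days)")
# → Bottleneck: Test (WIP 11 vs limit 5, avg age 7 days)
```

Only Test exceeds its limit (11 against 5), so the heuristic and a visual scan of the board agree.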
Topic: Mindset
An agile team is split between a co-located group at headquarters and several members working remotely across time zones. The team has recently had surprises about priority changes and incomplete handoffs.
Which communication approach SHOULD be AVOIDED to maintain transparency for everyone?
Best answer: C
What this tests: Mindset
Explanation: Transparency requires that work status, decisions, and changes are visible and accessible to all team members regardless of location. Approaches that depend on co-located, informal, in-person communication exclude remote participants and create uneven information flow. Using shared, persistent channels and intentional coordination practices supports a common understanding across the team.
To build transparency in distributed or hybrid teams, information needs to be both shared and persistent so people can access it asynchronously. Co-located teams can benefit from osmotic communication, but relying on it as the primary mechanism creates blind spots for remote members and increases the risk of missed priority changes and incomplete handoffs.
Effective strategies for transparency in hybrid settings include shared, persistent channels, visible and current work status, and intentional coordination practices that work across time zones.
The key takeaway is to prefer inclusive, accessible communication patterns over location-dependent updates.
This creates an information gap for remote members because critical updates are not visible or persistent for everyone.
Topic: Mindset
Your agile team is moving from ad hoc status emails to lightweight transparency for an internal customer platform. Leaders are asking for “better visibility,” the product owner wants “fewer interruptions,” and a compliance partner says they need “regular updates.”
As the agile practitioner, what should you ask FIRST before deciding what to share and how often?
Best answer: A
What this tests: Mindset
Explanation: To build transparency, first clarify the purpose of information sharing: enabling stakeholders to make timely decisions. Asking who needs which information, tied to specific decisions and timing, lets you design an appropriate cadence and format without over-reporting. This prevents both interruptions and information gaps.
Transparency is most effective when it is “pullable” and decision-oriented rather than a one-size-fits-all reporting ritual. In this scenario, the requests are vague and potentially conflicting, so the first step is to clarify stakeholder groups, the decisions they need to make, and the time sensitivity of those decisions. From that, you can choose appropriate mechanisms (e.g., visible board, release notes, short demos, risk/issue signals) and frequencies (real-time, per iteration, monthly) that minimize interruption while meeting governance needs. Tool choice and report templates come after you understand the information demand and cadence drivers. The key is aligning information radiators and communication cadence to stakeholder decision cycles.
Information needs and frequency should be driven by stakeholder decisions and timing, so you must clarify who needs what to decide and when.
Topic: Leadership
Mid-iteration, two developers tell you they are “done” with a user story and want to pull the next item. The product owner says, “I can’t accept it yet; it doesn’t behave like we discussed,” but the story has only a short title in the backlog and no written acceptance criteria. As the agile practitioner, what should you do FIRST?
Best answer: D
What this tests: Leadership
Explanation: Before deciding whether the story is truly done or what to do next, the team needs a transparent, shared definition of what “done for this story” means. By first clarifying acceptance criteria with the product owner, the team can inspect the increment against agreed conditions and adapt quickly to close gaps. This prevents pushing ambiguity downstream as rework.
Transparency comes from making expectations explicit and visible so the team can inspect outcomes and adapt their plan. In this scenario, the core problem is ambiguous “done”: the story lacks acceptance criteria, so developers and the product owner are using different mental models.
The first move is to clarify what conditions make the story acceptable (acceptance criteria and, ideally, examples/tests). Then the team can inspect the increment against those agreed conditions and adapt quickly to close any gaps.
Trying to plan rework, escalate, or change flow limits before aligning on acceptance criteria treats symptoms and reduces transparency, increasing the likelihood of churn and hidden work.
Clear, shared acceptance criteria make the work transparent and enable immediate inspection and adaptation before moving on.
Topic: Product
An agile team is building a patient-facing mobile app for a clinic. The product owner has been prioritizing the backlog using only user-visible features and revenue impact.
During a sprint review, an internal privacy officer notes that upcoming functionality will handle sensitive data and must meet new privacy and audit requirements to be compliant. The team has not been capturing these constraints in stories or acceptance criteria.
What is the best next step?
Best answer: C
What this tests: Product
Explanation: When security, privacy, compliance, or sustainability constraints materially affect outcomes, they must be part of how value is defined and prioritized. The most effective next step is to incorporate those constraints into backlog items (and their acceptance criteria/definition of done) and then re-order the backlog based on the updated definition of value. This keeps delivery aligned with both customer outcomes and required constraints.
In agile product management, “value” is not only user-visible features; it also includes constraints that protect the customer and the organization (privacy, security, compliance, sustainability) when they are relevant. Since the privacy officer identified requirements that affect whether the product can be legally and safely released, the next step is to inspect and adapt at the product level by updating the backlog.
A practical next step is to facilitate a refinement session with the product owner and appropriate specialists to capture the privacy and audit requirements in backlog items, acceptance criteria, and the definition of done, and then re-order the backlog against the updated definition of value.
Pushing these constraints to the end or outside the backlog delays feedback and increases rework and release risk.
Refining the backlog with privacy/compliance input updates value definition and ordering before more work is built.
Topic: Delivery
In agile delivery, what is the best way to use a team’s historical velocity (or Kanban throughput) when planning upcoming work?
Best answer: D
What this tests: Delivery
Explanation: Velocity and throughput describe what happened in the past and are useful for forecasting likely outcomes. The appropriate use is to forecast in ranges with uncertainty and update the forecast as conditions change and new data is available. This avoids turning metrics into commitments that drive unhealthy behavior.
Velocity (work completed per iteration) and throughput (items completed per unit of time) are lagging indicators of a team’s recent delivery capability under specific conditions. Because the work, team capacity, and system constraints vary, these metrics should inform a forecast (often expressed as a range and confidence) and be re-forecast as new information appears. Treating a metric as a commitment encourages gaming, reduces quality, and hides risk, while forecasting supports transparent planning and trade-off decisions (scope/date/capacity) based on evidence.
Velocity/throughput is an empirical input for probabilistic forecasting, not a fixed promise of future delivery.
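One common way to turn historical throughput into a range forecast is a simple Monte Carlo simulation. The sample throughput history, backlog size, and percentile choices below are illustrative assumptions, not values the exam outline prescribes.

```python
# Sketch: probabilistic forecast from historical throughput.
# Sample data and percentiles are illustrative assumptions.
import random

random.seed(42)

historical_throughput = [6, 8, 5, 9, 7, 6]  # items completed per iteration

def forecast_iterations(backlog_size, history, trials=10_000):
    """Monte Carlo: how many iterations to finish `backlog_size` items?"""
    results = []
    for _ in range(trials):
        remaining, iterations = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # resample past capability
            iterations += 1
        results.append(iterations)
    results.sort()
    # Report a range with confidence instead of a fixed promise.
    return results[int(trials * 0.50)], results[int(trials * 0.85)]

p50, p85 = forecast_iterations(40, historical_throughput)
print(f"50% likely within {p50} iterations; 85% likely within {p85}")
```

The output is a range ("likely between the 50th- and 85th-percentile result"), which is the kind of hedged forecast the explanation recommends over a single-date commitment.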
Topic: Product
A product team has delivered new features quickly for several iterations, but lead time is rising and defect fixes are consuming more effort each sprint. The product owner wants to keep the current feature delivery rate. The team proposes reserving a fixed portion of each iteration (for example, 15–20%) for refactoring and expanding automated tests, and tracking this work as backlog items alongside features.
Which agile technique best matches this approach?
Best answer: D
What this tests: Product
Explanation: This is an example of intentionally budgeting capacity for product health work (refactoring and test automation) while still delivering features. By making technical-debt/enabler items visible in the backlog and reserving capacity, the team creates a sustainable balance between short-term delivery and long-term maintainability.
Balancing near-term feature output with long-term product health often requires making maintainability work explicit and funding it continuously. Capacity allocation does this by reserving a predictable slice of delivery capacity for enablers such as refactoring, test automation, and reducing technical debt, while the remaining capacity is used for customer-visible features. This creates transparency for stakeholders (trade-offs are clear) and helps prevent quality and maintainability erosion that later slows delivery.
Key point: improving maintainability is treated as planned backlog work, not “extra” work squeezed in after features.
Explicitly reserving a percentage of iteration capacity for refactoring/testing balances near-term features with long-term maintainability.
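The capacity-allocation arithmetic is simple; the sketch below uses a hypothetical velocity of 30 points and a 20% enabler reservation, both illustrative numbers within the 15–20% range the scenario mentions.

```python
# Sketch of the capacity-allocation arithmetic described above.
# Velocity and the 20% enabler share are illustrative assumptions.

velocity = 30          # points the team typically completes per iteration
enabler_share = 0.20   # reserved for refactoring / test automation

enabler_capacity = round(velocity * enabler_share)   # planned product-health work
feature_capacity = velocity - enabler_capacity       # remaining for features

print(f"Plan ~{enabler_capacity} points of enablers, ~{feature_capacity} of features")
# → Plan ~6 points of enablers, ~24 of features
```

Because the enabler slice is planned up front and tracked as backlog items, stakeholders can see the trade-off each iteration instead of discovering it as slipped feature work.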
Topic: Leadership
Mid-iteration, an intermittent performance issue is reported in production. The team can either do a risky multi-week refactor or try a small configuration change that might resolve most cases. The release is due in two weeks, and the product owner wants customer impact reduced quickly.
Which approach best matches an agile problem-resolution strategy that maximizes value while minimizing disruption?
Best answer: C
What this tests: Leadership
Explanation: A timeboxed spike supports fast learning with limited disruption, allowing the team to validate the smallest viable fix and reduce customer impact quickly. It preserves the release focus while creating evidence to decide whether a larger refactor is warranted. This aligns with agile’s preference for incremental, reversible change and rapid feedback.
When multiple solution paths exist, an agile value-maximizing resolution strategy is to reduce uncertainty cheaply and quickly, then decide based on evidence. A timeboxed spike (or small experiment) keeps the team from overcommitting to a disruptive, high-risk change while still addressing the most urgent customer impact.
A practical approach is to timebox a spike on the small configuration change, measure whether it reduces customer impact in production-like conditions, and use that evidence to decide whether the larger refactor is warranted.
This preserves flow toward the near-term release and avoids “big-batch” rework unless data shows it is necessary.
A timeboxed spike enables a data-driven, low-disruption experiment before committing to larger change.
Topic: Mindset
A product team is building a new mobile onboarding experience and wants to experiment early to reduce uncertainty. During planning, several team members describe how they will use an MVP, prototype, spike, and product increment.
Which statement is INCORRECT and should be avoided?
Best answer: A
What this tests: Mindset
Explanation: An increment is a slice of working product that meets the definition of done and is potentially releasable. Prototypes and spikes are primarily learning tools and may be throwaway, while an MVP is a minimal but usable product aimed at validated learning with real users. Skipping done-quality turns an increment into unfinished work rather than a usable product slice.
These four artifacts differ by intent and what you expect to learn. A spike is a time-boxed research activity to reduce technical uncertainty (feasibility, approach, risk); the output is learning, not a shippable feature. A prototype is a lightweight representation used to learn about usability or requirements (often without full integration) and may never be shipped. An MVP is the smallest end-to-end product that real users can use to validate assumptions about value and adoption. A product increment is the integrated result of completed work that meets the definition of done and is potentially releasable; learning can come from releasing or reviewing it, but it cannot bypass quality expectations.
Key takeaway: treat “increment” as done, integrated product—not a learning-only partial build.
A product increment must meet the team’s definition of done and be potentially releasable; learning does not justify incomplete quality.
Topic: Mindset
A cross-functional team has stopped proposing experiments after a recent production issue led to finger-pointing in a stakeholder meeting. The product owner wants the team to resume trying small improvements, but team members say, “If it doesn’t work, we’ll get blamed again.”
Which practice is the BEST match to address this and keep experimentation and learning possible?
Best answer: D
What this tests: Mindset
Explanation: Fear of failure blocks experimentation because people perceive high personal risk. A safe-to-fail experiment explicitly limits blast radius with guardrails and reframes outcomes as learning against a hypothesis. This restores psychological safety while still allowing the team to try, observe, and adapt.
The core need is psychological safety: people must believe they can take reasonable risks without punishment. A safe-to-fail experiment is designed to make risk manageable and learning explicit by constraining impact and defining what will be learned regardless of outcome.
Use a lightweight experiment charter that states the hypothesis, the guardrails limiting blast radius, the timebox, and what will be learned regardless of outcome.
By shrinking scope and making “failure” an expected data point, the team can re-engage in experimentation even after a blame-filled incident; broader flow or analysis activities don’t directly reduce the fear of being blamed.
Small, reversible experiments framed around learning reduce perceived personal risk and make trying again acceptable.
Topic: Mindset
A product team says their last three retrospectives were “good conversations,” but the same impediments keep returning and no one remembers what was decided. You are facilitating the next retrospective and want evidence afterward that the session produced actionable improvements with clear ownership.
Which artifact/metric best validates this?
Best answer: D
What this tests: Mindset
Explanation: The strongest validation is evidence that improvement experiments were made explicit, assigned to specific people, and are being followed up to completion. A visible action log (or improvement backlog) with owners, due dates, and current status shows accountability and enables inspection and adaptation in the next retrospective. This directly reflects actionable outcomes rather than activity or conversation volume.
Retrospectives create value only when they result in a small set of clear, doable improvement actions that the team can execute and inspect. The best evidence is an improvement/action log that makes ownership and follow-up explicit: each item is specific, has a named owner, a target date or checkpoint, and a transparent status (done/blocked/in progress). This supports accountability without blame and enables the team to start the next retrospective by reviewing outcomes and deciding whether to continue, adapt, or stop an experiment.
Key takeaway: validate retrospectives by tracked, owned actions (and their follow-up), not by how much was discussed or how many people attended.
A tracked action log with explicit ownership and follow-up demonstrates that improvements are concrete, assigned, and progressing.
Topic: Mindset
Midway through a two-week iteration, an urgent production security patch arrives. The team agrees to pull it in using an expedited lane with a strict WIP limit of 1 and to swap out an equivalent amount of planned work to keep focus.
As the agile practitioner, which metric/evidence best validates that the team protected flow while handling the urgent work?
Best answer: A
What this tests: Mindset
Explanation: When urgent work interrupts planned work, the key risk is hidden WIP growth and longer cycle times. A cumulative flow diagram provides direct evidence of whether the expedited work caused bottlenecks, queue buildup, or slowed completion. Stable bands and unchanged cycle time indicate the team maintained flow despite the interruption.
To validate that urgent work was absorbed without harming delivery, focus on flow-based evidence rather than activity counts. A cumulative flow diagram (CFD) makes flow visible by showing how much work sits in each state over time and whether any column widens (growing WIP/queues). It also lets you validate the impact on cycle time and throughput before and after introducing the expedited lane.
If the team truly protected flow, you should see stable CFD bands, no widening queues, and little change in cycle time and throughput after the expedited item entered the system.
Activity outputs can increase even while flow degrades, so flow analytics are the most reliable validation here.
A cumulative flow diagram validates flow by revealing WIP buildup and changes in cycle time when urgent work is expedited.
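The check a CFD supports can be sketched directly from the data behind it: daily counts of items in each workflow state. The sample counts and the "widening" threshold below are illustrative assumptions.

```python
# Sketch: using daily per-state counts (the data behind a CFD) to check
# whether any band widened after the expedited item arrived.
# Sample counts and the threshold are illustrative assumptions.

# Work-item counts per workflow state, one snapshot per day.
daily_counts = {
    "Ready": [5, 5, 6, 6, 5],
    "Dev":   [3, 3, 3, 3, 3],
    "Test":  [2, 2, 4, 5, 6],   # band widening → queue building up
    "Done":  [4, 6, 7, 9, 10],
}

def widening_bands(counts, threshold=2):
    """Flag in-progress states whose band grew by more than `threshold` items."""
    return [state for state, series in counts.items()
            if state != "Done" and series[-1] - series[0] > threshold]

print("Widening bands:", widening_bands(daily_counts))
# → Widening bands: ['Test']
```

A flat "Done" slope or a widening in-progress band is exactly the degradation signal the explanation says the CFD would reveal.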
Topic: Leadership
During a retrospective, two developers start blaming each other for recent production defects. Voices rise, others go silent, and the discussion is no longer about improving the process. As the agile practitioner facilitating, which action SHOULD be AVOIDED to de-escalate and keep the discussion productive?
Best answer: D
What this tests: Leadership
Explanation: To de-escalate using emotional intelligence, the facilitator should preserve psychological safety, acknowledge emotions, and guide the group back to shared goals and facts. Publicly singling out or shaming a person typically triggers defensiveness and intensifies conflict. Productive conflict management focuses on behaviors and outcomes without attacking individuals.
Emotional intelligence in conflict management means noticing emotional cues, regulating your own response, and responding in a way that reduces threat and restores collaboration. In a heated retrospective, effective facilitation includes neutrally describing observations (tone, interruptions), validating that concerns exist, and structuring turn-taking so each person can be heard. Then the conversation can be redirected from blame to learning by grounding in evidence (defect data, workflow constraints) and reconnecting the group to shared goals and working agreements. Publicly calling out an individual as “the problem” attacks identity, reduces psychological safety, and usually escalates the argument rather than enabling constructive problem-solving.
Publicly shaming a person increases defensiveness and escalation rather than restoring psychological safety and productive dialogue.
Topic: Product
A team is upgrading a live payment platform with a new routing component. A defect could misroute transactions, so operational risk is high. Customer support and compliance want the change introduced gradually with clear visibility and the ability to stop quickly if issues appear, while still delivering value as soon as possible.
Which approach is the BEST match?
Best answer: D
What this tests: Product
Explanation: When operational risk is high and stakeholders need tight control, releasing incrementally is preferred over batching. A canary-style rollout with feature toggles and monitoring delivers value earlier while limiting blast radius. It also supports rapid rollback or stopping the rollout if real-user signals indicate issues.
The core decision is whether to release incrementally or in a batch based on risk and stakeholder needs. Here, the change affects a critical production flow and a failure could have significant impact, so minimizing blast radius matters more than optimizing for a single launch event. A canary/gradual rollout (often enabled by feature toggles) lets you expose the increment to a small segment first, monitor leading indicators (errors, customer tickets, transaction anomalies), and expand only when the system proves stable.
A practical incremental release pattern is to put the new routing component behind a feature toggle, expose it to a small traffic segment first, monitor leading indicators, expand only when the system proves stable, and roll back immediately if signals degrade.
Key takeaway: high-risk production changes favor incremental rollout with fast feedback and reversibility, not large batch releases.
A controlled, incremental rollout reduces risk while meeting stakeholders’ need for early value and rapid stopping/rollback.
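A canary ramp guarded by an error-rate check can be sketched as below. The ramp steps, error budget, and monitoring stub are illustrative assumptions, not a specific feature-flag product's API.

```python
# Sketch: canary ramp with a stop/rollback guard.
# Ramp steps, error budget, and the monitoring stub are illustrative.

RAMP = [1, 5, 25, 100]   # percent of traffic routed to the new component
ERROR_BUDGET = 0.02      # halt and roll back if error rate exceeds 2%

def observed_error_rate(percent):
    # Stand-in for real monitoring (errors, tickets, transaction anomalies).
    return 0.005

def rollout():
    for percent in RAMP:
        print(f"Routing {percent}% of transactions to the new router")
        if observed_error_rate(percent) > ERROR_BUDGET:
            print("Error budget exceeded: halting ramp and rolling back")
            return False   # feature toggle off → instant reversal
    return True

print("Rollout complete:", rollout())
```

The key property is reversibility: at every ramp step the blast radius is bounded and the toggle provides an immediate stop, which is what the compliance and support stakeholders asked for.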
Topic: Mindset
An agile team is building a new internal reporting portal. In refinement, the team has spent the last week debating microservices vs. monolith and planning for “10,000 concurrent users,” but no working slice has been shown to users yet. The only data is a stakeholder guess; actual usage patterns are unknown. Morale is dropping because decisions feel stuck.
What is the best next step to shorten the feedback loop and avoid premature optimization?
Best answer: C
What this tests: Mindset
Explanation: The team is blocked by speculation, so the fastest way forward is to generate validated learning. A short, timeboxed spike focuses on the riskiest assumptions (e.g., expected load and constraints) and produces evidence the team can inspect and adapt from. This shortens the feedback loop and prevents optimizing for a problem that may not exist.
The core issue is over-analysis driven by unvalidated assumptions, which delays learning and real progress. In agile, the best next step is to reduce uncertainty with the smallest, fastest learning activity that informs a decision. A timeboxed spike (or experiment) targets the highest-risk assumption—here, performance and usage patterns—and produces data (prototype results, measurements, clarified constraints) the team can use to make a just-enough architecture decision.
A good spike outcome is a clear decision trigger, such as measured usage and load data that either confirms the need for additional scalability work or shows a simpler, just-enough architecture is sufficient for now.
The takeaway is to optimize for learning and feedback first, then evolve the solution based on evidence.
A timeboxed spike generates validated learning quickly, letting the team decide based on evidence rather than speculation.
Topic: Mindset
Midway through a two-week iteration, an external partner reports a likely security vulnerability that could expose customer data if released. The team is already working on three low-value enhancements, one of which is 90% done.
The product owner reviews current work-in-progress with the team and decides to finish the nearly-done item, pause the other two, and re-order the backlog so the security fix is started next to reduce exposure risk and maximize near-term value.
Which agile technique or mindset element best matches this approach?
Best answer: A
What this tests: Mindset
Explanation: The team is using continuous, just-in-time planning to respond to new information. By weighing value and risk, they choose to complete only the nearly-finished work and then re-plan to address a higher-risk, higher-value item next. This reflects agile’s emphasis on adaptability and optimizing outcomes over sticking to an outdated plan.
This scenario demonstrates value- and risk-based prioritization combined with rolling-wave (just-in-time) re-planning. Agile teams keep plans lightweight and update them when new information changes the best path to delivering value or reducing risk. Here, the product owner and team explicitly evaluate the current WIP, complete what is nearly done to avoid waste, and then re-sequence the backlog so the security fix is tackled next because it has high risk and high customer impact. The key is making a deliberate trade-off between finishing WIP and changing direction, guided by value and risk rather than by a fixed plan.
They intentionally adjust the plan and WIP based on current value and risk, finishing only what is near done before switching.
Topic: Mindset
During a retrospective, one senior engineer repeatedly interrupts, dismisses ideas with sarcasm, and asks “Who caused this defect?” The room goes quiet and several people stop contributing.
As the facilitator, what practice best matches the team’s immediate need?
Best answer: A
What this tests: Mindset
Explanation: The immediate problem is psychological unsafety caused by blaming and dismissive behavior. The best facilitation move is to pause and reset (or create) working agreements that define respectful interaction and then actively enforce them so the team can re-engage. This directly targets the undermining behaviors before continuing with problem-solving.
Psychological safety is undermined by interpersonal behaviors like blame, sarcasm, and interruptions because they increase perceived risk of speaking up. In that moment, the facilitator’s job is to stop the harm, make expectations explicit, and protect participation.
Resetting working agreements works because it creates a shared, team-owned standard for how to talk about issues (e.g., focus on systems over individuals, one speaker at a time, assume positive intent) and gives the facilitator permission to intervene when the agreement is violated. Once safety is restored, the team can return to learning-focused techniques (like root-cause analysis) without reinforcing a blame culture.
Re-establishing explicit, shared norms and enforcing them addresses the unsafe behaviors immediately so people can participate without fear.
Topic: Mindset
During a daily stand-up, two developers start blaming each other about a defect and their voices escalate. One makes a personal jab (“Maybe if you actually listened, we wouldn’t be reworking this”), and the rest of the team goes silent; one developer looks visibly upset and stops speaking. The team’s working agreement includes “address issues respectfully” and “discuss disagreements directly, not through blame.” A customer demo is in 48 hours and the team is already above its WIP limit due to a blocked story.
As the agile practitioner, what is the BEST next action?
Best answer: D
What this tests: Mindset
Explanation: The most effective next step is to stop the interaction that is causing harm and re-establish safety before expecting collaboration. A brief pause and private check-ins reduce defensiveness and help you understand impacts and needs. Then a structured, respectful conversation anchored in working agreements enables the team to return to flow and meet the near-term demo goal without normalizing blame.
Psychological safety is a prerequisite for productive collaboration; when a conflict becomes personal and the team goes silent, the immediate priority is to interrupt the harmful dynamic and create a safe path to resolution. The best next action is to pause the meeting, take a short break, and do quick 1:1 check-ins to understand what happened and what each person needs to re-engage. Then facilitate a timeboxed, structured conversation that references the team’s working agreement, focuses on observable facts and impacts, and ends with a concrete next step (e.g., how they will collaborate to remove the blocker and keep WIP under control). This addresses safety while still protecting delivery by restoring teamwork and clarity quickly. The key takeaway is to address the conflict promptly and constructively, not defer it or force a performative resolution.
It immediately restores psychological safety, prevents further harm, and then enables a constructive resolution aligned to the team’s working agreement.
Topic: Mindset
A product team is asked to create a new self-service onboarding flow for a B2B SaaS product. Leadership agrees the business goal (reduce time-to-value), but the team has high uncertainty about users’ real pain points and which solution approach will work. They have only 3 weeks to generate evidence before funding decisions.
Which action SHOULD be AVOIDED when selecting a discovery technique to shorten feedback loops?
Best answer: C
What this tests: Mindset
Explanation: With high uncertainty about user problems and the right solution, the team should use discovery approaches that create fast evidence through research, prototypes, or experiments. Techniques from design thinking and lean startup are designed to shorten feedback loops and reduce risk early. The anti-pattern is locking in requirements and postponing validation until after building.
When uncertainty is high, choose discovery techniques that maximize early learning and minimize cost of change. Design thinking helps when the problem space is unclear by rapidly building empathy, reframing the problem, and testing low-fidelity prototypes with users. Lean startup helps when you can express assumptions as hypotheses and run build-measure-learn experiments (often via an MVP) to validate value and usability with real behavior.
Deferring user validation until after a full build lengthens the feedback loop and increases the risk of building the wrong thing; it also reduces the team’s ability to pivot based on evidence. The goal under the time constraint is fast, direct feedback from actual users or realistic proxies, not document-driven certainty.
It delays learning by replacing rapid experiments with upfront specification and late feedback.
Topic: Delivery
A team is preparing for a stakeholder review. A stakeholder says the last increment was “not really ready” because it didn’t include performance improvements, but the team says it met expectations.
Exhibit: Backlog item & team DoD (excerpt)
Story: As a customer, I can download my monthly statement.
Acceptance criteria:
- Download completes in < 10 seconds for 95% of requests
- Includes PDF and CSV formats
- Audit log entry created
Definition of Done (team):
- Code reviewed, merged to main
- Unit + integration tests passing
- Deployed to staging
- PO acceptance recorded
What should the agile practitioner recommend next?
Best answer: B
What this tests: Delivery
Explanation: Use the story’s acceptance criteria to ground the stakeholder conversation in what was agreed and objectively verifiable, reducing subjective “ready” debates. If performance checks should apply to all work going forward, that belongs in the team’s Definition of Done and should be explicitly agreed to and then used to anchor future feedback.
Acceptance criteria define what “done for this item” means and are the primary anchor for stakeholder review feedback on a specific story. Here, performance is explicitly part of this story’s acceptance criteria, so the next step is to validate the increment against those criteria (e.g., evidence from staging) and use that to decide whether the story is accepted or needs rework.
The Definition of Done is a cross-cutting quality baseline applied to all items. If stakeholders repeatedly expect performance verification as part of “ready,” the team can reduce ambiguity by collaboratively updating the DoD to include an agreed performance-check activity (and ensuring it’s feasible within normal delivery flow). The key is to use acceptance criteria for the current decision and evolve the DoD intentionally for future consistency.
Acceptance criteria anchor whether this story meets expectations, while DoD changes should be an explicit team decision to reduce future ambiguity.
Topic: Mindset
A product team is building a new, AI-assisted triage feature in a customer support platform. The work is highly uncertain and the team has never integrated the chosen model provider before.
A stakeholder asks the team to create a detailed 4-month plan with fixed scope and dates before any development starts. The team agrees and begins writing detailed requirements, but questions and unknowns keep surfacing.
As the agile practitioner, what is the best next step?
Best answer: B
What this tests: Mindset
Explanation: In complex work, predictive planning based on early guesses increases the risk of rework and false certainty. The next best step is to run a timeboxed experiment to test key assumptions (e.g., integration, performance, data needs) and use the results to refine the backlog and plan iteratively. This mitigates risk by learning fast and adapting based on evidence.
Complex problems have high uncertainty and emergent requirements, so a predictive approach (detailed scope and dates before learning) tends to create the illusion of certainty, drive premature commitments, and increase rework when new information appears. The mitigation is to shorten feedback loops and turn unknowns into testable assumptions.
A practical next step is to run a timeboxed spike that tests the riskiest assumptions first, such as the model-provider integration, performance characteristics, and data needs, and then feeds what is learned back into backlog refinement and planning.
This preserves agility by basing planning on evidence and enabling ongoing inspection/adaptation rather than locking in an early guess.
A short experiment (spike) replaces speculative upfront planning with evidence, enabling inspection and adaptation in a complex domain.
Topic: Delivery
You are supporting a product team that is about to commit the next iteration scope. The team wants to “move fast” but has limited direct customer contact.
Exhibit: Product backlog excerpt

| # | Backlog item | Source / hypothesis | Evidence |
|---|---|---|---|
| 1 | One-click CSV export | Sales VP request | None recorded |
| 2 | Saved custom reports | Hypothesis: cut weekly reporting time by 30% | Anecdotal, from 1 power user |
| 3 | Mobile view for dashboard | Support tickets (12 last month) | Ticket IDs linked |
Based on the exhibit, what is the best next action to validate customer needs before committing significant delivery effort?
Best answer: D
What this tests: Delivery
Explanation: The backlog shows multiple potential needs but limited validated evidence, especially for the highest-effort request. The most effective way to avoid building on assumptions is to run a small, timeboxed discovery activity with real, representative customers and clear learning goals, then adjust backlog ordering and scope based on what you learn.
Validated learning comes from direct customer discovery, not from who requested a feature or how confident the team feels. In the exhibit, "One-click CSV export" is a high-influence request with no recorded evidence, and "Saved custom reports" rests on a single anecdote; both are classic signals to test assumptions before committing iteration capacity. A lightweight experiment (interviews, workflow walkthroughs, a clickable prototype, or a concierge test) should define what you want to learn and what result would indicate real value, then feed that learning back into backlog refinement and prioritization.
A good next step is to run a small, timeboxed discovery activity with representative customers, such as interviews, workflow walkthroughs, or a clickable prototype, with explicit learning goals defined up front, and then adjust backlog ordering and scope based on the results.
Reducing technical uncertainty can help later, but it does not validate customer need.
The exhibit shows requests with weak or uneven evidence, so a timeboxed experiment with real users is the fastest way to validate needs before building.
Use the PMI-ACP Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PMI-ACP guide on PMExams.com for concept review, then return here for PM Mastery practice.