Browse Certification Practice Tests by Exam Family

PMI-ACP Practice Test

Practice PMI-ACP with free sample questions, timed mock exams, and detailed explanations in PM Mastery.

PMI-ACP tests whether you can apply agile thinking in real delivery situations, not just recite Scrum vocabulary. If you are searching for PMI-ACP sample exam questions, a practice test, or an exam simulator, start here: this is the main PM Mastery page on web, and you can continue on iOS or Android with the same PM Mastery account.

Interactive Practice Center

Start a practice session for PMI Agile Certified Practitioner (PMI-ACP) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.

Use PM Mastery on iPhone or Android too: download it from the App Store or Google Play and sign in with the same account you use on web. One PM Mastery subscription works across web and mobile.

Free diagnostic: Try the 120-question PMI-ACP full-length practice exam before subscribing.

What this PMI-ACP practice page gives you

  • A direct route into PM Mastery practice for PMI-ACP.
  • Topic drills, mixed sets, and timed practice across mindset, leadership, product, and delivery.
  • Detailed explanations that show why the strongest agile answer is best.
  • A clear free-preview path before you subscribe.
  • The same PM Mastery account across web and mobile.

PMI-ACP exam snapshot

Official source check: Last checked May 5, 2026 against PMI's public PMI-ACP certification page.

PMI's public page lists 120 questions and 180 minutes for PMI-ACP. Confirm current appointment rules, eligibility, and exam-language availability directly with PMI before booking.

  • Vendor: PMI
  • Official exam name: PMI Agile Certified Practitioner (PMI-ACP)
  • Exam code: PMI-ACP
  • Items: 120 total
  • Exam time: 180 minutes
  • Assessment style: scenario-heavy agile decision making

PMI-ACP usually rewards the option that gets faster feedback, protects flow, keeps work visible, and improves value delivery without adding unnecessary process friction.

Topic coverage for PMI-ACP practice

Domain | Weight | Target scored items
Mindset | 28% | 28
Leadership | 25% | 25
Product | 19% | 19
Delivery | 28% | 28
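
If you want to sanity-check these targets, they follow from the weights under one assumption: roughly 100 of the 120 items are scored, with the remainder unscored pretest questions. That split is an assumption for illustration, not a published PMI figure. A minimal sketch:

```python
# Derive each domain's target scored-item count from the published weights.
# SCORED_ITEMS = 100 is an assumption (120 total minus presumed pretest
# items); PMI does not publish the exact scored/pretest split.
WEIGHTS = {"Mindset": 0.28, "Leadership": 0.25, "Product": 0.19, "Delivery": 0.28}
SCORED_ITEMS = 100

targets = {domain: round(weight * SCORED_ITEMS) for domain, weight in WEIGHTS.items()}
print(targets)  # {'Mindset': 28, 'Leadership': 25, 'Product': 19, 'Delivery': 28}
```

The rounded counts match the table above and sum to 100.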

PMI-ACP decision filters for scenario questions

Use these filters when two agile answers both sound reasonable. PMI-ACP usually rewards the choice that improves feedback, flow, transparency, empowerment, and value delivery without adding unnecessary process weight.

Scenario signal | First check | Strong answer usually… | Weak answer usually…
Stakeholders disagree about direction | What feedback, product goal, or value signal is missing? | Makes the work visible, validates assumptions, and uses feedback to reorder or refine. | Escalates immediately or locks scope without learning.
The team is blocked or overloaded | What is limiting flow: WIP, dependency, unclear priority, defect load, or interruption? | Removes the impediment, limits WIP, clarifies priority, and protects sustainable pace. | Adds status meetings or asks people to work harder.
Quality is slipping | What feedback loop failed: acceptance criteria, testing, review, definition of done, or integration? | Improves the system of work and makes quality visible earlier. | Blames individuals or adds a late inspection gate only.
A leader wants more control | What outcome are they trying to protect? | Coaches toward transparency, working agreements, decision rights, and empirical evidence. | Replaces team ownership with command-and-control task assignment.
A product idea looks attractive | What evidence proves customer value and priority? | Uses experiments, backlog refinement, slicing, and stakeholder feedback to test value. | Builds the full feature because a senior stakeholder requested it.

PMI-ACP readiness map

Use this map after each practice run. The fastest improvement usually comes from naming the decision pattern behind a miss, not rereading the same agile terms.

Domain | What the exam is really testing | What PM Mastery practice should force you to decide | Common wrong-answer trap
Mindset | Whether you can apply agile values under pressure | How to preserve feedback, collaboration, transparency, adaptation, and customer value | Choosing a process-heavy answer because it sounds controlled
Leadership | Whether you can serve the team and influence the system | When to coach, remove impediments, protect psychological safety, or enable self-organization | Solving the problem for the team instead of improving team capability
Product | Whether backlog and value decisions are evidence-driven | How to slice, prioritize, validate, and refine based on stakeholder and customer learning | Treating all stakeholder requests as equal requirements
Delivery | Whether the team can keep work flowing predictably | How to use WIP, boards, reviews, tests, retrospectives, and metrics to improve delivery | Optimizing local utilization while flow and quality degrade

How to use the PMI-ACP simulator efficiently

  1. Start with one domain at a time and run a focused drill.
  2. Review every miss until you can explain the agile principle behind the best answer.
  3. Move into mixed sets once you can switch comfortably between team, product, and delivery choices.
  4. Finish with longer timed runs to rehearse pace and judgment across all four domains.

Final 7-day PMI-ACP practice sequence

Window | What to do | What not to do
Days 7-5 | Complete a mixed timed set or the full-length diagnostic, then classify misses by mindset, leadership, product, delivery, or timing. | Do not only count the score; write the agile principle or feedback loop you missed.
Days 4-3 | Drill the weakest domains, especially leadership and delivery if misses involve team dynamics, flow, quality, or impediments. | Do not reread generic agile vocabulary if the real weakness is choosing the best next action.
Days 2-1 | Review recurring traps: command-and-control, late quality gates, stakeholder wish lists, overloaded WIP, and process for its own sake. | Do not start a large new run if fatigue will make scenario reading sloppy.
Exam day | Identify who needs feedback, what value is at risk, and what system constraint blocks progress before choosing. | Do not choose the answer that merely adds reporting, meetings, or approvals.

When PMI-ACP practice is enough

The goal is not to memorize every agile scenario. The goal is to build transferable decision judgment so a new prompt still leads you toward feedback, flow, transparency, team ownership, and value.

If you can complete several varied timed attempts at 75% or higher, explain why your missed answers violated an agile principle or weakened delivery flow, and consistently avoid command-and-control or process-heavy distractors, it is usually time to sit the exam rather than repeating questions you already recognize.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full ACP practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 ACP sample questions with detailed explanations

These are original PM Mastery practice questions aligned to PMI-ACP agile mindset, leadership, product, delivery, and continuous-improvement decisions. They are not PMI exam items, are not copied from any exam sponsor, and should be used to practice agile decision patterns rather than memorize exact wording. Use them to check your readiness here, then continue in PM Mastery with mixed sets, topic drills, and timed mocks.

Question 1

Topic: Domain 2: Leadership

A delivery team has been operating in a command-and-control environment where a functional manager assigns tasks and publicly calls out individuals when defects occur. You coached the manager to stop individual blame, and you helped the team establish working agreements and run blameless retrospectives focused on system improvements.

Which evidence best validates that these undermining behaviors are decreasing?

  • A. More defects are found during UAT than last release
  • B. Anonymous psychological safety/autonomy pulse scores trend upward
  • C. Iteration velocity is increasing for three iterations
  • D. Daily standup attendance is above 95%

Best answer: B

Explanation: To validate progress in addressing blame and command-and-control, use evidence that reflects team safety and empowerment, not output volume or meeting activity. A short, anonymous pulse on psychological safety and autonomy shows whether people feel safe to speak up, learn from failures, and self-manage, which are key outcomes of the actions taken.

When reducing command-and-control and blame, the most meaningful validation is whether team members feel safe to raise issues, take ownership, and make local decisions without fear of punishment. Those behaviors enable real agility: transparency, experimentation, and continuous improvement.

An anonymous psychological safety/autonomy pulse (tracked over several iterations) provides direct evidence of the desired cultural shift because it measures the team’s lived experience of safety and empowerment, which are prerequisites for blameless learning and self-management. The key takeaway is to prefer outcome-oriented evidence tied to mindset and behaviors over activity measures or delivery volume.


Question 2

Topic: Domain 4: Delivery

A product team is split across three time zones and relies on an online Kanban board for coordination. People report surprises during handoffs and can’t tell what is truly in progress.

Exhibit: Board snapshot (today)

Columns: To Do | In Progress (WIP 3) | Review | Done
In Progress cards (5):
- ST-18 "API auth" (owner: ?) last update: 6 days
- ST-22 "UI error states" (owner: Mei) last update: 5 days
- ST-25 "Data mapping" (owner: ?) last update: 4 days
- ST-27 "Email templates" (owner: Sam) last update: 2 days
- ST-29 "Logging" (owner: ?) last update: 7 days

What is the best next action to improve shared visibility for this distributed team?

  • A. Ask the PMO to publish a weekly consolidated status report from the board data.
  • B. Add a daily 60-minute video status meeting to verbally confirm progress and blockers.
  • C. Switch to a new work management tool with more automation and dashboards.
  • D. Create a working agreement for board hygiene: every in-progress card has an owner and is updated daily (including blockers).

Best answer: D

Explanation: The exhibit shows the team’s primary visibility tool is stale and ambiguous (missing owners, old updates, and WIP exceeded). For a distributed team, the fastest way to restore shared understanding is to establish and follow lightweight, explicit habits that keep the board reflecting reality. A simple working agreement for ownership and daily updates enables reliable asynchronous handoffs.

Distributed teams depend on an accurate, shared “source of truth” to coordinate asynchronously. The exhibit indicates the board is not functioning as an information radiator: multiple cards lack owners, updates are several days old, and the in-progress column exceeds its WIP limit, so teammates cannot confidently infer current status or impediments.

A lightweight next step is to agree on and practice explicit board-use policies (a working agreement), such as:

  • Every in-progress item has a single accountable owner
  • Update card status at least once per day (or at end of local day)
  • Make blockers visible on the card so others can act

This improves shared visibility without adding heavy process or tooling; additional meetings may help later but don’t fix the underlying transparency problem.


Question 3

Topic: Domain 1: Mindset

A cross-functional team recently had a production incident. In the follow-up meeting, several team members were publicly blamed, and now the team avoids proposing new ideas. They only pick “safe” backlog items and reject experiments, even when outcomes are uncertain.

As the agile practitioner, what should you do to address fear of failure so experimentation and learning can resume? (Select TWO)

  • A. Facilitate a blameless retrospective and agree on “no blame, focus on learning” working agreements
  • B. Increase individual performance tracking to discourage mistakes
  • C. Introduce small, timeboxed safe-to-fail experiments with clear hypotheses and learning reviews
  • D. Move experimentation to a separate group so delivery work stays “risk-free”
  • E. Escalate to leadership to mandate that the team must innovate more
  • F. Require detailed upfront analysis and sign-off before trying any new approach

Best answers: A, C

Explanation: Fear of failure blocks experimentation because people optimize for self-protection rather than learning. A blameless retrospective with explicit working agreements restores psychological safety by shifting the conversation from who to blame to what to improve. Then, using small, timeboxed safe-to-fail experiments makes learning low-risk and repeatable.

To keep experimentation and learning possible, the team needs psychological safety plus a delivery approach that reduces the perceived personal and product risk of trying something new. A blameless retrospective sets a norm that problems are investigated as system and process opportunities, not personal faults, and working agreements make that norm explicit and enforceable. Safe-to-fail experiments (small scope, timeboxed, with a clear hypothesis and a quick learning review) reduce the cost of being wrong and reframe “failure” as validated learning.

Practical guardrails that help:

  • Timebox experiments (e.g., a short spike) and limit scope
  • Define a hypothesis and what you expect to learn
  • Review results and decide: adopt, adapt, or stop

The goal is to make learning the expected output, not to eliminate all uncertainty.


Question 4

Topic: Domain 4: Delivery

A product team ships a small increment each iteration. In the last sprint review, stakeholders said the new reporting screen was “not what we expected,” even though the team believes the story is complete. The story had a short description but no written acceptance criteria, and the team’s Definition of Done only lists technical steps (build, code review, deploy).

What is the best next step to reduce ambiguity and anchor feedback for upcoming work?

  • A. Update the Definition of Done and add acceptance criteria templates, then refine top backlog items with stakeholders
  • B. Escalate to the sponsor to approve a detailed requirements document for future stories
  • C. Add more developers to speed up delivery so changes can be made faster
  • D. Start the next sprint and ask stakeholders to provide feedback during the sprint review

Best answer: A

Explanation: The fastest way to reduce recurring “not what we expected” feedback is to make expectations explicit before implementation. Acceptance criteria clarify what will be accepted for each item, and a Definition of Done clarifies what “complete” means across the team. Using both during backlog refinement anchors stakeholder feedback to agreed, testable conditions.

When stakeholders react late with “not what we expected,” the gap is usually shared understanding, not delivery speed. Acceptance criteria make each backlog item’s outcomes and boundaries testable, and the Definition of Done sets the team-wide quality/completion baseline. The best next step is to tighten both and immediately apply them in refinement with stakeholders so upcoming stories have clear, reviewable conditions before the team builds.

A practical sequence is:

  • Expand DoD to include product-quality checks (not just technical steps)
  • Use a lightweight acceptance-criteria pattern (examples, rules, or tests)
  • Refine the next items with stakeholders to confirm the criteria

This prevents rework by enabling earlier inspection and feedback against agreed criteria, rather than debating expectations after the fact.


Question 5

Topic: Domain 4: Delivery

A product team received repeated stakeholder feedback that too many defects are being found during UAT, delaying approvals. In response, the team added automated regression tests and updated their Definition of Done to require tests and peer review before a story can be marked complete.

Which metric or evidence best validates that this process improvement is addressing the stakeholder feedback?

  • A. Number of new automated tests created each iteration
  • B. Trend of defects found by stakeholders in UAT per iteration
  • C. Increase in team velocity over the next three iterations
  • D. Percentage of planned stories finished each iteration

Best answer: B

Explanation: Because the problem came from stakeholder feedback about defects during UAT, the strongest validation is evidence that fewer stakeholder-found (escaped) defects are occurring over time. A trend tied to UAT findings shows whether the process change improved quality in the way stakeholders experience it. This creates a clear feedback loop from stakeholder pain to measurable outcome.

To use stakeholder feedback to drive continuous improvement, validate the change with an outcome measure that reflects the stakeholder’s experience. Here, stakeholders complained about defects discovered in UAT, so the most meaningful evidence is a decreasing trend in defects they find during UAT (often called escaped defects). That metric connects the improvement (stronger DoD and test automation) to the stakeholder-relevant result: smoother UAT and faster approvals.

Prefer evidence that is:

  • Outcome-based (stakeholder impact)
  • Trendable over multiple iterations
  • Hard to “game” with activity counting

Activity outputs (like test count) can support the change, but they don’t prove the stakeholder problem is improving.
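
The trend check itself is trivial to express; the per-iteration defect counts below are invented for illustration, and the function name is not from any standard library:

```python
# Validate the improvement with an outcome trend (escaped UAT defects),
# not activity counts. Counts are illustrative, oldest iteration first.
def uat_defects_trending_down(escaped_defects_per_iteration):
    """True when stakeholder-found UAT defects never increase iteration over iteration."""
    pairs = zip(escaped_defects_per_iteration, escaped_defects_per_iteration[1:])
    return all(later <= earlier for earlier, later in pairs)

print(uat_defects_trending_down([9, 7, 4, 3]))  # True: the change is working
print(uat_defects_trending_down([9, 7, 8, 3]))  # False: investigate the spike
```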


Question 6

Topic: Domain 4: Delivery

Mid-sprint, you notice growing cycle time. During the daily stand-up you check the team’s Kanban board.

Exhibit: Board snapshot

WIP limits: Dev=4 Test=2 Deploy=1
Dev: 4 items (0 blocked)
Test: 3 items (2 blocked: "Test env down")
Deploy: 0 items
Note: blocked in Test for 2 days

What is the best next action to identify and address the emerging risk/impediment early?

  • A. Facilitate an immediate swarm to restore/replace the test environment and track the blocker until cleared
  • B. Move the blocked Test items back to Dev and mark them “in progress”
  • C. Ask developers to start new stories so everyone stays fully utilized
  • D. Increase the Test WIP limit to 4 to keep work moving

Best answer: A

Explanation: The visual signal is clear: Test is over its WIP limit and two items have been blocked for multiple days, which is an impediment and a delivery risk. The best response is to use the team routine (stand-up) to focus on the blockage, swarm on removing it, and keep it visible until it is resolved.

Visual management is meant to surface flow problems early so the team can respond before they become missed-sprint or release risks. Here, the Test column exceeds its WIP limit and has multiple items blocked for two days due to an environment outage; this is both an impediment and a growing risk (queues, increased cycle time, delayed feedback). The best next action is to trigger a focused conversation in the stand-up and immediately swarm to remove the impediment (or escalate to the right owner) while keeping the blocker visible and tracked.

Key moves are:

  • Stop starting new work; protect flow.
  • Make the blocker explicit (who owns it, next step, ETA).
  • Swarm/escalate until the environment is restored.

Changing labels or limits hides the risk instead of identifying and resolving it.
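
Both signals in the exhibit (a WIP-limit breach and items blocked past tolerance) are mechanical enough to detect from a snapshot. In this sketch the column names, limits, and blocked days mirror the exhibit, while `flow_alerts` and the one-day blocked tolerance are illustrative assumptions:

```python
# Surface flow risks from a board snapshot: WIP-limit breaches and
# items blocked longer than the team tolerates.
def flow_alerts(columns, wip_limits, blocked_days, max_blocked_days=1):
    alerts = []
    for col, count in columns.items():
        limit = wip_limits.get(col)
        if limit is not None and count > limit:
            alerts.append(f"{col}: WIP {count} exceeds limit {limit}")
    for item, days in blocked_days.items():
        if days > max_blocked_days:
            alerts.append(f"{item}: blocked {days} days, swarm or escalate")
    return alerts

# Values from the exhibit:
alerts = flow_alerts(
    columns={"Dev": 4, "Test": 3, "Deploy": 0},
    wip_limits={"Dev": 4, "Test": 2, "Deploy": 1},
    blocked_days={"Test env down": 2},
)
print(alerts)
```

Dev sits at its limit without breaching it, so exactly two alerts fire, both pointing at Test, which is where the stand-up conversation should focus.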


Question 7

Topic: Domain 4: Delivery

An agile team is building an “automated claim approval” capability. The rules are new and must be validated with the compliance officer and a few claims processors. The team uses 2-week iterations, has a WIP limit of 3 items, and their Definition of Done requires automated tests and audit logging for any releasable increment. The epic is forecast at 6 weeks.

As the agile practitioner, what is the BEST next action to reduce risk and increase learning?

  • A. Hold development until compliance signs off on all approval rules in detail
  • B. Release a fast prototype without tests or audit logging to maximize feedback speed
  • C. Slice the epic into a thin end-to-end increment, complete it to DoD, and get compliance/user feedback
  • D. Start the UI and rule engine as separate parallel workstreams to keep everyone busy

Best answer: C

Explanation: Delivering a small, end-to-end increment is the fastest way to validate uncertain approval rules with compliance and users while keeping the quality bar intact. A thin vertical slice provides usable feedback in days/weeks instead of waiting for the full epic. It also respects the team’s flow policies by focusing on finishing work, not starting more.

The core delivery practice here is incremental delivery: reduce risk and increase learning by shipping or demonstrating small, releasable slices that produce real feedback. With uncertain business rules and compliance constraints, the team should avoid building large batches or component-only work that delays validation.

A good next step is to:

  • Split the epic into a vertical slice (e.g., one claim type, one rule path)
  • Build it to the existing Definition of Done (tests + audit logging)
  • Demo or pilot it with the compliance officer and a few processors
  • Use what you learn to reorder and refine the next slices

This approach accelerates feedback without trading away quality or violating WIP/flow agreements.


Question 8

Topic: Domain 4: Delivery

An agile team reviews flow metrics to improve predictability.

Exhibit: Last 4 weeks (averages)

Lead time (Requested → Done): 18 days
Cycle time (In Progress → Done): 6 days
Time in Requested/Ready before start: 11 days
WIP now: Ready=14, In Progress=5, Test=2

Based on the exhibit, what is the best improvement action to try next?

  • A. Limit the Ready queue and tighten pull/replenishment
  • B. Re-estimate stories to improve lead time accuracy
  • C. Increase iteration length to absorb demand variability
  • D. Add more developers to reduce cycle time

Best answer: A

Explanation: Cycle time is relatively low at 6 days, but lead time is 18 days because items spend 11 days waiting before work starts. That points to a queueing/pull problem rather than an execution-speed problem. The best next experiment is to reduce waiting by limiting the Ready inventory and replenishing based on capacity.

Lead time includes both waiting and working time, while cycle time focuses on the time from start to finish. Here, cycle time (6 days) is much smaller than lead time (18 days), and the exhibit explicitly shows 11 days spent in Requested/Ready before work starts. That indicates the primary improvement opportunity is reducing queue time and improving flow into “In Progress,” not speeding up development.

A practical experiment is to:

  • Set/adjust a WIP limit for “Ready” (or remove “Ready” as a large buffer)
  • Replenish based on actual capacity (pull), not forecasted demand
  • Clarify an entry policy so only truly ready items enter the queue

The key takeaway is to target the largest component of lead time: pre-start waiting caused by excess inventory.
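
The arithmetic behind that takeaway is worth making explicit. The figures come from the exhibit; the 12-day gap between lead and cycle time roughly matches the 11 days measured in Requested/Ready (the remainder is other waiting). The function name and the "waiting dominates" comparison are illustrative choices, not standard metrics:

```python
# Decompose average lead time: time spent waiting before work starts
# versus time actually in progress (cycle time).
def lead_time_gap(lead_time_days, cycle_time_days):
    """Days items spend waiting rather than being worked."""
    return lead_time_days - cycle_time_days

waiting = lead_time_gap(lead_time_days=18, cycle_time_days=6)
if waiting > 6:  # waiting exceeds working time, so target the queue first
    print(f"~{waiting} of the 18 lead-time days are pre-work waiting")
```

When the waiting component dwarfs the working component, adding developers attacks the smaller term; limiting the Ready queue attacks the larger one.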


Question 9

Topic: Domain 4: Delivery

A Kanban team runs retrospectives every two weeks, but most action items are forgotten once urgent work arrives. Cycle time variability is increasing, and leadership wants to see measurable improvement within a month. The team cannot add new recurring meetings and wants to keep changes lightweight.

What should the agile practitioner recommend to best build a culture of relentless improvement?

  • A. Add more detailed status reporting to increase accountability
  • B. Launch a one-time process redesign workshop to fix root causes
  • C. Create an improvement backlog, reserve small capacity, and run experiments
  • D. Ask management to mandate adherence to a standard workflow

Best answer: C

Explanation: Relentless improvement comes from repeating small, measurable change cycles rather than relying on one-off discussions. By treating improvements as first-class work (visible on a backlog/board), funding them with explicit capacity, and running timeboxed experiments, the team can sustain learning without adding meetings. This also makes progress measurable within the month.

The core concept is establishing a repeatable improvement loop (e.g., PDCA/inspect-and-adapt) that survives day-to-day urgency. When retro action items stay “out of band,” they are the first to be dropped; making them part of the normal flow increases follow-through.

A practical loop that fits the constraints is:

  • Capture improvements as backlog items with clear success measures
  • Pull one small improvement at a time with explicit capacity/WIP
  • Review results in existing cadences (retro/replenishment)
  • Keep what works, adjust or stop what doesn’t

This optimizes learning and flow while avoiding heavy upfront redesign or compliance-driven changes that reduce ownership.


Question 10

Topic: Domain 1: Mindset

A cross-functional team says they want “more open discussions,” but decisions still go unchallenged. You are facilitating the next retrospective.

Exhibit: Retrospective notes (excerpt)

Keep: "Fast delivery when we agree."
Stop: "Debating the lead architect's approach."
Try: "Bring concerns 1:1 after the meeting."
Assumption heard: "Compliance means we must finish all docs before coding."
Outcome concern: "Cycle time is growing; rework is rising."

What is the best next action supported by the exhibit?

  • A. Facilitate a structured discussion to surface assumptions, then agree on a small experiment tied to cycle time and rework
  • B. Move concerns to anonymous channels so disagreements do not create conflict in meetings
  • C. Ask the lead architect to present the “right approach” so the team can align faster
  • D. Reinforce that architectural and compliance decisions are out of scope for team discussion

Best answer: A

Explanation: The notes show people are avoiding disagreement in meetings and holding an untested compliance assumption that may be driving worse outcomes. The best response is to create a safe, respectful way to challenge assumptions in the open and focus the team on measurable outcomes. A small experiment lets the team learn without making the discussion personal.

Psychological safety enables teams to challenge assumptions and debate ideas without fear of interpersonal consequences. The exhibit signals avoidance (“don’t debate,” “take it 1:1”) and an unvalidated belief about compliance that may be increasing cycle time and rework.

A good next step is to facilitate a respectful, outcome-focused conversation that:

  • Separates people from ideas (challenge the approach, not the person)
  • Makes assumptions explicit (e.g., what compliance actually requires)
  • Chooses a small, timeboxed experiment (e.g., thin-slice documentation) with success measures (cycle time, rework)

This keeps focus on outcomes while normalizing healthy dissent in the team setting.


Question 11

Topic: Domain 1: Mindset

A department VP asks you to “increase accountability” by publishing a weekly dashboard showing story points completed per developer. The team is worried the numbers will be used to blame individuals.

What is the FIRST clarifying question you should ask before agreeing on what metrics to radiate?

  • A. Should we switch from story points to hours for accuracy?
  • B. What decision will this dashboard drive, and who will use it?
  • C. How many story points did each developer complete last sprint?
  • D. Can HR align performance reviews to the new dashboard?

Best answer: B

Explanation: Start by clarifying the purpose and users of the metric so you can select information that supports learning and better delivery decisions. When the decision and audience are explicit, you can steer toward team/system outcomes and add needed context, which reduces the risk of using metrics to judge individuals. Transparency with intent supports improvement rather than blame.

Radiating metrics is valuable when it creates shared understanding and enables improvement, but metrics get weaponized when they are used to rank people or enforce “accountability” without clear decision intent. Before choosing what to publish, first clarify what decision the metric is supposed to enable and who will consume it. With that purpose defined, you can choose appropriate team or flow measures, add context (work type, constraints, quality), and set working agreements on how the data will and won’t be used. If leadership cannot articulate the decision, any dashboard is likely to become a vanity or compliance report and increase dysfunctional behavior.

Key takeaway: align metrics to decisions and learning, not individual performance judgments.


Question 12

Topic: Domain 3: Product

A product team is asked to “improve the employee onboarding experience,” but stakeholders cannot agree on what the real pain points are. The team has little direct contact with new hires, and existing feedback is mostly anecdotal.

Which practice is the best match to reduce uncertainty before committing to a solution?

  • A. Run design thinking user research and synthesis
  • B. Timebox a technical spike to reduce architecture risk
  • C. Create a story map to sequence delivery
  • D. Build an MVP to test a value hypothesis

Best answer: A

Explanation: When the team is unsure what problem to solve, the fastest feedback comes from directly learning about users’ needs and pain points. Design thinking techniques (empathy, define, and ideation) help create a shared problem statement and success criteria before choosing features. This reduces the risk of building the wrong solution based on assumptions.

The core decision is whether uncertainty is primarily about the problem/needs or about the solution/market response. Here, stakeholders disagree on the pain points and evidence is anecdotal, so the biggest risk is misunderstanding users. Design thinking fits this stage because it uses direct user discovery and synthesis to clarify:

  • Who the users are and their context
  • What outcomes they need (jobs, pains, gains)
  • A shared problem statement to guide solution choices

Lean startup is better once you have a clear hypothesis about a solution or value proposition and need to validate it quickly with experiments. The key takeaway: use design thinking to discover and frame the right problem; use lean startup to validate solution hypotheses.


Question 13

Topic: Domain 2: Leadership

Two senior developers on an agile team repeatedly argue during planning and daily coordination about the “right” technical approach. Other team members have started taking sides, and work is stalling on a critical story needed to meet the iteration goal. As the team facilitator, what is the BEST next step to address this conflict in a timely way and prevent fragmentation and delays?

  • A. Escalate the disagreement to functional managers to decide
  • B. Defer the issue to the iteration retrospective for team discussion
  • C. Privately ask each developer to “tone it down” to reduce tension
  • D. Facilitate a focused conversation to surface interests and agree on a path

Best answer: D

Explanation: Addressing the conflict immediately with facilitated, direct dialogue prevents the team from polarizing and keeps work flowing toward the iteration goal. By surfacing underlying interests and constraints (not positions), the team can make a decision and capture agreements for how to collaborate going forward.

In agile teams, unresolved interpersonal or technical conflicts quickly become flow problems: people avoid collaboration, form sub-groups, and decisions stall. The best response is timely facilitation that brings the right people together to clarify the shared objective (the iteration goal), surface interests and constraints, and agree on a decision-making approach and next steps.

A practical facilitation sequence is:

  • Set a short, neutral working session with clear outcomes
  • Have each person state concerns and desired outcomes (listen and summarize)
  • Reframe to shared interests (customer value, quality, risk, time)
  • Agree on a decision and capture any working-agreement updates

This resolves the issue early, restores collaboration, and reduces the chance of ongoing delays versus postponing, escalating prematurely, or using indirect “keep the peace” coaching alone.


Question 14

Topic: Domain 4: Delivery

Mid-sprint, the team’s Kanban board shows a sharp increase in items in “In Progress,” and several cards are flagged as blocked. Stakeholders are asking you to “fix the WIP problem” immediately.

What should you verify or ask first before deciding what action to take?

  • A. Whether the team should switch to a different agile framework
  • B. Which specific items are blocked, for how long, and what is preventing them from moving
  • C. Whether the product owner can remove scope from the sprint goal
  • D. Whether leadership will approve adding more people to the team

Best answer: B

Explanation: Rising WIP and blocked work are transparency signals that flow is constrained somewhere. Before choosing an intervention, first make the work visible at the right level of detail by identifying which items are blocked, how long they’ve been stuck, and the immediate causes. That information directs a focused action (e.g., swarm to unblock, adjust WIP, escalate a dependency) instead of guessing.

In an agile, transparency-first mindset, you respond to flow warnings (blocked cards, rising WIP) by getting fast clarity on the constraint before changing plans. The most useful first check is to pinpoint the blocked work and the blocker details so the team can address the real impediment rather than treating symptoms.

A practical first step is to confirm:

  • Which work items are blocked and where
  • How long each has been blocked (aging)
  • The blocker type (dependency, review, environment, unclear acceptance criteria)
  • Who can remove it and the next smallest action

Once the constraint is understood, you can decide an immediate response such as swarming, escalating a dependency, tightening WIP, or clarifying acceptance criteria/definition of done. Prematurely changing scope, staffing, or frameworks skips the necessary diagnosis.
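As a rough illustration of the diagnosis step above, a short script can sort blocked cards by age so the team swarms on the longest-blocked work first. The card data, field names, and dates below are hypothetical, not taken from any specific tool:

```python
from datetime import date

# Hypothetical board snapshot; the fields are illustrative, not a real tool's API.
cards = [
    {"id": "A-101", "state": "In Progress", "blocked_since": date(2026, 5, 1), "blocker": "dependency"},
    {"id": "A-104", "state": "In Progress", "blocked_since": date(2026, 5, 4), "blocker": "environment"},
    {"id": "A-107", "state": "Review", "blocked_since": None, "blocker": None},
]

today = date(2026, 5, 6)

# Surface blocked items oldest-first so the worst aging gets attention first.
blocked = [c for c in cards if c["blocked_since"] is not None]
blocked.sort(key=lambda c: c["blocked_since"])

for c in blocked:
    age_days = (today - c["blocked_since"]).days
    print(f'{c["id"]}: blocked {age_days}d in {c["state"]} (blocker: {c["blocker"]})')
```

The output of a sketch like this maps directly to the checklist: which items, where, how long, and what kind of blocker.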


Question 15

Topic: Domain 3: Product

An agile team is building a new self-service feature for a customer. Sprint Reviews are poorly attended by customer users, and feedback usually arrives weeks later via an account manager, leading to frequent rework.

Which TWO actions should the agile practitioner recommend to improve customer engagement and feedback timeliness? (Select TWO)

  • A. Identify customer users/decision makers and set a recurring live demo-feedback cadence each iteration
  • B. Send more detailed weekly status reports to keep stakeholders informed
  • C. Run short usability/acceptance sessions on small increments and add the findings directly to the backlog
  • D. Freeze the backlog for the remaining iterations to prevent additional changes
  • E. Ask the account manager to filter and consolidate all feedback before it reaches the team
  • F. Focus on increasing velocity for two sprints before requesting any more customer time

Best answers: A, C

Explanation: The core issue is a slow, indirect feedback loop caused by low customer participation and delayed, mediated input. Improve engagement by getting direct access to the right customer users/decision makers and creating frequent opportunities to review working increments. This increases transparency, accelerates learning, and reduces avoidable rework.

Insufficient customer engagement shows up as missed/low-attendance reviews, feedback arriving late through intermediaries, and rising rework because assumptions go untested. The most effective change is to shorten and strengthen the feedback loop by engaging real customer users (or empowered decision makers) directly and frequently.

Practical changes that address the problem in the scenario are:

  • Make participation easy and explicit: identify the right attendees (users and decision makers) and establish a recurring review/feedback cadence.
  • Validate in smaller slices: use brief usability/acceptance sessions on thin increments so feedback is timely and immediately backlog-ready.

Status reporting or filtering feedback through intermediaries increases distance from the customer and does not fix the delayed-learning problem.


Question 16

Topic: Domain 2: Leadership

In a backlog refinement meeting, two senior engineers are stuck in a positional debate: “We must rewrite the service in a new framework” vs. “We must only patch the current code.” The discussion is getting personal and no one is exploring alternatives. As the agile facilitator, what is the BEST technique to shift the conversation into shared problem solving?

  • A. Run a 5 Whys to identify the technical root cause
  • B. Use fist-to-five to quickly converge on one approach
  • C. Reframe to interests and co-create a shared problem statement
  • D. Escalate the decision to the product owner to break the tie

Best answer: C

Explanation: When people argue positions, the fastest path back to collaboration is to separate solutions from needs. Facilitate an interest-focused conversation that makes each person’s underlying concerns explicit, then align the group on a shared problem statement and success criteria. Once the problem is shared, the team can generate and evaluate options together.

The core move is reframing: shift from “my solution vs. your solution” to “what are we trying to achieve and what constraints are we balancing?” Ask each person to state the need behind their preferred approach (e.g., reliability, speed to market, maintainability), then capture common goals and disagreements as a neutral problem statement the whole team owns.

Practical facilitation steps:

  • Ask: “What outcome are we optimizing for, and what trade-offs worry you?”
  • Capture interests/constraints on a visible board.
  • Draft a shared problem statement and measurable success criteria.
  • Then invite multiple options and run a lightweight evaluation.

This reduces defensiveness and creates a joint search for the best approach instead of a win/lose debate.


Question 17

Topic: Domain 2: Leadership

A cross-functional agile team is experiencing recurring integration defects, and recent retrospectives have become circular debates with no follow-through. You are facilitating a 60-minute problem-solving session with the goal of leaving with a clear, realistic action plan the team will own.

Which facilitation approach should the facilitator AVOID?

  • A. Co-create a clear problem statement and success criteria
  • B. Have the tech lead decide actions and assign tasks
  • C. Use a timeboxed 5 Whys to explore likely causes
  • D. Define 1-2 actions with owners, dates, and measures

Best answer: B

Explanation: A problem-solving session that produces a usable action plan depends on shared understanding and team ownership. Having a single leader decide and assign work turns facilitation into command-and-control, which often reduces psychological safety and buy-in. The facilitator’s job is to guide the team to agree on causes and commit to specific next steps.

The core facilitation goal is to help the team converge from a messy problem into a small set of owned, testable actions. Effective facilitation creates shared clarity (what problem are we solving and how will we know it improved) and enables broad participation so the resulting plan is credible and supported. A facilitator should guide structure and flow, not make decisions for the team.

Practical elements that help the session end with a clear action plan include:

  • Align on a concise problem statement and desired outcome
  • Use a lightweight root-cause technique (timeboxed)
  • Turn the top finding into 1-2 concrete actions/experiments
  • Assign an owner, due date, and a way to check results

Having a lead unilaterally decide actions is the anti-pattern because it undermines ownership and learning.


Question 18

Topic: Domain 3: Product

A team has spent the last three iterations building an “Advanced Analytics” feature. In the latest review, pilot customers said it’s “nice to have,” only 5% used the current increment, and the sales team reports the top deal blocker is a simple “Export to CSV” capability. The team can only complete one of these in the next iteration.

As the agile practitioner supporting the product owner, what is the best next step to stop investing in low-value work and redirect effort?

  • A. Use the feedback to reorder the backlog and stop Analytics work
  • B. Run a retrospective to address why usage is low
  • C. Escalate to the sponsor to decide which feature to build
  • D. Finish Analytics to avoid wasted effort, then switch focus

Best answer: A

Explanation: Agile value delivery prioritizes outcomes over sunk costs. The team has clear, recent evidence that Analytics is currently low value and that CSV export is the highest-value next bet. The best next step is to adapt by stopping further Analytics investment and reordering the product backlog accordingly.

The core concept is maximizing value by making pivot/stop decisions based on validated learning, not on effort already spent. The review feedback and usage data are strong signals that continuing to invest in “Advanced Analytics” has low near-term return, while “Export to CSV” directly addresses a top customer need and business outcome.

A practical next-step sequence is:

  • Confirm the value signals (review feedback, adoption, sales impacts)
  • Update the product backlog ordering and iteration goal
  • Explicitly stop/de-scope further Analytics work (capture learning, keep usable parts)

This redirects capacity quickly while keeping decision-making close to the product backlog rather than delaying for completion or escalating unnecessarily.


Question 19

Topic: Domain 2: Leadership

An agile team has missed its iteration goal twice. Their workflow board shows growing work in progress and several stories bouncing back from testing due to rework.

In daily standups, two senior developers frequently talk over each other about “the right approach,” while the rest of the team stays quiet. After standup, decisions happen in private chats and two people sometimes implement competing solutions. The product owner says priorities have been stable.

What is the most likely underlying cause?

  • A. Insufficient automated testing is causing the rework
  • B. Conflict is being avoided instead of resolved promptly
  • C. The team needs stricter WIP limits to improve flow
  • D. The product owner is changing priorities too frequently

Best answer: B

Explanation: The symptoms point to interpersonal and decision-making breakdown more than a pure process or quality issue. The loud disagreement in public meetings, silence from others, and private side decisions indicate conflict avoidance and lack of timely facilitation. That fragmentation then manifests as duplicated work, increased WIP, and rework.

The core issue is unmanaged conflict: a visible technical disagreement is not being addressed in a shared, timely way, so it shifts into side channels and creates competing implementations. When conflict isn’t surfaced and resolved (e.g., with clear decision-making, facilitation, and working agreements), teams split into factions, collaboration drops, and delivery suffers through duplication, rework, and missed goals.

A timely approach would include:

  • Make the conflict explicit and timebox a facilitated conversation.
  • Establish how decisions will be made and documented.
  • Reinforce psychological safety so quieter voices participate.

Process and quality improvements can help, but they won’t fix the fragmentation if the underlying conflict remains unresolved.


Question 20

Topic: Domain 3: Product

During a backlog refinement session, the product owner proposes a story:

“As a customer support agent, I want to see a customer’s current risk score on the profile so I can prioritize my responses.”

Stakeholders add, “It should be accurate and updated in real time,” but no further details are provided. What should you ask first to help refine testable acceptance criteria?

  • A. Should we raise or lower the WIP limits to protect flow while implementing this?
  • B. Can we agree to apply the same definition of done as the last release?
  • C. What specific latency and accuracy thresholds would make “real time” and “accurate” acceptable?
  • D. What is the team’s average velocity so we can estimate delivery?

Best answer: C

Explanation: To make acceptance criteria testable, you first clarify ambiguous terms into observable, measurable conditions. “Accurate” and “real time” are undefined and can’t be verified without thresholds. Asking for specific latency and accuracy expectations enables clear criteria and shared understanding before design or estimation discussions.

Acceptance criteria should describe verifiable conditions of satisfaction. When a backlog item includes ambiguous terms (for example, “real time,” “fast,” “user-friendly,” or “accurate”), the most useful first step is to ask for measurable thresholds and boundaries so testers and stakeholders can objectively confirm whether the story is done.

For this story, clarify items such as:

  • Maximum acceptable data freshness delay (e.g., seconds/minutes)
  • What “accuracy” means (source of truth, rounding, error tolerance)
  • Any constraints or edge cases that must be covered

Once these are defined, the team can express them as testable acceptance criteria and then proceed to sizing and delivery planning with less rework.
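Once thresholds are agreed, "accurate" and "real time" can be restated as a check a tester can actually run. The specific numbers below (a 60-second freshness limit and a 0.5-point score tolerance) are purely illustrative assumptions, not values from the scenario:

```python
# Illustrative thresholds the team might agree on; the numbers are assumptions
# for the example, not PMI guidance.
MAX_FRESHNESS_SECONDS = 60   # "real time" = score no older than 60 seconds
MAX_SCORE_ERROR = 0.5        # "accurate" = within 0.5 of the source-of-truth score

def meets_acceptance(displayed_score: float, source_score: float, age_seconds: float) -> bool:
    """Testable restatement of 'accurate and updated in real time'."""
    fresh_enough = age_seconds <= MAX_FRESHNESS_SECONDS
    accurate_enough = abs(displayed_score - source_score) <= MAX_SCORE_ERROR
    return fresh_enough and accurate_enough

print(meets_acceptance(72.0, 72.3, 45))    # fresh and within tolerance
print(meets_acceptance(72.0, 75.0, 45))    # fresh but inaccurate
print(meets_acceptance(72.0, 72.1, 120))   # accurate but stale
```

The point is not the code itself but that each vague adjective becomes an observable pass/fail condition.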


Question 21

Topic: Domain 4: Delivery

A product team has missed the last two iteration goals. Their board shows more items “in progress” each day, and many stories return from testing with rework. Developers want time to refactor and add automated tests, but the product owner keeps prioritizing only new features because “customers can’t see technical work.” The team’s definition of done is “coded and demoed,” with testing and cleanup often deferred.

What is the most likely underlying cause?

  • A. Stakeholders are frequently changing requirements mid-iteration
  • B. The team’s estimates are too inaccurate to plan iteration scope
  • C. Quality and maintainability work is not built into prioritization and done criteria
  • D. There are no WIP limits on the board

Best answer: C

Explanation: The symptoms point to unsustainable delivery: work is started faster than it can be finished, and “done” allows deferring tests and cleanup. When maintainability is excluded from backlog decisions and the definition of done, technical debt grows and shows up as defects, rework, and missed iteration goals. The dev-PO conflict is consistent with misaligned value definition around product health.

This situation reflects a short-term feature-output bias that underfunds long-term product health. A weak definition of done (“coded and demoed”) and a prioritization approach that rejects refactoring/testing create a system where hidden work accumulates as technical debt and returns later as defects and rework. That rework increases unfinished work on the board, reduces throughput, and makes iteration goals unreliable.

To balance near-term delivery with maintainability, teams typically:

  • Make quality non-negotiable via a shared definition of done
  • Represent enablers/tech debt explicitly in the backlog
  • Allocate capacity intentionally for sustainability work

Flow controls or better estimation can help, but they won’t resolve the root cause if the product is allowed to ship with deferred quality work.


Question 22

Topic: Domain 4: Delivery

Mid-sprint, a stakeholder reviews the latest increment and says, “This isn’t what we wanted; redo it.” The team’s story is marked “Done” on the board, but the feedback is vague and opinions differ about what “done” means.

What should the agile practitioner ask or verify FIRST to anchor the feedback and reduce ambiguity?

  • A. Should the stakeholder join the next retrospective to discuss expectations?
  • B. Can the team commit to finishing the rework in the current sprint?
  • C. Who specifically approved this story before it was started?
  • D. Which acceptance criteria and Definition of Done were agreed for this story?

Best answer: D

Explanation: The fastest way to turn subjective feedback into actionable decisions is to compare the increment to the team’s explicit, pre-agreed expectations. Verifying the story’s acceptance criteria and the team’s Definition of Done establishes whether the work actually met “done” and what gap (if any) exists. That shared baseline reduces debate and prevents rework driven by shifting interpretation.

When a stakeholder says “not what we wanted,” the immediate risk is ambiguous feedback leading to unplanned churn. In agile delivery, acceptance criteria define what value and behavior the story must satisfy, and the Definition of Done defines the quality/completeness bar for calling work “done.” Verifying what was agreed (and whether it was met) creates a clear reference point to:

  • distinguish a missed requirement from a new request
  • decide whether the item is actually complete or should be reopened
  • capture any new expectations as additional backlog work

Only after anchoring the conversation in acceptance criteria and DoD should you discuss schedule trade-offs or process improvements. The key takeaway is to make feedback objective and testable before negotiating scope, timing, or commitments.


Question 23

Topic: Domain 1: Mindset

Midway through an iteration, a team demos a thin slice of a new workflow to users and learns that a key assumption is wrong. The product owner wants to immediately swap in new high-value stories, but the team worries this will obscure the iteration goal and cause thrashing.

Which agile principle or practice best fits how the team should respond while keeping goals clear?

  • A. Add formal change control approvals before adjusting scope or priority
  • B. Increase individual utilization by assigning more parallel work to stay busy
  • C. Protect the sprint plan by deferring any changes until the next iteration
  • D. Revisit the iteration goal and re-order the backlog using the new learning

Best answer: D

Explanation: The situation calls for responding to feedback without losing alignment. Updating priorities using the new information is appropriate, but it should be done by reaffirming or adjusting the iteration goal so the team has a stable north star. This balances adaptability with clarity and reduces churn.

Agile teams embrace change by using fast feedback to adapt plans and priorities, but they do so in a way that preserves transparency and alignment. When new learning invalidates assumptions, the right response is to re-check the goal (the “why” for the near term) and then re-order work to maximize value against that goal. If the goal truly changes, make that explicit and re-plan accordingly; if it stands, use it as a filter to decide what to swap in or defer. The key is intentional re-planning based on evidence, not rigidly freezing a plan or constantly churning without a stable objective.

A good heuristic is: validate learning → confirm/adjust goal → re-prioritize → communicate impact.


Question 24

Topic: Domain 3: Product

A platform team handles a steady stream of small enhancements and production support requests. Work items vary in size, priorities can change daily, and the team does not plan in fixed-length iterations. Stakeholders want a simple way to validate delivery progress and quickly see where work is getting stuck.

Which visualization artifact best fits this workflow and validates progress?

  • A. Percent of tasks completed per person each week
  • B. Iteration task board showing tasks for the current sprint
  • C. Gantt chart showing planned start and finish dates
  • D. Kanban board with WIP limits and explicit workflow states

Best answer: D

Explanation: For continuous, unplanned incoming work, progress is best validated by visualizing flow through defined states and making constraints explicit. A Kanban board with WIP limits shows what is in progress, what is blocked, and where queues are forming, enabling fast conversations about stuck work. This directly supports daily reprioritization without relying on timeboxed plans.

The core decision is whether the team’s workflow is timeboxed (iteration-based) or continuous flow. When priorities shift daily and work arrives unpredictably, a pull-based flow system is typically the best fit. A Kanban board makes progress visible by showing items moving through explicit states (for example, Ready, In Progress, Review, Done) and by using WIP limits to surface bottlenecks early. This provides strong evidence of progress and readiness because the board highlights queues, blocked work, and throughput over time without requiring a fixed sprint commitment. The closest alternative is an iteration task board, which is better evidence when work is planned and tracked inside a timebox.
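To make the WIP-limit mechanism concrete, here is a minimal sketch that flags columns exceeding their limit. The workflow states, limits, and card counts are invented for illustration:

```python
from collections import Counter

# Hypothetical explicit workflow states with per-column WIP limits.
wip_limits = {"Ready": 5, "In Progress": 3, "Review": 2}

# Current column of each card on the board (invented data).
board = ["In Progress", "In Progress", "In Progress", "In Progress",
         "Review", "Ready", "Ready"]

counts = Counter(board)
for state, limit in wip_limits.items():
    n = counts.get(state, 0)
    flag = "OVER LIMIT" if n > limit else "ok"
    print(f"{state}: {n}/{limit} {flag}")
```

An over-limit column is exactly the early bottleneck signal described above: it prompts the team to finish or unblock work before pulling more.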

PMI-ACP agile delivery map

Use this map after the sample questions to connect individual items to agile mindset, value delivery, stakeholder collaboration, team performance, adaptive planning, problem detection, and continuous improvement.

    flowchart LR
	  S1["Agile project or team scenario"] --> S2
	  S2["Clarify value, user, and team context"] --> S3
	  S3["Inspect backlog, flow, risk, and feedback"] --> S4
	  S4["Choose facilitation, coaching, or adaptation"] --> S5
	  S5["Deliver increment and learn"] --> S6
	  S6["Improve team, system, and outcomes"]

Quick Cheat Sheet

  • Mindset: Favor transparency, inspection, adaptation, collaboration, and incremental value.
  • Backlog: Refinement, ordering, slicing, and acceptance criteria guide delivery.
  • Team performance: Servant leadership, facilitation, impediment removal, and working agreements matter.
  • Feedback: Reviews, retrospectives, metrics, and user input guide adaptation.
  • Value: Agile answers prioritize validated value over completing low-value work.

Mini Glossary

  • Agile mindset: Adaptive approach emphasizing value, feedback, collaboration, and continuous learning.
  • Backlog: Ordered list of work, outcomes, or requirements needing refinement and delivery.
  • Increment: Usable product or solution result created through completed work.
  • Servant leadership: Leadership style focused on enabling teams, removing impediments, and developing capability.
  • Flow: Movement of value through a system with attention to bottlenecks, WIP, and feedback.

Focused sample questions

Use these child pages when you want focused PM Mastery practice before returning to mixed sets and timed mocks.

In this section

  • PMI-ACP: Mindset
    Try 10 focused PMI-ACP questions on Mindset, with answers and explanations, then continue with PM Mastery.
  • PMI-ACP: Leadership
    Try 10 focused PMI-ACP questions on Leadership, with answers and explanations, then continue with PM Mastery.
  • PMI-ACP: Product
    Try 10 focused PMI-ACP questions on Product, with answers and explanations, then continue with PM Mastery.
  • PMI-ACP: Delivery
    Try 10 focused PMI-ACP questions on Delivery, with answers and explanations, then continue with PM Mastery.
  • Free PMI-ACP Full-Length Practice Exam: 120 Questions
    Try 120 free PMI-ACP questions across the exam domains, with answers and explanations, then continue in PM Mastery.
Revised on Friday, May 15, 2026