
PMI-ACP: Delivery

Try 10 focused PMI-ACP questions on Delivery, with answers and explanations, then continue with PM Mastery.


Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Field            | Detail
Exam route       | PMI-ACP
Topic area       | Delivery
Blueprint weight | 28%
Page purpose     | Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Delivery for PMI-ACP. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

Pass          | What to do                                                | What to record
First attempt | Answer without checking the explanation first.             | The fact, rule, calculation, or judgment point that controlled your answer.
Review        | Read the explanation even when you were correct.           | Why the best answer is stronger than the closest distractor.
Repair        | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer      | Return to mixed practice once the topic feels stable.      | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 28% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Delivery

An agile team is building a new customer portal for paying clients (external customers). Internal teams—Compliance and the Support Center—must approve workflows and will operate the portal.

Lately the team is missing iteration goals and rework is increasing. WIP is growing in the “Ready for approval” column even though development WIP limits are enforced. Sprint reviews include external beta users who give positive feedback, but Compliance often responds days later via email with major changes, and conflict between the team and Compliance is escalating.

What is the most likely underlying cause?

  • A. Lack of a clear definition of done
  • B. One-size-fits-all customer engagement approach
  • C. Inadequate development WIP limits
  • D. Poor estimation causing unrealistic iteration commitments

Best answer: B

What this tests: Delivery

Explanation: The clues point to late internal feedback and approvals as the driver of rework and growing queue time. Internal customers who approve and operate the solution typically need a tighter collaboration model (shared cadence, quick decision path, early review of constraints) than external customers providing market feedback. Using the same engagement approach for both groups leads to delayed discovery of nonfunctional and regulatory needs.

This is a customer engagement mismatch, not primarily a team execution problem. External customers are validating value and usability in sprint reviews, and they are satisfied—so the product discovery loop for externals is working. The bottleneck and rework are coming from internal customers (Compliance and Support) who act as approvers and operators; they need earlier, higher-frequency collaboration and clear decision/response expectations so constraints are surfaced before work is built.

A better internal-customer approach typically includes:

  • An identified internal representative empowered to decide (or a defined decision path)
  • Regular touchpoints aligned to the team cadence (not ad-hoc email)
  • Early review of policies, workflow impacts, and operational readiness

Key takeaway: segment customers (internal vs. external) and design fit-for-purpose feedback loops for each.

The team is using an external-style feedback loop and not collaborating with internal approvers/operators early and continuously, causing late constraint discovery and rework.


Question 2

Topic: Delivery

A product director asks your agile team to “increase velocity by 20%” and starts publishing a leaderboard comparing teams by story points completed each iteration. Within two iterations, the team begins splitting stories smaller and inflating estimates; throughput is flat, WIP is rising, and escaped defects increased.

What is the BEST next action to correct this metric anti-pattern?

  • A. Re-baseline story point estimates to restore accurate velocity
  • B. Set individual point targets to increase accountability
  • C. Stop estimating entirely and measure hours spent per story
  • D. Align on metric purpose and replace with flow and quality metrics

Best answer: D

What this tests: Delivery

Explanation: The anti-pattern is using velocity as a performance and comparison metric, which predictably drives gaming and hides real delivery signals. The best correction is to realign stakeholders on why metrics exist and switch to a balanced set that supports forecasting and improvement—such as flow (cycle time/throughput/WIP) plus quality indicators. This preserves transparency and reduces incentives to manipulate estimates.

Velocity (story points per iteration) is a team-local planning aid, not a KPI for comparing teams or judging performance. When leaders use it as a target, teams often respond by changing the points rather than improving delivery (splitting stories to look “faster,” inflating estimates), while real outcomes degrade (higher WIP, more escaped defects, flat throughput).

Best next action is to reset the conversation about the metric’s purpose and replace/augment it with metrics that are harder to game and directly reflect delivery health:

  • Flow: WIP, cycle time, throughput
  • Quality: escaped defects, defect trends, automation/DoD adherence

Keep any remaining use of velocity strictly for the team’s own forecasting and capacity discussions, not external comparison.

It removes the incentive to game points by shifting to decision-ready measures (e.g., cycle time, throughput, defects) aligned to delivery outcomes.


Question 3

Topic: Delivery

A product team’s key customer attends reviews and sends frequent “small improvements” during the iteration. The team keeps swapping in these requests immediately, and defects have increased while planned stories regularly slip. The customer is now dissatisfied with quality but continues to push for more changes.

What is the best next step to keep collaboration focused on value and quality rather than scope churn?

  • A. Align with the product owner and customer on goals, quality criteria, and a rule to funnel new requests into the backlog for prioritization
  • B. Ask the product owner to pause customer participation until defect rates drop
  • C. Accept the customer’s mid-iteration requests to demonstrate responsiveness
  • D. Escalate to a governance body to approve any change before work starts

Best answer: A

What this tests: Delivery

Explanation: Effective customer collaboration uses frequent feedback to improve outcomes, while protecting flow and quality. The next step is to co-create clear goals and quality expectations (e.g., acceptance criteria/Definition of Done) and agree that new requests are captured and prioritized through the backlog, not swapped in ad hoc. This keeps change responsive but intentional and value-driven.

The core issue is not “too much customer involvement,” but unmanaged feedback that bypasses prioritization and quality safeguards. A healthy next step is to collaborate with the product owner and customer to (1) restate the product/iteration goal, (2) make quality explicit through acceptance criteria and a Definition of Done, and (3) agree on a simple decision path: new ideas go to the backlog and are ordered during refinement/planning based on value and risk.

This preserves customer collaboration while reducing context switching, unplanned work, and rushed testing that drive defects. The team can still respond quickly—by re-prioritizing intentionally at the right cadence rather than continuously reshuffling in-flight work.

It preserves fast feedback while creating shared outcomes and quality expectations, and it channels new ideas into value-based prioritization instead of mid-iteration thrash.


Question 4

Topic: Delivery

A product team uses a Kanban board and tracks flow metrics weekly. Over the last month, their average lead time increased from 8 days to 16 days, but average cycle time stayed roughly 6 days. During the same period, WIP on the board steadily grew, several items were reopened for rework after late clarification, and daily stand-ups often turn into arguments about what to start next.

Which underlying cause is most likely driving these signals?

  • A. The team lacks the technical skills to test and integrate changes efficiently
  • B. Work is being started without effective WIP limits and clear pull/prioritization policies
  • C. Stakeholders are providing feedback too late in the release cycle
  • D. Developers are taking longer to complete work once they start it

Best answer: B

What this tests: Delivery

Explanation: Lead time doubling while cycle time remains stable indicates items are spending more time waiting before (or between) active work, not that execution time is worsening. The growing WIP and frequent conflict about what to start next are classic signs of pushing too much work into the system without explicit pull rules. Adding WIP limits and clear replenishment/prioritization policies reduces queues, improves focus, and lowers lead time.

Cycle time measures how long work takes once started; lead time includes both working time and waiting time. In the scenario, cycle time stays about the same while lead time grows, which strongly suggests that waiting/queue time is increasing. The additional clues—WIP steadily rising and repeated arguments about what to start—indicate the system is allowing too much work to be started and is missing explicit pull policies (including WIP limits and a clear replenishment/prioritization approach). That drives longer queues (higher lead time), context switching, and late rework when items are started before they are truly ready. The most direct improvement opportunity is to limit WIP and make pull/prioritization policies explicit so work starts only when capacity and clarity exist.

Rising lead time with stable cycle time and growing WIP points to increased waiting/queue time from over-starting and unclear pull policies.
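To make the lead/cycle/wait relationship concrete, here is a minimal Python sketch. The work items and dates are invented for illustration (chosen so the averages roughly mirror the scenario); they are not part of the question.

```python
from datetime import date

# Hypothetical work items: (requested, started, finished) dates.
items = [
    (date(2026, 3, 1), date(2026, 3, 9), date(2026, 3, 15)),
    (date(2026, 3, 2), date(2026, 3, 13), date(2026, 3, 18)),
    (date(2026, 3, 4), date(2026, 3, 15), date(2026, 3, 21)),
]

# Lead time = finished - requested (includes waiting).
lead_times = [(done - requested).days for requested, _, done in items]
# Cycle time = finished - started (active work only).
cycle_times = [(done - started).days for _, started, done in items]
# Waiting time is the gap between the two.
wait_times = [lead - cycle for lead, cycle in zip(lead_times, cycle_times)]

avg = lambda xs: sum(xs) / len(xs)
print(f"avg lead time:  {avg(lead_times):.1f} days")   # ~15.7
print(f"avg cycle time: {avg(cycle_times):.1f} days")  # ~5.7
print(f"avg wait time:  {avg(wait_times):.1f} days")   # ~10.0
```

With stable cycle times, the entire growth in lead time shows up in the wait column, which is exactly the queue-time signal the question describes.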


Question 5

Topic: Delivery

A team releases a thin slice of a new “quick reorder” feature to 30 pilot customers to test the hypothesis that it will reduce time-to-reorder. After two weeks, adoption is low and interviews show customers mostly want “reorder from past invoices” instead. The next iteration is planned to add more UI polish to quick reorder.

What is the best agile practice to apply next?

  • A. Lower WIP limits to increase flow and finish the planned quick reorder items faster
  • B. Tighten the definition of done to ensure higher quality before expanding the feature
  • C. Run a retrospective to improve the team’s collaboration and estimation approach
  • D. Hold a pivot/persevere review using the pilot results, then re-prioritize and re-scope the product backlog

Best answer: D

What this tests: Delivery

Explanation: The pilot data and interviews invalidate the current value hypothesis, so the team should decide whether to pivot or persevere based on validated learning. The next step is to translate that decision into an updated backlog order and scope so investment follows customer value signals rather than the original plan.

Early feedback is most useful when it is compared to a clear hypothesis about customer value and then used to make an explicit decision: continue, adjust priorities, or pivot. Here, low adoption plus consistent interview signals indicate the current slice is not solving the primary customer problem, so continuing with “more of the same” work (UI polish) risks waste. A short pivot/persevere conversation with the product owner and key stakeholders should convert the learning into concrete delivery choices (drop, re-scope, or reorder backlog items) and update the near-term plan accordingly. The key takeaway is to let outcome-focused evidence change priorities before adding more scope to a weak signal.

Validated learning from early feedback should drive a pivot or re-scope and an updated backlog order before investing further.


Question 6

Topic: Delivery

A Kanban team’s average cycle time has increased over the last month. In the retrospective, they identify waste from multitasking and long waits for code reviews. They quickly estimate impact vs. effort for several improvement ideas.

Exhibit: Improvement options (team estimate)

1) Add WIP limits + explicit pull policy for reviews: Impact HIGH, Effort LOW
2) Build CI/CD pipeline + automated regression suite: Impact HIGH, Effort HIGH
3) Add 2 more developers to the team: Impact MED, Effort MED
4) Timebox daily standup to 10 minutes: Impact LOW, Effort LOW

Which action should the team do first?

  • A. Timebox the daily standup to 10 minutes
  • B. Request two additional developers be assigned immediately
  • C. Add WIP limits and an explicit pull policy for code reviews
  • D. Build a CI/CD pipeline and automated regression suite

Best answer: C

What this tests: Delivery

Explanation: When prioritizing waste reduction, start with changes that deliver high impact for low effort so you can learn fast and improve flow quickly. A WIP limit and explicit pull policy directly targets the team’s stated wastes (multitasking and review waiting) and can be implemented immediately as a short experiment. This preserves inspection and adaptation while minimizing upfront cost.

A practical way to eliminate waste is to prioritize improvements by impact and effort, then run a small experiment on the best candidate. In this scenario, the largest waste signals are waiting and multitasking, so the first move should be a low-effort change that improves flow through the constrained step (code reviews).

A good next-step sequence is:

  • Select the highest-impact, lowest-effort option
  • Implement it as a short, timeboxed experiment
  • Observe flow metrics (e.g., cycle time, queue time) and adjust

Starting with a large, high-effort initiative delays learning, while lower-impact tweaks won’t materially reduce the identified waste.
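The impact/effort selection rule above can be sketched as a simple ranking. The numeric scores and option labels below are illustrative assumptions, not a prescribed scoring model:

```python
# Hypothetical ordinal scores: higher impact is better, lower effort is cheaper.
IMPACT = {"HIGH": 3, "MED": 2, "LOW": 1}
EFFORT = {"LOW": 1, "MED": 2, "HIGH": 3}

options = [
    ("Add WIP limits + pull policy for reviews", "HIGH", "LOW"),
    ("Build CI/CD pipeline + regression suite", "HIGH", "HIGH"),
    ("Add 2 more developers", "MED", "MED"),
    ("Timebox standup to 10 minutes", "LOW", "LOW"),
]

# Quick-win rule: highest impact first, lowest effort as tie-breaker.
ranked = sorted(options, key=lambda o: (-IMPACT[o[1]], EFFORT[o[2]]))
print("Try first:", ranked[0][0])
```

The tuple sort key encodes the decision rule directly: impact dominates, and effort only breaks ties, which is why the WIP-limit experiment outranks the equally high-impact but high-effort CI/CD build.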

It is the highest-impact, lowest-effort change and can be tested quickly to reduce waiting and multitasking.


Question 7

Topic: Delivery

An agile team completes a waste walk and estimates improvement ideas using an impact/effort view.

Exhibit: Improvement ideas (team estimates)

  • Add WIP limits and an explicit policy for handling interrupts: High impact / Low effort
  • Build a full CI pipeline with automated regression tests: High impact / High effort
  • Run a half-day workshop to standardize story writing: Medium impact / Medium effort
  • Redesign the service architecture to reduce dependencies: Medium impact / Very high effort

Which action should the team try first?

  • A. Build a full CI pipeline with automated regression tests
  • B. Add WIP limits and an explicit policy for handling interrupts
  • C. Redesign the service architecture to reduce dependencies
  • D. Run a half-day workshop to standardize story writing

Best answer: B

What this tests: Delivery

Explanation: When choosing waste-reduction actions using impact and effort, start with quick wins: changes that deliver high impact for low effort. This increases learning and throughput sooner, while keeping the cost of experimentation small.

Using an impact/effort approach for continuous improvement, teams typically start with the highest-value, lowest-cost experiments first. A high-impact/low-effort improvement reduces waste quickly, builds momentum, and creates fast feedback on whether the change actually improves flow.

A simple decision rule is:

  • Prefer high impact / low effort first
  • Defer high effort items until you have evidence, capacity, and a clearer ROI

In the exhibit, adding WIP limits plus an explicit interrupts policy is the only option that is both high impact and low effort, making it the best first action.

It is a clear “quick win” (high impact, low effort), maximizing waste reduction quickly with minimal investment.


Question 8

Topic: Delivery

A product team consistently fills each iteration with feature work. Retrospective action items (automation, refactoring, and workflow tweaks) are repeatedly pushed out, and defects are starting to rise. The product owner wants delivery to continue, but the team also needs time to improve.

Which practice best matches an agile approach to balancing delivery and improvement while protecting time for both?

  • A. Increase upfront design and documentation to prevent future rework and create improvement time
  • B. Reserve explicit capacity each iteration for improvement items and plan them like backlog work
  • C. Defer improvement work until after the next major release to maximize feature output now
  • D. Plan a periodic hardening iteration dedicated only to defect fixes and technical debt payoff

Best answer: B

What this tests: Delivery

Explanation: A sustainable way to improve is to make improvement work visible and allocate capacity for it every iteration. By treating improvement items like any other backlog work, the team can protect time for continuous improvement while still delivering features. This reduces the pattern of endlessly deferring improvements and accumulating quality and flow problems.

The core idea is to balance delivery and improvement by explicitly budgeting for improvement work as part of normal delivery, not as an afterthought. When teams fill 100% of capacity with feature work, improvement items (retrospective actions, automation, refactoring) get deprioritized, creating a reinforcing cycle of more defects, more interruptions, and less time to improve.

A practical way to protect time for both is to:

  • Make improvement items visible (backlog/board)
  • Allocate a fixed slice of capacity each iteration for them
  • Plan and track them with the same discipline as feature work

The key takeaway is that continuous improvement is most effective when it is integrated into the team’s regular cadence, not postponed to a future “cleanup” phase.

Allocating and protecting a slice of capacity for improvement each iteration balances flow and enables continuous improvement without stopping delivery.
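The capacity-budgeting idea can be sketched in a few lines. The 15% slice and the 40-point capacity are assumptions for illustration; teams pick their own split:

```python
# Assumed improvement slice of iteration capacity (commonly 10-20%).
IMPROVEMENT_SLICE = 0.15

def split_capacity(total_points: int, slice_: float = IMPROVEMENT_SLICE) -> tuple[int, int]:
    """Return (feature_points, improvement_points) for one iteration."""
    improvement = round(total_points * slice_)
    return total_points - improvement, improvement

features, improvement = split_capacity(40)
print(f"feature work: {features} pts, improvement work: {improvement} pts")
```

Because the slice is fixed up front and planned like backlog work, improvement capacity is protected by default rather than negotiated away each iteration.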


Question 9

Topic: Delivery

A product team completes a retrospective every two weeks, but the same issues keep resurfacing and few changes stick. You are asked to help the team build a culture of relentless improvement through repeated improvement cycles.

Which action should you NOT take?

  • A. Require management sign-off before trying process changes
  • B. Run small experiments with measures and adjust next cycle
  • C. Visualize improvement work on a board with WIP limits
  • D. Review last retro actions at the start of each retro

Best answer: A

What this tests: Delivery

Explanation: Relentless improvement comes from short, repeatable feedback loops: pick a change, try it, inspect results, and adapt. Lightweight tracking and routinely revisiting outcomes helps improvements stick and builds trust in the cycle. Requiring pre-approval introduces friction that discourages experimentation and learning.

Continuous improvement is sustained by repeating a simple cycle (e.g., identify a problem, try a small change, inspect outcomes, and adjust). In the scenario, the missing ingredient is follow-through and learning across cycles, not more ceremony. Practices like revisiting prior actions, making improvement work visible, limiting WIP, and running measurable experiments increase accountability and enable the team to learn what actually improves flow and quality. Introducing management sign-offs creates a control gate that slows feedback, reduces psychological safety to experiment, and shifts ownership away from the team—undermining a culture of relentless improvement.

Key takeaway: optimize for fast learning loops and team ownership, not approvals.

Adding approval gates reduces team autonomy and slows iterative experimentation that enables continuous improvement.


Question 10

Topic: Delivery

An agile team has received repeated stakeholder feedback during sprint reviews that delivered features are hard to validate and frequently require rework after release. The team wants to use this feedback to drive continuous process improvement.

Which action should the team NOT take?

  • A. Share the planned process change and ask stakeholders to inspect the results in the next review
  • B. Explain that the team’s process is internal and stakeholders should adapt to it
  • C. Capture the feedback and turn it into a small improvement experiment with a success measure
  • D. Invite representative stakeholders to a focused session to clarify validation needs and bottlenecks

Best answer: B

What this tests: Delivery

Explanation: Stakeholder feedback is an input to continuous improvement, especially when it points to systemic rework and validation friction. The team should treat it as actionable data, collaborate to understand the underlying need, and run small experiments to improve outcomes. Telling stakeholders to “adapt” blocks feedback loops and learning.

Continuous improvement relies on short feedback loops with the people who experience the outcomes of the team’s process. When stakeholders report recurring rework and difficulty validating, the team should use that information to identify the underlying causes (unclear acceptance criteria, late discovery, missing examples, insufficient testability) and then make a small, transparent process change to test.

Effective patterns include:

  • Clarify the validation needs with stakeholders and translate them into acceptance criteria/examples.
  • Run a timeboxed experiment (e.g., earlier review of acceptance criteria, tighter slicing, explicit validation checklist) with a measurable indicator.
  • Inspect the results with stakeholders in the next review and adapt again.

The key takeaway is to use stakeholder feedback as data for experiments, not as something to deflect or restrict.

Dismissing stakeholder feedback prevents learning and undermines continuous improvement.

Continue with full practice

Use the PMI-ACP Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the PMI-ACP guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026