Try 10 focused PMI-ACP questions on Delivery, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PMI-ACP |
| Topic area | Delivery |
| Blueprint weight | 28% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Delivery for PMI-ACP. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 28% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Delivery
An agile team is building a new customer portal for paying clients (external customers). Internal teams—Compliance and the Support Center—must approve workflows and will operate the portal.
Lately the team is missing iteration goals and rework is increasing. WIP is growing in the “Ready for approval” column even though development WIP limits are enforced. Sprint reviews include external beta users who give positive feedback, but Compliance often responds days later via email with major changes, and conflict between the team and Compliance is escalating.
What is the most likely underlying cause?
Best answer: B
What this tests: Delivery
Explanation: The clues point to late internal feedback and approvals as the driver of rework and growing queue time. Internal customers who approve and operate the solution typically need a tighter collaboration model (shared cadence, quick decision path, early review of constraints) than external customers providing market feedback. Using the same engagement approach for both groups leads to delayed discovery of nonfunctional and regulatory needs.
This is a customer engagement mismatch, not primarily a team execution problem. External customers are validating value and usability in sprint reviews, and they are satisfied—so the product discovery loop for externals is working. The bottleneck and rework are coming from internal customers (Compliance and Support) who act as approvers and operators; they need earlier, higher-frequency collaboration and clear decision/response expectations so constraints are surfaced before work is built.
A better internal-customer approach typically includes:
- A shared cadence with Compliance and Support (for example, inviting them to refinement and reviews) so constraints surface before work is built.
- A fast, explicit decision path with agreed response times for approvals, instead of days-long email threads.
- Early review of regulatory and operational requirements so nonfunctional needs shape the backlog rather than the rework queue.
Key takeaway: segment customers (internal vs. external) and design fit-for-purpose feedback loops for each.
The team is using an external-style feedback loop and not collaborating with internal approvers/operators early and continuously, causing late constraint discovery and rework.
Topic: Delivery
A product director asks your agile team to “increase velocity by 20%” and starts publishing a leaderboard comparing teams by story points completed each iteration. Within two iterations, the team begins splitting stories smaller and inflating estimates; throughput is flat, WIP is rising, and escaped defects increased.
What is the BEST next action to correct this metric anti-pattern?
Best answer: D
What this tests: Delivery
Explanation: The anti-pattern is using velocity as a performance and comparison metric, which predictably drives gaming and hides real delivery signals. The best correction is to realign stakeholders on why metrics exist and switch to a balanced set that supports forecasting and improvement—such as flow (cycle time/throughput/WIP) plus quality indicators. This preserves transparency and reduces incentives to manipulate estimates.
Velocity (story points per iteration) is a team-local planning aid, not a KPI for comparing teams or judging performance. When leaders use it as a target, teams often respond by changing the points rather than improving delivery (splitting stories to look “faster,” inflating estimates), while real outcomes degrade (higher WIP, more escaped defects, flat throughput).
Best next action is to reset the conversation about the metric’s purpose and replace/augment it with metrics that are harder to game and directly reflect delivery health:
- Flow metrics: cycle time, throughput, and WIP.
- Quality indicators: escaped defects and rework.
Keep any remaining use of velocity strictly for the team’s own forecasting and capacity discussions, not external comparison.
It removes the incentive to game points by shifting to decision-ready measures (e.g., cycle time, throughput, defects) aligned to delivery outcomes.
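The gaming dynamic above can be made concrete with a small sketch. The iteration data below is hypothetical (invented for illustration): the same four items ship each iteration, but estimates creep upward, so velocity “improves” while throughput and quality tell the real story.

```python
# Hypothetical iteration data: story points get inflated over time,
# but the count of finished items (throughput) stays flat.
iterations = [
    {"name": "It-1", "points": [3, 5, 3, 2], "escaped_defects": 1},
    {"name": "It-2", "points": [5, 8, 5, 3], "escaped_defects": 3},  # same 4 items, bigger estimates
    {"name": "It-3", "points": [8, 8, 5, 5], "escaped_defects": 4},
]

for it in iterations:
    velocity = sum(it["points"])    # easy to game: just estimate higher
    throughput = len(it["points"])  # harder to game: items actually finished
    print(f'{it["name"]}: velocity={velocity}, '
          f'throughput={throughput}, escaped_defects={it["escaped_defects"]}')
```

Velocity climbs from 13 to 26 points while throughput is flat at 4 items and escaped defects rise, which is exactly why flow and quality measures make a safer basis for stakeholder conversations than points.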
Topic: Delivery
A product team’s key customer attends reviews and sends frequent “small improvements” during the iteration. The team keeps swapping in these requests immediately, and defects have increased while planned stories regularly slip. The customer is now dissatisfied with quality but continues to push for more changes.
What is the best next step to keep collaboration focused on value and quality rather than scope churn?
Best answer: A
What this tests: Delivery
Explanation: Effective customer collaboration uses frequent feedback to improve outcomes, while protecting flow and quality. The next step is to co-create clear goals and quality expectations (e.g., acceptance criteria/Definition of Done) and agree that new requests are captured and prioritized through the backlog, not swapped in ad hoc. This keeps change responsive but intentional and value-driven.
The core issue is not “too much customer involvement,” but unmanaged feedback that bypasses prioritization and quality safeguards. A healthy next step is to collaborate with the product owner and customer to (1) restate the product/iteration goal, (2) make quality explicit through acceptance criteria and a Definition of Done, and (3) agree on a simple decision path: new ideas go to the backlog and are ordered during refinement/planning based on value and risk.
This preserves customer collaboration while reducing context switching, unplanned work, and rushed testing that drive defects. The team can still respond quickly—by re-prioritizing intentionally at the right cadence rather than continuously reshuffling in-flight work.
It preserves fast feedback while creating shared outcomes and quality expectations, and it channels new ideas into value-based prioritization instead of mid-iteration thrash.
Topic: Delivery
A product team uses a Kanban board and tracks flow metrics weekly. Over the last month, their average lead time increased from 8 days to 16 days, but average cycle time stayed roughly 6 days. During the same period, WIP on the board steadily grew, several items were reopened for rework after late clarification, and daily stand-ups often turn into arguments about what to start next.
Which underlying cause is most likely driving these signals?
Best answer: B
What this tests: Delivery
Explanation: Lead time doubling while cycle time remains stable indicates items are spending more time waiting before (or between) active work, not that execution time is worsening. The growing WIP and frequent conflict about what to start next are classic signs of pushing too much work into the system without explicit pull rules. Adding WIP limits and clear replenishment/prioritization policies reduces queues, improves focus, and lowers lead time.
Cycle time measures how long work takes once started; lead time includes both working time and waiting time. In the scenario, cycle time stays about the same while lead time grows, which strongly suggests that waiting/queue time is increasing. The additional clues—WIP steadily rising and repeated arguments about what to start—indicate the system is allowing too much work to be started and is missing explicit pull policies (including WIP limits and a clear replenishment/prioritization approach). That drives longer queues (higher lead time), context switching, and late rework when items are started before they are truly ready. The most direct improvement opportunity is to limit WIP and make pull/prioritization policies explicit so work starts only when capacity and clarity exist.
Rising lead time with stable cycle time and growing WIP points to increased waiting/queue time from over-starting and unclear pull policies.
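The lead/cycle arithmetic behind this question can be sketched directly. The work items below are hypothetical, chosen to mirror the scenario’s numbers: each item records when it was requested, started, and finished, and queue time falls out as the difference.

```python
from datetime import date

# Hypothetical work items with the three timestamps that separate
# waiting time from working time.
items = [
    {"created": date(2024, 5, 1), "started": date(2024, 5, 11), "done": date(2024, 5, 17)},
    {"created": date(2024, 5, 2), "started": date(2024, 5, 12), "done": date(2024, 5, 18)},
]

def avg(xs):
    return sum(xs) / len(xs)

lead_times = [(i["done"] - i["created"]).days for i in items]   # request -> delivered
cycle_times = [(i["done"] - i["started"]).days for i in items]  # started -> delivered
queue_times = [l - c for l, c in zip(lead_times, cycle_times)]  # time spent waiting

print(f"avg lead: {avg(lead_times)}d, avg cycle: {avg(cycle_times)}d, "
      f"avg queue: {avg(queue_times)}d")
```

With a 16-day lead time and a 6-day cycle time, 10 of the 16 days are pure queue time, which is the waiting that WIP limits and explicit pull policies attack.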
Topic: Delivery
A team releases a thin slice of a new “quick reorder” feature to 30 pilot customers to test the hypothesis that it will reduce time-to-reorder. After two weeks, adoption is low and interviews show customers mostly want “reorder from past invoices” instead. The next iteration is planned to add more UI polish to quick reorder.
What is the best agile practice to apply next?
Best answer: D
What this tests: Delivery
Explanation: The pilot data and interviews invalidate the current value hypothesis, so the team should decide whether to pivot or persevere based on validated learning. The next step is to translate that decision into an updated backlog order and scope so investment follows customer value signals rather than the original plan.
Early feedback is most useful when it is compared to a clear hypothesis about customer value and then used to make an explicit decision: continue, adjust priorities, or pivot. Here, low adoption plus consistent interview signals indicate the current slice is not solving the primary customer problem, so continuing with “more of the same” work (UI polish) risks waste. A short pivot/persevere conversation with the product owner and key stakeholders should convert the learning into concrete delivery choices (drop, re-scope, or reorder backlog items) and update the near-term plan accordingly. The key takeaway is to let outcome-focused evidence change priorities before adding more scope to a weak signal.
Validated learning from early feedback should drive a pivot or re-scope and an updated backlog order before investing further.
Topic: Delivery
A Kanban team’s average cycle time has increased over the last month. In the retrospective, they identify waste from multitasking and long waits for code reviews. They quickly estimate impact vs. effort for several improvement ideas.
Exhibit: Improvement options (team estimate)
1) Add WIP limits + explicit pull policy for reviews: Impact HIGH, Effort LOW
2) Build CI/CD pipeline + automated regression suite: Impact HIGH, Effort HIGH
3) Add 2 more developers to the team: Impact MED, Effort MED
4) Timebox daily standup to 10 minutes: Impact LOW, Effort LOW
Which action should the team do first?
Best answer: C
What this tests: Delivery
Explanation: When prioritizing waste reduction, start with changes that deliver high impact for low effort so you can learn fast and improve flow quickly. A WIP limit and explicit pull policy directly targets the team’s stated wastes (multitasking and review waiting) and can be implemented immediately as a short experiment. This preserves inspection and adaptation while minimizing upfront cost.
A practical way to eliminate waste is to prioritize improvements by impact and effort, then run a small experiment on the best candidate. In this scenario, the largest waste signals are waiting and multitasking, so the first move should be a low-effort change that improves flow through the constrained step (code reviews).
A good next-step sequence is:
1. Implement WIP limits and an explicit pull policy for code reviews as a short, timeboxed experiment.
2. Measure the effect on cycle time and review wait time over the next few weeks.
3. Use that data to decide whether larger investments, such as the CI/CD pipeline, are still warranted.
Starting with a large, high-effort initiative delays learning, while lower-impact tweaks won’t materially reduce the identified waste.
It is the highest-impact, lowest-effort change and can be tested quickly to reduce waiting and multitasking.
Topic: Delivery
An agile team completes a waste walk and estimates improvement ideas using an impact/effort view.
Which action should the team try first?
Exhibit: Improvement ideas (team estimates)
Add WIP limits and an explicit policy for handling interrupts: High impact / Low effort
Build a full CI pipeline with automated regression tests: High impact / High effort
Run a half-day workshop to standardize story writing: Medium impact / Medium effort
Redesign the service architecture to reduce dependencies: Medium impact / Very high effort
A. Build a full CI pipeline with automated regression tests
B. Add WIP limits and an explicit policy for handling interrupts
C. Redesign the service architecture to reduce dependencies
D. Run a half-day workshop to standardize story writing
Best answer: B
What this tests: Delivery
Explanation: When choosing waste-reduction actions using impact and effort, start with quick wins: changes that deliver high impact for low effort. This increases learning and throughput sooner, while keeping the cost of experimentation small.
Using an impact/effort approach for continuous improvement, teams typically start with the highest-value, lowest-cost experiments first. A high-impact/low-effort improvement reduces waste quickly, builds momentum, and creates fast feedback on whether the change actually improves flow.
A simple decision rule is:
- High impact / low effort: do first (quick wins).
- High impact / high effort: plan as a larger initiative once quick wins are validated.
- Medium impact / medium effort: schedule when capacity allows.
- Anything with very high effort or modest impact: defer unless new evidence changes the estimate.
In the exhibit, adding WIP limits plus an explicit interrupts policy is the only option that is both high impact and low effort, making it the best first action.
It is a clear “quick win” (high impact, low effort), maximizing waste reduction quickly with minimal investment.
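The quick-win rule above amounts to a simple sort: highest impact first, lowest effort breaking ties. The scoring scale and idea labels below are illustrative assumptions, not part of the exam content.

```python
# Hypothetical impact/effort scoring: rank ideas so quick wins
# (high impact, low effort) come first.
IMPACT = {"low": 1, "medium": 2, "high": 3}
EFFORT = {"low": 1, "medium": 2, "high": 3, "very high": 4}

ideas = [
    ("Build full CI pipeline", "high", "high"),
    ("Add WIP limits + interrupt policy", "high", "low"),
    ("Standardize story writing", "medium", "medium"),
    ("Redesign service architecture", "medium", "very high"),
]

# Sort by impact descending, then effort ascending.
ranked = sorted(ideas, key=lambda i: (-IMPACT[i[1]], EFFORT[i[2]]))
print("Try first:", ranked[0][0])
```

The WIP-limit idea sorts to the top because it matches the CI pipeline on impact but costs far less to try, which is the whole logic of starting with quick wins.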
Topic: Delivery
A product team consistently fills each iteration with feature work. Retrospective action items (automation, refactoring, and workflow tweaks) are repeatedly pushed out, and defects are starting to rise. The product owner wants delivery to continue, but the team also needs time to improve.
Which practice best matches an agile approach to balancing delivery and improvement while protecting time for both?
Best answer: B
What this tests: Delivery
Explanation: A sustainable way to improve is to make improvement work visible and allocate capacity for it every iteration. By treating improvement items like any other backlog work, the team can protect time for continuous improvement while still delivering features. This reduces the pattern of endlessly deferring improvements and accumulating quality and flow problems.
The core idea is to balance delivery and improvement by explicitly budgeting for improvement work as part of normal delivery, not as an afterthought. When teams fill 100% of capacity with feature work, improvement items (retrospective actions, automation, refactoring) get deprioritized, creating a reinforcing cycle of more defects, more interruptions, and less time to improve.
A practical way to protect time for both is to:
- Make retrospective actions, automation, and refactoring visible as backlog items.
- Allocate a protected slice of capacity for improvement work every iteration, agreed with the product owner.
- Check in each retrospective whether the reserved capacity was actually used and what it improved.
The key takeaway is that continuous improvement is most effective when it is integrated into the team’s regular cadence, not postponed to a future “cleanup” phase.
Allocating and protecting a slice of capacity for improvement each iteration balances flow and enables continuous improvement without stopping delivery.
Topic: Delivery
A product team completes a retrospective every two weeks, but the same issues keep resurfacing and few changes stick. You are asked to help the team build a culture of relentless improvement through repeated improvement cycles.
Which action should you NOT take?
Best answer: A
What this tests: Delivery
Explanation: Relentless improvement comes from short, repeatable feedback loops: pick a change, try it, inspect results, and adapt. Lightweight tracking and routinely revisiting outcomes helps improvements stick and builds trust in the cycle. Requiring pre-approval introduces friction that discourages experimentation and learning.
Continuous improvement is sustained by repeating a simple cycle (e.g., identify a problem, try a small change, inspect outcomes, and adjust). In the scenario, the missing ingredient is follow-through and learning across cycles, not more ceremony. Practices like revisiting prior actions, making improvement work visible, limiting WIP, and running measurable experiments increase accountability and enable the team to learn what actually improves flow and quality. Introducing management sign-offs creates a control gate that slows feedback, reduces psychological safety to experiment, and shifts ownership away from the team—undermining a culture of relentless improvement.
Key takeaway: optimize for fast learning loops and team ownership, not approvals.
Adding approval gates reduces team autonomy and slows iterative experimentation that enables continuous improvement.
Topic: Delivery
An agile team has received repeated stakeholder feedback during sprint reviews that delivered features are hard to validate and frequently require rework after release. The team wants to use this feedback to drive continuous process improvement.
Which action should the team NOT take?
Best answer: B
What this tests: Delivery
Explanation: Stakeholder feedback is an input to continuous improvement, especially when it points to systemic rework and validation friction. The team should treat it as actionable data, collaborate to understand the underlying need, and run small experiments to improve outcomes. Telling stakeholders to “adapt” blocks feedback loops and learning.
Continuous improvement relies on short feedback loops with the people who experience the outcomes of the team’s process. When stakeholders report recurring rework and difficulty validating, the team should use that information to identify the underlying causes (unclear acceptance criteria, late discovery, missing examples, insufficient testability) and then make a small, transparent process change to test.
Effective patterns include:
- Collaborating with stakeholders to clarify acceptance criteria and add concrete examples.
- Validating earlier and more often, such as through mid-iteration demos or testable slices, instead of waiting for release.
- Running a small, measurable process experiment and reviewing the results with the stakeholders who raised the issue.
The key takeaway is to use stakeholder feedback as data for experiments, not as something to deflect or restrict.
Dismissing stakeholder feedback prevents learning and undermines continuous improvement.
Use the PMI-ACP Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PMI-ACP guide on PMExams.com, then return to PM Mastery for timed practice.