Browse Certification Practice Tests by Exam Family

PMI-ACP: Product

Try 10 focused PMI-ACP questions on Product, with answers and explanations, then continue with PM Mastery.

On this page

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: PMI-ACP
  • Topic area: Product
  • Blueprint weight: 19%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Product for PMI-ACP. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 19% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Product

A product team has spent three iterations building an “auto-recommendations” feature for an e-commerce site. An MVP released to 5% of users was expected to lift conversion by 2%, but telemetry shows a 0.1% change and support tickets increased. The product owner suspects the problem is actually slow checkout performance and wants to redirect the team.

Which practice best supports deciding whether to stop investing in this low-value work and redirect effort?

  • A. Hold a team retrospective focused on improving the development process
  • B. Re-estimate the remaining stories to improve forecast accuracy
  • C. Increase WIP limits to finish the feature sooner and re-evaluate later
  • D. Run a pivot-or-persevere review using pre-defined success metrics and kill criteria

Best answer: D

What this tests: Product

Explanation: This situation calls for an explicit stop-or-continue decision based on validated learning, not more delivery effort. Using pre-agreed outcome metrics and kill criteria (often framed as pivot-or-persevere) makes it clear when the investment is not producing sufficient value. That allows redirecting capacity to higher-value work, such as checkout performance.

To decide when to stop investing in low-value work, use hypothesis-driven delivery: define the desired outcome, measure it with real user/operational data, and make a pivot-or-persevere decision against an agreed threshold. Here, the MVP’s expected conversion lift is not materializing and negative signals (support tickets) are rising, indicating poor value and/or unintended consequences. A pivot-or-persevere review with explicit kill criteria helps the product owner avoid sunk-cost bias, stop further build-out, and redirect the team to the more likely value driver (checkout performance) while preserving learning captured from the experiment. The key is basing the decision on outcomes and evidence rather than how much has already been built.

A timeboxed experiment with explicit success thresholds enables an evidence-based stop/pivot decision and avoids sunk-cost continuation.
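The stop-or-continue logic above can be sketched as a simple threshold check. This is an illustrative sketch only: the function name, thresholds, and telemetry values are hypothetical, loosely based on the scenario (an expected +2% conversion lift against an observed +0.1% change and rising support tickets).

```python
# Hypothetical pivot-or-persevere check against pre-agreed kill criteria.
# All names and numbers are illustrative, not from any real product.

def pivot_or_persevere(observed_lift, min_lift, ticket_delta, max_ticket_growth):
    """Return 'persevere' only if the pre-agreed success metrics are met."""
    if observed_lift >= min_lift and ticket_delta <= max_ticket_growth:
        return "persevere"
    return "pivot"  # stop investing; redirect capacity to higher-value work

decision = pivot_or_persevere(
    observed_lift=0.001,    # +0.1% conversion change from telemetry
    min_lift=0.02,          # kill criterion: at least +2% lift expected
    ticket_delta=0.15,      # support tickets up 15%
    max_ticket_growth=0.0,  # kill criterion: no increase in tickets
)
print(decision)  # -> pivot
```

Encoding the criteria before the experiment runs is the point: the decision is mechanical once the data arrives, which is what keeps sunk-cost bias out of the conversation.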


Question 2

Topic: Product

A team is building an employee self-service portal. After three iterations, each increment has been deployed, but analytics show <5% adoption and HR call volume has not decreased (the agreed outcome).

Constraints:

  • Enrollment opens in 2 weeks, so HR needs a usable flow more than additional features.
  • The team’s working agreement is to validate value with end users every iteration, but the last two reviews were skipped.
  • The team’s Kanban policy sets a WIP limit of 4, yet 9 items are currently in progress.

What is the BEST next action?

  • A. Relax the definition of done to deliver more features before enrollment
  • B. Ask HR to finalize detailed requirements and lock scope through enrollment
  • C. Start the remaining enrollment items in parallel to hit the date
  • D. Run a quick end-user usability test on the current increment and re-prioritize the backlog based on findings

Best answer: D

What this tests: Product

Explanation: The increment is being delivered but not creating the intended outcomes, which is a clear signal that it is not delivering value. The best next step is to re-establish a fast feedback loop with real end users, learn what is blocking adoption, and then adapt backlog priorities to maximize value within the enrollment deadline.

When increments are “done” but outcomes (adoption, reduced calls, improved task completion) do not move, the problem is usually value discovery/fit rather than delivery volume. The fastest corrective action is to validate the current increment with the target users, identify what prevents them from succeeding, and then adjust the backlog (and acceptance criteria) to optimize for the outcome that stakeholders care about. This also supports flow: by learning what to change next, the team can stop starting low-confidence work and focus on finishing the highest-value improvements within WIP limits. The key takeaway is to use evidence-based feedback to adapt the next increment, not to push more scope or weaken quality controls.

Because the increment is not producing the agreed outcomes, immediate user validation guides backlog changes toward value before starting more work.
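The "stop starting, start finishing" point can be made concrete with a minimal pull-policy check, assuming the scenario's numbers (a WIP limit of 4 with 9 items in progress). The function name is illustrative.

```python
# Minimal sketch of a Kanban pull policy: a new item may be started only
# when in-progress work is below the agreed WIP limit. Names are illustrative.

def can_start_new_item(in_progress: int, wip_limit: int) -> bool:
    """True only when there is WIP capacity to pull a new item."""
    return in_progress < wip_limit

# Scenario values: 9 items in progress against a WIP limit of 4.
print(can_start_new_item(in_progress=9, wip_limit=4))  # -> False: finish first
print(can_start_new_item(in_progress=3, wip_limit=4))  # -> True
```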


Question 3

Topic: Product

An agile team has released two increments of a self-service analytics portal. The product goal targets 55% user activation and improved satisfaction, but activation is stuck at 30% and customer feedback says “setup is confusing.” Stakeholders are requesting several new dashboard features before a quarterly review in 3 weeks. The team is already at its WIP limit and has a working agreement not to add new scope mid-iteration unless it addresses the product goal.

As the product owner, what is the BEST next action?

  • A. Add the requested dashboard features now to protect stakeholder confidence
  • B. Review adoption data and feedback, then re-prioritize to an activation-focused experiment
  • C. Dedicate the next iteration only to refactoring and defect reduction before changing priorities
  • D. Increase the WIP limit so the team can deliver activation fixes and new features together

Best answer: B

What this tests: Product

Explanation: When results like adoption and satisfaction are off track, the next step is to inspect real usage and feedback and adapt the backlog toward the outcome. An activation-focused experiment keeps change small, measurable, and aligned to the product goal while respecting WIP and team agreements. This also creates a clear narrative for the upcoming review based on learning and targeted impact.

Outcome-based product management prioritizes targeted results (e.g., activation, satisfaction) over output (more features). Here, the key signal is the activation gap and specific feedback about confusing setup, so the best next action is to inspect the data and customer input, form a measurable hypothesis, and re-order backlog work toward a small experiment that can validate improvement quickly.

Practical next steps:

  • Identify where users drop off (activation funnel step) and the top friction points
  • Define a measurable hypothesis (e.g., “simplifying setup increases activation from 30% to X%”)
  • Re-prioritize to the smallest change that tests the hypothesis within WIP limits
  • Use the next review to share learning, outcomes, and the updated plan

Adding more features or expanding WIP increases risk and delays learning; the goal is to restore alignment to adoption and satisfaction as soon as possible.

It directly inspects the outcome gap and adapts the backlog to the smallest testable changes aimed at improving activation and satisfaction.


Question 4

Topic: Product

Which agile term best describes a short, shared objective for an iteration that aligns the team’s planned increment with current business priorities and customer needs?

  • A. Iteration (Sprint) goal
  • B. Definition of Done
  • C. Acceptance criteria
  • D. WIP limit

Best answer: A

What this tests: Product

Explanation: An iteration (Sprint) goal is the concise, shared outcome the team aims to achieve in the iteration, expressed in business/customer terms. It helps ensure the increment being built is coherent and aligned to the most important current priorities, rather than just a collection of unrelated backlog items.

The core concept is an iteration (Sprint) goal: a short statement of the value/outcome the iteration’s increment should deliver. By framing planned work around a single objective tied to business priorities and customer needs, the team and product representative can select, trade off, and negotiate scope while still protecting the intended outcome. This goal improves alignment and decision-making during the iteration (for example, when new information emerges) because progress is measured against achieving the goal, not merely completing tasks. In contrast, other terms focus on quality criteria, item-level validation, or flow control rather than setting a value-based objective for the increment.

It provides a single, value-focused objective that guides which backlog items are selected to form the increment.


Question 5

Topic: Product

A cross-functional team is 5 days into a 2-week iteration. Several high-value stories depend on a security team completing a threat-model review before the stories can meet the Definition of Done. The security team says their earliest review slot is in 10 business days, which threatens delivering a usable increment.

Which action should the agile practitioner NOT take?

  • A. Mandate weekend overtime to keep all dependent stories in-scope
  • B. Reorder work to deliver independent stories first
  • C. Collaborate with security to secure an earlier review slot
  • D. Use a stub and feature toggle for a thin vertical slice

Best answer: A

What this tests: Product

Explanation: When a dependency threatens an increment, the goal is to mitigate risk through collaboration, replanning, and incremental delivery while still meeting quality criteria. Reordering work, enabling integration through stubs/feature toggles, and actively coordinating with the dependency owner all reduce schedule risk and improve transparency. Forcing overtime to keep all dependent scope is an anti-pattern that masks the dependency and increases quality risk.

A delivery-threatening dependency is best handled by making it visible and choosing mitigations that preserve a potentially shippable increment (or at least a usable, testable increment) without compromising quality. Practical mitigations include replanning to pull forward independent work, using safe techniques such as stubs/mocks and feature toggles to integrate and validate incrementally, and collaborating with the dependency-owning group to negotiate timing, entry criteria, or a smaller reviewable slice. In contrast, “fixing” the problem by pushing the team into overtime keeps the risky scope in place and often creates downstream defects and rework, while the external dependency still remains unresolved.

Key takeaway: mitigate dependencies through transparency, collaboration, and slicing—not heroics.

Using overtime to “force” the plan preserves scope over flow and learning, and doesn’t mitigate the dependency risk.


Question 6

Topic: Product

At the end of an iteration, the team says the increment is ready to ship. All planned stories are coded, but two automated regression tests are failing and the security scan required by the team’s definition of done (DoD) has not been run yet. The product owner has not reviewed the increment against acceptance criteria.

Which agile practice/principle best determines whether the increment is potentially shippable?

  • A. Use iteration burndown completion to justify shipping
  • B. Ship now and log defects to maximize early feedback
  • C. Verify DoD is met and obtain product owner acceptance
  • D. Defer remaining quality work to a hardening iteration

Best answer: C

What this tests: Product

Explanation: “Potentially shippable” is a quality-and-acceptance decision, not a schedule or progress report. If the DoD is not fully met (for example, failing regression tests or skipped required scans), the increment is not in a releasable state. Acceptance is then confirmed by the product owner reviewing the increment against acceptance criteria.

The core concept is that a potentially shippable (potentially releasable) increment exists only when it meets the team’s shared quality bar (definition of done) and the product owner can accept the completed work against agreed acceptance criteria. In the scenario, failing regression tests and an unperformed required security scan mean the DoD is not satisfied, so the increment should not be treated as shippable yet.

A practical check is:

  • Confirm all DoD items are complete for the increment (build, tests, scans, documentation as agreed).
  • Validate each story meets its acceptance criteria.
  • Obtain product owner acceptance based on the demonstrated working product.

Progress indicators (like a burndown) can’t override unmet quality and acceptance conditions.

An increment is potentially shippable only when it meets the DoD and is accepted against agreed acceptance criteria.
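The practical check above can be expressed as a release gate: every DoD item must pass AND the product owner must accept. This is a sketch under assumed DoD items drawn from the scenario; the function and dictionary keys are hypothetical.

```python
# Sketch of a "potentially shippable" gate: all Definition of Done items
# must pass and the product owner must accept against acceptance criteria.
# DoD items below are illustrative, taken from the scenario.

def potentially_shippable(dod_results: dict, po_accepted: bool) -> bool:
    """Shippable only if every DoD check passes and the PO has accepted."""
    return all(dod_results.values()) and po_accepted

increment = {
    "all_stories_coded": True,
    "regression_tests_pass": False,  # two automated tests are failing
    "security_scan_run": False,      # required scan not yet executed
}
print(potentially_shippable(increment, po_accepted=False))  # -> False
```

Note that progress measures (burndown, story count) never appear in the gate: they measure activity, not releasability.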


Question 7

Topic: Product

An agile product team has missed the last three iteration goals. Their board shows more items started than finished, and several stories are reopened after review due to misunderstandings about what “done” means. During iteration planning, developers argue about whether items are “small” or “big,” but the team does not use story points or t-shirt sizes; the tech lead provides time estimates and the team commits based on those.

Which underlying cause is most likely driving these symptoms?

  • A. Stakeholders are changing priorities mid-iteration and causing scope churn
  • B. The team has not set strict WIP limits on the board
  • C. The team is not sizing work collaboratively using relative estimation and shared reference stories
  • D. The definition of done is missing and must be documented before planning

Best answer: C

What this tests: Product

Explanation: The clues point to a lack of shared understanding and inconsistent sizing during planning, combined with commitments based on one person’s time estimates. Collaborative relative estimation (story points or t-shirt sizes) builds shared context, exposes uncertainty, and improves forecasting. Without it, teams often start too much work, miss goals, and experience more rework and conflict.

Relative estimation is a collaborative technique (e.g., story points, t-shirt sizing) that helps the whole team form a shared understanding of work size and complexity using reference items. In the scenario, planning conflict (“small vs. big”), commitments based on a tech lead’s time estimates, growing WIP, and rework from misunderstandings all indicate the team is committing without a shared sizing baseline and common assumptions.

A practical fix is to collaboratively size refined backlog items using relative units and a few reference stories, then use that information to make realistic iteration commitments and prompt earlier conversations when an item is too large or unclear. The core issue is not the board mechanics or external churn; it’s the lack of collaborative relative sizing that aligns expectations and reduces hidden complexity.

Relying on a single person’s time estimates instead of collaborative relative sizing creates poor shared understanding, leading to overcommitment, churn, and rework.


Question 8

Topic: Product

Midway through a 2-week iteration, an agile team discovers their highest-priority story depends on an external identity service owned by another team, and that service will not be available until after the iteration ends. The product owner still wants the team to deliver a usable increment by the end of the iteration.

Which agile technique best mitigates this dependency while protecting the ability to deliver an increment?

  • A. Escalate to management to force the other team to expedite the service
  • B. Increase work in progress so the team stays fully utilized while waiting
  • C. Add a time buffer and re-baseline the iteration plan to account for the delay
  • D. Split the story into a thin vertical slice and use a stub/mock for the identity service

Best answer: D

What this tests: Product

Explanation: When a dependency threatens an iteration goal, the most effective mitigation is to reduce or remove the dependency for the current increment. Splitting the work into a thin vertical slice and using a stub/mock lets the team validate the workflow and deliver usable value now, then integrate with the real service when it becomes available.

A core agile approach to managing threatening dependencies is to preserve flow of value by decoupling work for the near term. If an external component will not be ready in time, collaborate with the product owner to slice the backlog item into a thin, end-to-end increment that can run independently, and use a stub/mock (or similar test double) to simulate the dependent service. This enables working software, early feedback on the user journey, and keeps integration as a focused follow-up once the real dependency is available. The key is delivering a coherent increment of value rather than waiting idle or pushing schedule risk downstream.

Story slicing plus service stubbing decouples delivery from the external team so a potentially shippable increment can still be produced.
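The stub-plus-toggle technique can be sketched as follows. All class names, the toggle flag, and the authentication behavior are hypothetical; the point is only that the thin slice runs end to end against a test double until the real service lands.

```python
# Sketch of decoupling a thin vertical slice from an unavailable external
# dependency using a test double and a feature toggle. Names are illustrative.

class StubIdentityService:
    """Stands in for the external identity service until it is delivered."""
    def authenticate(self, username: str, token: str) -> bool:
        # Deterministic fake: accept any non-empty credentials.
        return bool(username and token)

class RealIdentityService:
    """Placeholder for the real integration, wired in later."""
    def authenticate(self, username: str, token: str) -> bool:
        raise NotImplementedError("External service not yet available")

USE_REAL_IDENTITY = False  # feature toggle, flipped once the dependency ships

def get_identity_service():
    return RealIdentityService() if USE_REAL_IDENTITY else StubIdentityService()

service = get_identity_service()
print(service.authenticate("ada", "t0ken"))  # -> True, via the stub
```

Flipping the toggle later swaps in the real service with no change to calling code, which keeps the follow-up integration small and focused.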


Question 9

Topic: Product

Which term best describes forecasting delivery by using historical throughput or cycle time to generate a range of likely outcomes (for example, a 50th/85th percentile date) rather than a single “exact” date?

  • A. Sprint burndown chart
  • B. Monte Carlo forecasting
  • C. Gantt chart schedule baseline
  • D. Deterministic velocity projection

Best answer: B

What this tests: Product

Explanation: Monte Carlo forecasting uses historical throughput or cycle time to simulate many plausible futures and then visualizes the results as probabilities or percentile ranges. This supports forecasting while avoiding false precision because it communicates uncertainty explicitly. It is especially useful when work item sizes and flow vary.

To forecast without false precision, agile teams prefer probabilistic forecasting based on observed flow (throughput and cycle time) rather than a single “most likely” date. Monte Carlo forecasting takes historical data, runs many randomized simulations (sampling observed throughput or cycle times), and summarizes the outcomes as a distribution. The resulting visualization typically communicates likelihoods (for example, “85% chance by this date” or “most outcomes fall between these dates”), which matches real delivery uncertainty and enables better risk-based decisions than a single-point commitment.

It uses many simulated outcomes from historical data to produce probabilistic ranges instead of a single precise prediction.
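The simulation described above fits in a few lines. This is a minimal sketch: the historical throughput values and backlog size are invented for illustration, and real tools would sample more carefully (e.g., weighting recent data).

```python
# Sketch of Monte Carlo throughput forecasting: repeatedly sample observed
# weekly throughput until the backlog is exhausted, then report percentile
# completion times. Historical data below is invented for illustration.
import random

random.seed(42)                    # reproducible for the example
history = [3, 5, 2, 4, 6, 3, 4]    # items finished per week (observed)
backlog = 30                       # remaining items to forecast
simulations = 10_000

weeks_needed = []
for _ in range(simulations):
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)  # sample one plausible week
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
p50 = weeks_needed[int(0.50 * simulations)]  # 50th percentile finish
p85 = weeks_needed[int(0.85 * simulations)]  # 85th percentile finish
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```

The output is a range with attached likelihoods rather than a single date, which is exactly the "avoid false precision" property the question is testing.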


Question 10

Topic: Product

An agile team releases an increment that redesigns the password reset experience for a SaaS product. The product goal for the increment is to reduce customer effort and lower support demand for password resets. Stakeholders ask for a single metric to confirm whether this increment delivered value over the next month.

Which measure best optimizes for outcome-based value delivery?

  • A. Password-reset related support tickets per 1,000 active users
  • B. Story points completed versus the iteration plan
  • C. Average cycle time from “in progress” to “done” for password reset stories
  • D. Defect density found in QA for the password reset module

Best answer: A

What this tests: Product

Explanation: Outcome-oriented measures show whether an increment changed real user behavior or business results, not just how much was built. Support tickets for password resets are a direct proxy for customer effort and demand on support, and normalizing by active users makes the trend comparable across time. This most clearly indicates value delivery for this specific product goal.

To measure value delivery, choose an outcome measure that reflects the product goal and can be observed after release (behavior, demand, revenue, risk reduction, satisfaction). In this scenario, the intended value is reduced customer effort and fewer support interactions for password resets, so the most meaningful indicator is whether password-reset support demand actually dropped.

Normalizing by active users (tickets per 1,000 active users) avoids false conclusions caused by changes in overall usage or seasonality, making it a better outcome metric for comparing before/after the increment. Delivery and quality metrics can still be useful, but they don’t, by themselves, demonstrate that customers experienced the intended benefit.

Key takeaway: measure the impact of the increment on the user/business outcome it targeted.

It directly measures the customer/business outcome the increment was intended to improve, normalized for usage changes.
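The normalization argument is easy to see with numbers. The ticket and user counts below are invented for illustration; the point is that the per-1,000-users rate stays comparable even when overall usage grows between measurements.

```python
# Sketch of the normalized outcome metric: password-reset support tickets
# per 1,000 active users, before vs. after the increment. Counts are invented.

def tickets_per_1000(tickets: int, active_users: int) -> float:
    """Normalize ticket volume by the active user base."""
    return tickets * 1000 / active_users

before = tickets_per_1000(tickets=480, active_users=40_000)  # -> 12.0
after = tickets_per_1000(tickets=300, active_users=50_000)   # -> 6.0
print(f"before={before:.1f}, after={after:.1f} per 1,000 active users")
```

Raw tickets fell from 480 to 300, but the user base also grew; the normalized rate (12.0 to 6.0 per 1,000) isolates the increment's effect from usage changes.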

Continue with full practice

Use the PMI-ACP Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the PMI-ACP guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026