Try 10 focused PMI-PBA questions on Evaluation, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PMI-PBA |
| Topic area | Evaluation |
| Blueprint weight | 10% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Evaluation for PMI-PBA. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 10% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Evaluation
During user acceptance testing for a sales reporting portal, the regional director says the solution is unacceptable because it does not show a monthly trend chart. The test report shows all tests passed against the approved acceptance criteria for requirement R-18, which specifies only current-month totals. The signed requirements baseline version 3.1 and traceability artifact show no requirement for trend charts. What should the business analyst recommend next?
Best answer: B
What this tests: Evaluation
Explanation: The solution must be validated against approved, traceable acceptance criteria, not against a new expectation that is absent from the baseline. When test evidence and stakeholder expectations conflict this way, the next step is controlled change with impact analysis, while preserving the original test result and baseline version.
The core concept is to separate a true solution defect from a new or changed requirement. Here, the test evidence shows the solution satisfies the approved acceptance criteria for R-18, and the traceability record confirms that a monthly trend chart was never part of the signed baseline. That means the conflict is not a test failure against the current requirement set; it is a newly surfaced stakeholder expectation.
Reopening testing or editing the baseline immediately would conflate evaluation results with scope change.
Because the solution passed the baselined acceptance criteria and the requested chart is not traceable to an approved requirement, it should be handled through controlled change.
Topic: Evaluation
After UAT for a customer self-service portal, the BA drafts this deployment sign-off note:
Decision: Approved
Comments: Minor issues remain. Compliance wants one report label corrected.
Operations will approve if failed-email alerts are fixed within 30 days after go-live.
Product owner supports Friday release.
Next step: Proceed to deployment.
What is the most important improvement to this note?
Best answer: A
What this tests: Evaluation
Explanation: The main flaw is that the note says “Approved” even though at least one stakeholder has given only conditional approval and another has raised an exception. For deployment sign-off, the BA must document the precise decision, any conditions or exceptions, and how those items will be handled.
When obtaining sign-off to proceed with deployment, the decision record must clearly show whether approval is unconditional, conditional, rejected, or approved with exceptions. In this note, “Decision: Approved” is misleading because Operations has not fully approved; it has approved only if a specific issue is resolved within a stated period, and Compliance has identified an unresolved item that may need to be treated as an exception.
A complete sign-off record should capture:

- The actual decision status: unconditional approval, conditional approval, rejection, or approval with exceptions.
- Each condition, such as the 30-day window for fixing failed-email alerts, with its resolution terms.
- Each exception, such as the compliance report-label correction, and how it will be tracked to closure.

Supporting details like dates, evidence, and benefits are useful, but they do not fix the core ambiguity about whether deployment is truly authorized and under what terms.
The note incorrectly collapses conditional and exception-based decisions into a simple approval, so it must explicitly record the actual approval status and terms.
Topic: Evaluation
A business analyst is preparing for a go-live decision meeting on a new claims system. Quality assurance found 18 issues; most are minor, but 3 prevent required audit-trail retention and conflict with approved requirements. The sponsor, compliance manager, and operations lead need a clear basis to decide whether to proceed, defer scope, or delay release. Which summary format is most appropriate?
Best answer: C
What this tests: Evaluation
Explanation: Stakeholders making a go-live decision need a concise summary of gaps and deltas, not raw QA detail. The best format ties each significant finding to the approved requirement, shows business impact and severity, and makes the decision points visible.
The core concept is to communicate QA findings in a form that supports stakeholder action. In this case, decision makers must resolve discrepancies between approved requirements and the developed solution, so the summary should emphasize the affected requirement, the nature of the gap, the business or compliance impact, the severity, and the disposition options. That is more useful than technical defect detail because the meeting is about release and scope decisions, not day-to-day issue fixing.
A good summary for this situation would let stakeholders quickly see:

- Which approved requirement each significant finding affects.
- The nature of the gap between the requirement and the delivered behavior.
- The business or compliance impact and the severity of each gap.
- The disposition options: proceed, defer scope, or delay release.

Detailed logs and evidence may support analysis, but they do not frame the decision as clearly as a gap-and-delta summary tied to requirements.
This format translates QA findings into requirement-based business gaps and the decision each stakeholder must make.
Topic: Evaluation
A business analyst is validating whether a new discount-approval feature satisfies this acceptance criterion: every rejected discount request must display the rejection reason in the customer record, and complete routing must finish within 5 minutes. UAT ran 12 scenarios; 11 passed, but one rejection scenario failed because the reason was blank. A walkthrough confirmed the routing design, and the related defect remains open. The sponsor asks whether sign-off can proceed. Which conclusion is INCORRECT?
Best answer: D
What this tests: Evaluation
Explanation: Test evidence is sufficient only when it shows the requirement met its stated acceptance criteria. Here, one UAT failure and an open defect directly map to the requirement, so the evidence does not support sign-off even though most scenarios passed and the walkthrough looked sound.
The core concept is validating solution evidence against the full acceptance criteria, not against a majority pass rate. In this scenario, the requirement says every rejected request must display a rejection reason. Because one rejection scenario failed and the related defect is still open, the available evidence shows the requirement is not yet satisfied.
A walkthrough can strengthen understanding of intended behavior, but it is not a substitute for execution evidence when UAT has already shown a gap. A sound evaluation approach is to trace the failed test and defect to the requirement, fix the issue, and run targeted retesting before recommending sign-off.
The key takeaway is that one unresolved failure against an explicit criterion outweighs a generally strong test summary.
A high overall pass rate does not prove satisfaction when an open defect still violates an explicit acceptance criterion.
Topic: Evaluation
Three months after a new customer self-service portal went live, a business analyst submitted a 20-page evaluation report to the sponsor and governance board. The report lists defects closed, training attendance, and survey comments, but each review meeting ends with requests for more analysis before approving the next rollout. Board members say they still cannot tell whether the portal justified the investment or what should be changed for future releases. What is the most likely underlying BA gap?
Best answer: B
What this tests: Evaluation
Explanation: The issue is not lack of activity data; it is lack of decision-ready communication. For sponsors and governance bodies, valuation results must show whether expected benefits were achieved, where gaps remain, what was learned, and what action is recommended.
In solution evaluation, the BA should communicate valuation findings in a form that helps sponsors and governance bodies make decisions. Here, the report contains operational facts, but it does not connect those facts to the business case or value proposition. That is why approval churn continues: decision makers still cannot see whether benefits were realized, why performance differs from expectations, or what should happen next.
More raw data would not solve this problem if the results are still not framed around realized value and future decisions.
Sponsors and governance bodies need decision-ready valuation results tied to expected benefits, variances, lessons learned, and next actions.
Topic: Evaluation
A retailer deployed a self-service returns portal. The business case said the portal would reduce average refund cycle time from 8 days to 3 days and cut customer calls about return status by 30% within 90 days. Three months after launch, the sponsor asks the business analyst to determine whether the solution delivered the original value proposition. Which evidence would best support that evaluation?
Best answer: D
What this tests: Evaluation
Explanation: The best evidence is a benefits realization view that compares actual post-implementation results with the original baseline and target measures. That directly tests whether the deployed solution achieved the promised business value, not just whether it was built, accepted, or used.
This is a post-implementation value evaluation question. When the goal is to compare observed business outcomes with the original value proposition, the strongest evidence is a valuation output that ties actual performance back to the business case measures. In this scenario, the relevant measures are refund cycle time and return-status call reduction, each with a defined baseline and target within 90 days.
A sound evaluation compares:

- Actual average refund cycle time against the 8-day baseline and the 3-day target.
- Actual return-status call volume against the 30% reduction target.
- Whether each result was achieved within the stated 90-day window.

That comparison shows whether the solution delivered measurable business benefits. Evidence such as testing completion, training rates, or raw usage may indicate readiness, quality, or adoption, but it does not by itself prove that the expected business value was realized.
Realized value is best validated by comparing actual post-deployment business outcomes to the business case targets and original baselines.
Topic: Evaluation
During a pilot of a claims system, supervisors refuse sign-off because agents cannot route escalated claims for supervisor review. QA confirms the workflow behaves exactly as documented in the approved process model, all traced user stories passed UAT, and the traceability matrix shows no requirement or acceptance criterion for supervisor routing. Users say they assumed that step would be included because it exists today. What is the most likely underlying BA gap?
Best answer: C
What this tests: Evaluation
Explanation: This is a missing requirement problem. The strongest clue is that the solution behaves as approved, traced requirements passed UAT, and no requirement or acceptance criterion exists for the missing capability.
When stakeholders say the solution does not meet a business need, first check whether the missing capability was actually specified and traced. Here, QA verified the delivered workflow matches the approved process model, the traced stories passed UAT, and the traceability artifact shows no requirement for supervisor routing. That means the gap is between the business need and the requirements baseline, not between the requirements and the implemented solution.
A poor implementation would contradict an approved requirement. Misunderstood acceptance criteria would usually show ambiguity or disagreement about how an existing requirement should be verified. In this case, the capability is absent from the approved requirements set, so the main issue is an omitted requirement that now needs formal analysis and change control.
The solution matches approved requirements and tests, so the unmet need points to a missing requirement rather than a build or testing problem.
Topic: Evaluation
A bank is preparing to deploy a new loan-origination solution. The sponsor asks for stakeholder sign-off after a successful demo, but the compliance manager notes that some critical conditions may still be unresolved. Which evidence best shows that sign-off is premature?
Best answer: C
What this tests: Evaluation
Explanation: Sign-off should be based on evidence that the delivered solution meets agreed acceptance criteria and is operationally ready. If mandatory compliance criteria are still unvalidated and related transition controls are open, stakeholders do not yet have a sound basis to approve deployment.
The core concept is solution readiness for sign-off. In this scenario, the strongest evidence is the artifact that compares agreed acceptance criteria with actual validation results and shows whether critical operational conditions are closed. When mandatory compliance criteria lack passed test evidence and the supporting transition controls are still open, sign-off is premature because the solution has unresolved conditions that could block safe deployment or regulatory acceptance.
Useful readiness evidence typically confirms:

- Each agreed acceptance criterion, including mandatory compliance criteria, has passed validation evidence.
- Related transition controls and critical operational conditions are closed.
- The solution is operationally ready for safe deployment and regulatory acceptance.

Participation, progress, and training metrics may be useful context, but they do not prove that required conditions for approval have been satisfied.
Unpassed mandatory acceptance criteria tied to open transition controls show the solution is not yet ready for sign-off and deployment.
Topic: Evaluation
Three months after a new online claims portal is deployed, call-center volume drops 22%, exceeding the business case target of 15%. The sponsor wants to report full savings from the portal. The BA also learns that a severe storm reduced overall claim submissions and operations temporarily cut phone support hours during the same period. What should the BA do first?
Best answer: A
What this tests: Evaluation
Explanation: The apparent improvement may be inflated by events unrelated to the deployed solution. Before reporting realized value, the BA should trace the metric back to business case assumptions and dependencies, then assess whether external factors distorted the outcome.
When evaluating a deployed solution, realized value should be attributed to the solution only after checking whether external conditions affected the measured result. Here, lower claim volume from the storm and reduced phone support hours can both change call-center demand independently of the portal. The right response is to use traceability from the business case metric to its assumptions, dependencies, and related operational conditions, then perform impact analysis before finalizing the evaluation.
This protects baseline discipline by keeping the original value target intact while documenting why actual results may not be fully caused by the solution. Acceptance of the solution and document versioning are important, but they do not replace analysis of distorted benefit data. The key takeaway is that evaluation must distinguish true solution value from outside influences.
This is correct because the BA must use traceability and impact analysis to separate solution value from outside influences before claiming business-case benefits.
Topic: Evaluation
A business analyst is reviewing user acceptance test results for a new order-entry solution. The signed acceptance criteria for order submission, pricing, and approval routing all passed. However, sales managers say the solution is still unacceptable because they expected a quick-copy feature for repeat orders. The baselined requirements package and traceability records do not include that feature, and deployment is scheduled for next week. What should the business analyst recommend next?
Best answer: A
What this tests: Evaluation
Explanation: When test evidence passes but stakeholders still object, the BA should reconcile the objection against the approved requirements baseline and acceptance criteria. If the expectation was never specified, it is a gap between stakeholder expectation and documented scope, so the next step is impact analysis and controlled change evaluation before sign-off.
The core concept is to validate the solution against agreed acceptance criteria while managing any newly surfaced expectation through formal evaluation. Here, the test evidence shows the solution satisfies the baselined requirements, but stakeholders are expressing a business need that was not captured in the approved requirements package.
The best next step is to:

- Confirm the passed test results against the baselined acceptance criteria and traceability records.
- Document the quick-copy expectation as a potential new requirement rather than a defect.
- Perform impact analysis and evaluate the request through formal change control before sign-off.

This protects requirement quality and traceability while still taking stakeholder concerns seriously. Simply accepting the solution ignores adoption risk, but immediately changing the solution bypasses control and impact assessment.
This addresses the conflict by validating the solution against approved criteria while formally evaluating the unmet stakeholder expectation as a potential change.
Use the PMI-PBA Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PMI-PBA guide on PMExams.com, then return to PM Mastery for timed practice.