PMI-RMP: Risk Analysis

Try 10 focused PMI-RMP questions on Risk Analysis, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Field               Detail
Exam route          PMI-RMP
Topic area          Risk Analysis
Blueprint weight    23%
Page purpose        Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Risk Analysis for PMI-RMP. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

First attempt
  What to do: Answer without checking the explanation first.
  What to record: The fact, rule, calculation, or judgment point that controlled your answer.

Review
  What to do: Read the explanation even when you were correct.
  What to record: Why the best answer is stronger than the closest distractor.

Repair
  What to do: Repeat only missed or uncertain items after a short break.
  What to record: The pattern behind misses, not the answer letter.

Transfer
  What to do: Return to mixed practice once the topic feels stable.
  What to record: Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 23% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Risk Analysis

In a hybrid product launch, marketing, engineering, and procurement leads are re-ranking risks for the next release. In the last workshop, several risks received widely different “high” and “medium” ratings because each lead used a different idea of what schedule and business impact meant. Which action would best reduce bias in the next qualitative prioritization session?

  • A. Define shared probability-impact criteria and calibrate scorers before rating
  • B. Have the sponsor assign final priorities after the discussion
  • C. Prioritize risks by the seniority of the risk owner
  • D. Replace the workshop with EMV calculations for all risks

Best answer: A

What this tests: Risk Analysis

Explanation: Qualitative prioritization is strongest when stakeholders use the same visible scoring rules. Agreed probability-impact definitions and a brief calibration discussion reduce personal interpretation and make rankings more consistent across functions.

The core issue is inconsistent qualitative criteria. When stakeholders each apply their own meaning of terms like “high impact” or “medium probability,” the ranking becomes biased by personal perspective rather than shared project standards. The best response is to use transparent, agreed probability-impact definitions and calibrate the group before scoring. That can include anchored descriptions or examples for schedule, cost, scope, or business effect so everyone evaluates risks the same way. This keeps the analysis qualitative while making it more repeatable and defensible. Escalating priority decisions to a sponsor or changing methods to EMV does not address the immediate problem of inconsistent scoring rules. Transparent criteria are the primary control for reducing bias in qualitative prioritization.

Shared scoring anchors and calibration make qualitative ratings consistent, transparent, and less dependent on personal judgment.


Question 2

Topic: Risk Analysis

On a hybrid healthcare rollout, the organization has low appetite for compliance risk. During qualitative analysis, the privacy lead rates a vendor API-change risk as high/high because she assumes repeat regulator testing will be required; the product owner rates it low/medium because he assumes the vendor will deliver a patch next sprint. Historical data on this vendor is limited. What is the BEST action for the risk manager?

  • A. Have each stakeholder explain assumptions and evidence, then recalibrate together.
  • B. Record the higher rating to reflect the organization’s low risk appetite.
  • C. Move the risk directly to quantitative analysis to remove bias.
  • D. Average the ratings and log the risk as medium.

Best answer: A

What this tests: Risk Analysis

Explanation: The disagreement comes from different assumptions, not just different opinions. The risk manager should coach stakeholders to state the basis for their ratings, compare that basis with available evidence, and then apply the agreed qualitative criteria consistently.

Qualitative risk analysis depends on calibrated judgment. In this scenario, the key problem is that the stakeholders are rating the same risk from different untested assumptions about regulator retesting and vendor patch timing, while historical data is limited. The best action is to make those assumptions explicit, examine any supporting evidence, and then reapply the agreed probability-impact criteria.

  • Surface the assumption behind each rating.
  • Check what evidence supports each assumption.
  • Recalibrate the rating using the common scale.

Choosing the highest score, averaging scores, or jumping to quantitative analysis would hide the source of disagreement instead of improving the quality of the risk rating.

Conflicting ratings are driven by different assumptions, so surfacing and testing those assumptions is the right first step in calibration.


Question 3

Topic: Risk Analysis

A hybrid product launch has completed risk identification. The risk management plan already defines probability and impact scales, urgency ratings, and thresholds for ranking threats. During the analysis meeting, several senior team members want to prioritize risks by personal judgment based on past projects. The risks are recorded, but no priorities have been set. What should the project manager do next?

  • A. Start mitigation for the risk executives fear most.
  • B. Score each risk with the approved probability-impact-urgency matrix.
  • C. Escalate the most severe threat as a current issue.
  • D. Let senior specialists rank risks by experience.

Best answer: B

What this tests: Risk Analysis

Explanation: Once risks are identified and the project already has approved criteria, the next step is qualitative analysis using those criteria. Risks should be ranked by the agreed probability, impact, urgency, and threshold rules, not by personal preference or status.

This is a qualitative risk analysis decision. Because the risk management plan already defines the scoring scales, urgency ratings, and thresholds, the project manager should now apply those pre-established criteria to the identified risks and record the rankings in the risk register. That creates a consistent, defensible basis for deciding which risks need the most attention and which ones may need response planning first.

  • Use the agreed scales and matrix.
  • Compare results to the defined thresholds.
  • Rank the risks based on that analysis.

Moving straight to mitigation skips the analysis step, and treating an uncertain threat as an issue is premature.

It uses the documented criteria and thresholds to produce an objective ranking before response decisions are made.
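The scoring step described above can be sketched in a few lines. Everything here is illustrative: the risk names, the 1-5 scales, and the threshold of 12 are assumptions for demonstration, not values from any PMI-RMP source.

```python
# Illustrative sketch: apply pre-approved probability-impact-urgency
# criteria to identified risks, then rank them consistently.
# All names, scales, and thresholds below are invented for this example.

RESPONSE_THRESHOLD = 12  # plan-defined cut-off: P x I at or above this needs response planning

risks = [
    {"name": "Vendor slips API drop", "probability": 4, "impact": 4, "urgency": "high"},
    {"name": "Test lab contention",   "probability": 3, "impact": 2, "urgency": "medium"},
    {"name": "Late training content", "probability": 2, "impact": 3, "urgency": "low"},
]

for risk in risks:
    risk["score"] = risk["probability"] * risk["impact"]
    risk["needs_response_planning"] = risk["score"] >= RESPONSE_THRESHOLD

# Rank by matrix score first, then by urgency, as approved criteria would direct.
urgency_order = {"high": 0, "medium": 1, "low": 2}
ranked = sorted(risks, key=lambda r: (-r["score"], urgency_order[r["urgency"]]))

for r in ranked:
    print(r["name"], r["score"], r["needs_response_planning"])
```

The point is not the arithmetic but that the scales and cut-off come from the risk management plan, so any team member applying them would produce the same ranking.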


Question 4

Topic: Risk Analysis

In a hybrid ERP rollout, the team is debating whether late vendor API configuration, failed interface tests, repeated data fixes, and compressed user training are separate threats or one cascading threat chain. Schedule risk exposure has increased for three reviews, the sponsor requires analysis of any path that could delay go-live by more than 10 business days, and historical data for this new API is limited. What is the best action?

  • A. Create a tree diagram of cause-and-impact branches, then assess the highest-delay path.
  • B. Re-score each threat separately in the probability-impact matrix.
  • C. Run a Monte Carlo simulation using the current schedule estimates.
  • D. Escalate the API threat cluster for immediate response planning.

Best answer: A

What this tests: Risk Analysis

Explanation: Tree diagrams are used when the main challenge is understanding how causes and consequences connect. Here, the team needs to see whether the API-related threats form a cascading path that could breach the sponsor’s 10-day threshold before choosing responses or deeper quantification.

This scenario is about analyzing complexity among related threats, not just ranking isolated risks. A tree diagram helps structure the relationship from initiating cause to intermediate events to final project impact. That is the best fit when the team is unsure whether several logged threats are independent or part of one chain, and when limited historical data makes premature quantitative modeling less reliable.

  • Start with the initiating uncertainty.
  • Branch to possible intermediate events.
  • Continue to downstream schedule impacts.
  • Identify the path most likely to exceed the threshold.

Once that structure is clear, the team can decide whether to quantify further, escalate, or plan responses. Simple scoring or immediate escalation would skip the key analysis step.

A tree diagram is best when related uncertainties may share causes and create cascading impacts that must be analyzed before response decisions.
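The branch-and-sum logic of that tree analysis can be sketched as a short script. The node structure and per-event delay estimates below are invented for illustration; only the 10-business-day threshold comes from the scenario.

```python
# Hypothetical sketch of the tree-diagram analysis: model the suspected
# threat chain as branches from an initiating cause, then find the path
# whose cumulative delay could breach the sponsor's 10-day threshold.
# Node names and delay figures are illustrative assumptions.

THRESHOLD_DAYS = 10

# Each node maps to (delay contributed in business days, children).
tree = {
    "Late vendor API configuration": (4, ["Failed interface tests"]),
    "Failed interface tests":        (3, ["Repeated data fixes", "Hotfix and retest"]),
    "Repeated data fixes":           (3, ["Compressed user training"]),
    "Compressed user training":      (2, []),
    "Hotfix and retest":             (1, []),
}

def paths_from(node, prefix_delay=0, prefix=()):
    """Yield (path, total_delay) for every root-to-leaf branch."""
    delay, children = tree[node]
    total = prefix_delay + delay
    path = prefix + (node,)
    if not children:
        yield path, total
    for child in children:
        yield from paths_from(child, total, path)

worst_path, worst_delay = max(paths_from("Late vendor API configuration"),
                              key=lambda pd: pd[1])
print(worst_path, worst_delay, worst_delay > THRESHOLD_DAYS)
```

Here the full four-event chain sums to 12 days and breaches the threshold, while the shorter hotfix branch does not, which is exactly the distinction the tree diagram is meant to expose before response planning.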


Question 5

Topic: Risk Analysis

A predictive facility-upgrade project has completed risk identification and qualitative analysis. Several threats could delay specific activities, and the schedule logic is current, but the team has not checked whether those activities affect the planned finish date. What should the project manager do next?

  • A. Escalate the most likely delay threat as an active issue.
  • B. Analyze the schedule for critical and near-critical path impacts.
  • C. Close risks on activities with float because they are noncritical.
  • D. Assign response owners and begin mitigation for each delay risk.

Best answer: B

What this tests: Risk Analysis

Explanation: The next step is to use schedule analysis to determine whether the identified threats can actually move the project finish date. Critical and near-critical path information helps the team focus on the delay risks that matter most before choosing responses.

Critical path information becomes relevant when the team is analyzing schedule risk and needs to determine whether uncertainty in specific activities can affect the project completion date. After risks are identified and qualitatively assessed, the next step is to examine the current schedule logic, float, and near-critical paths to see which threats truly drive finish-date exposure. This is part of sensitivity and modeling-oriented schedule risk analysis: not every delayed task threatens the overall end date. Only after understanding that relationship should the team prioritize responses or escalate anything that has actually occurred. A risk tied to a noncritical activity may still need monitoring because float can be consumed and the critical path can shift.

Critical path information is relevant now because schedule risk analysis must show which uncertain delays can affect the project finish date before responses are chosen.
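As a rough sketch of what that schedule check involves, the toy network below runs a forward and backward pass to compute each activity's total float; activities with zero float sit on the critical path, so delay threats to them can move the finish date. Activity names and durations are invented for this example.

```python
# Illustrative sketch (not from any official source): a minimal
# forward/backward pass over a toy activity network to see which
# activities have zero total float, i.e. which delay threats can
# actually move the project finish date.

activities = {          # name: (duration, predecessors); listed in topological order
    "Demolition": (3, []),
    "Electrical": (5, ["Demolition"]),
    "Plumbing":   (2, ["Demolition"]),
    "Finishing":  (4, ["Electrical", "Plumbing"]),
}

# Forward pass: earliest start and finish.
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_finish = max(ef.values())

# Backward pass: latest finish and start, then total float.
lf, ls = {}, {}
for name in reversed(list(activities)):
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_finish)
    ls[name] = lf[name] - activities[name][0]

total_float = {name: ls[name] - es[name] for name in activities}
print(project_finish, total_float)
```

In this toy case Plumbing carries three days of float, so a small delay there does not threaten go-live, while the same delay on Electrical would. That is the relationship the risk team needs before prioritizing responses, remembering that float can be consumed and the critical path can shift.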


Question 6

Topic: Risk Analysis

A project team rates each identified risk as Very Low, Low, Moderate, High, or Very High for probability and impact, then uses a matrix to decide which risks need attention first. The team does not treat the ratings as precise forecasts. What is this approach called?

  • A. Risk monitoring using triggers and contingency plans
  • B. Risk response planning based on stakeholder risk appetite
  • C. Quantitative risk analysis using expected monetary value
  • D. Qualitative risk analysis using an ordinal probability-impact matrix

Best answer: D

What this tests: Risk Analysis

Explanation: The description matches qualitative risk analysis with ordinal categories such as Very Low through Very High. In PMI-RMP practice, those categories help compare and prioritize risks, but they do not imply exact numeric precision.

The core concept is qualitative risk analysis using an ordinal probability-impact matrix. Ordinal categories show relative order—such as low versus high—so the team can sort and prioritize risks consistently. That supports decisions like which risks need deeper analysis, immediate response planning, or closer monitoring.

What ordinal classification does well:

  • ranks risks relative to one another
  • supports prioritization and escalation decisions
  • avoids claiming exact cost or schedule effects

What it does not do is provide precise forecasts such as expected monetary value or simulation outputs. The closest distractor is quantitative analysis, but that would require numeric modeling rather than category-based ranking.

This uses ordered categories to rank risks for prioritization, not to produce precise numeric predictions.


Question 7

Topic: Risk Analysis

On an infrastructure project, the permitting agency has announced a draft stormwater rule that may take effect next quarter. If approved, the project may need to rework its drainage design and delay construction start. Before deciding whether to escalate this threat, the risk manager wants to group it in the risk register using the agreed nominal categories. Which category best fits this risk?

  • A. Technical
  • B. Project management
  • C. Organization
  • D. Environment

Best answer: D

What this tests: Risk Analysis

Explanation: This threat belongs in the environment category because its source is an external permitting authority. Nominal classification groups risks by where they originate, so the likely redesign effect does not make this a technical or project-management risk.

The key to nominal classification is identifying the source of the uncertainty, not just the impact. In this scenario, the uncertainty comes from a possible change in an external regulatory rule issued by the permitting agency. That makes it an environmental risk. The fact that the consequence could be drainage redesign or schedule delay does not change the category.

Organization risks come from internal governance, staffing, funding, or culture. Project management risks come from planning, estimating, coordination, communication, or control practices. Technical risks come from the solution itself, such as design feasibility, interfaces, performance, or quality. Here, the design rework is the effect; the external rule change is the source.

A good classification helps the team spot patterns, assign suitable owners, and choose better response paths.

An external regulatory change is classified as an environmental risk because the uncertainty originates outside the project and performing organization.


Question 8

Topic: Risk Analysis

A hybrid ERP rollout team rates risks on a 1-5 probability-impact matrix and tags urgency as low, medium, or high. Historical data are sparse, and the sponsor has a low appetite for schedule slippage: any risk that could delay go-live by more than two weeks exceeds the threshold. A team member proposes multiplying the ordinal ratings to produce an exact 1-125 ranking for the full risk register. What is the best action?

  • A. Multiply the ratings and rank every risk numerically from highest to lowest.
  • B. Group risks into priority bands and use urgency and threshold breaches to drive follow-up.
  • C. Re-score risks until each one has a unique composite value.
  • D. Convert each 1-5 rating into percentages and delay days for clearer ranking.

Best answer: B

What this tests: Risk Analysis

Explanation: The 1-5 matrix ratings are ordinal, so they are useful for relative prioritization but not for precise numerical ranking. With sparse data and a clear schedule threshold, the better approach is to group risks into action bands and use urgency and threshold breaches to decide which risks need immediate attention or deeper analysis.

This situation is about using qualitative risk analysis correctly. Ordinal classifications such as 1-5 probability and impact scales show order, but they do not justify treating score gaps as exact, measurable differences. When historical data are limited, an exact 1-125 ranking can overstate confidence and make small score differences look meaningful when they are not.

  • Use the probability-impact matrix to place risks into priority bands.
  • Use urgency to distinguish which same-band risks need faster action.
  • Escalate any risk that breaches the sponsor’s stated schedule threshold.
  • Move only the most significant risks to deeper analysis if better data become available.

A precise-looking rank list may seem objective, but it is weaker than a defensible ordinal prioritization tied to thresholds and urgency.

Ordinal ratings support relative prioritization, so the team should use matrix bands plus urgency and stated thresholds instead of claiming exact score precision.
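The banding approach described above can be sketched briefly. The band cut-offs, risk names, and delay estimates are illustrative assumptions; only the two-week threshold comes from the scenario.

```python
# Hypothetical sketch: place ordinal 1-5 ratings into coarse priority
# bands instead of ranking by exact products, then let urgency and the
# sponsor's two-week threshold drive follow-up. Band boundaries and
# risk data below are invented for this example.

def band(probability, impact):
    """Map ordinal ratings to a coarse priority band, not a precise score."""
    score = probability * impact          # used only to pick a band
    if score >= 15:
        return "red"
    if score >= 6:
        return "amber"
    return "green"

risks = [
    {"name": "Data migration rework", "p": 4, "i": 4, "urgency": "high",   "delay_weeks": 3},
    {"name": "Report layout defects", "p": 3, "i": 2, "urgency": "low",    "delay_weeks": 0},
    {"name": "Trainer availability",  "p": 2, "i": 3, "urgency": "medium", "delay_weeks": 1},
]

for r in risks:
    r["band"] = band(r["p"], r["i"])
    # Any risk that could slip go-live past two weeks breaches the
    # sponsor's threshold and is escalated regardless of band.
    r["escalate"] = r["delay_weeks"] > 2

for r in risks:
    print(r["name"], r["band"], r["urgency"], r["escalate"])
```

Note that the two amber risks are deliberately left tied: within a band, urgency decides which gets attention first, rather than a spurious one-point score difference.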


Question 9

Topic: Risk Analysis

On a hybrid product launch project, several threats looked tied on the basic probability-impact matrix. The sponsor therefore set objective weights of schedule 50%, compliance 30%, and cost 20%. Using weighted priority = probability * (0.5 * schedule + 0.3 * compliance + 0.2 * cost), with impacts rated 1-5, which risk should be ranked highest?

Risk                         Probability  Schedule  Compliance  Cost
API vendor delivery delay    40%          5         1           2
Data retention rule error    25%          2         5           2
Test environment instability 50%          3         1           3
Specialist availability gap  30%          4         2           4
  • A. Specialist availability gap
  • B. Data retention rule error
  • C. API vendor delivery delay
  • D. Test environment instability

Best answer: C

What this tests: Risk Analysis

Explanation: Weighted priority is used when a simple probability-impact score does not reflect stakeholder emphasis across different objectives. Applying the stated schedule, compliance, and cost weights shows that the API vendor delay has the highest combined exposure and should be ranked first.

Weighted priority helps when a basic matrix does not separate risks well and some project objectives matter more than others. Here, schedule has the greatest weight, so a risk with strong schedule impact can outrank a risk with higher probability or stronger compliance impact alone.

\[
\begin{aligned}
\text{API vendor delay} &= 0.40\,(0.5\times5 + 0.3\times1 + 0.2\times2) = 1.28\\
\text{Rule error} &= 0.25\,(0.5\times2 + 0.3\times5 + 0.2\times2) = 0.73\\
\text{Test instability} &= 0.50\,(0.5\times3 + 0.3\times1 + 0.2\times3) = 1.20\\
\text{Specialist gap} &= 0.30\,(0.5\times4 + 0.3\times2 + 0.2\times4) = 1.02
\end{aligned}
\]

The highest weighted score gets the top rank. This is more useful than an unweighted matrix because it reflects the sponsor’s relative priorities across objectives.

Its weighted score is highest after applying the stated schedule, compliance, and cost weights and then multiplying by probability.
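The same arithmetic can be reproduced in a short sketch. The weights and ratings below are taken directly from the question; the code itself is only an illustration of the weighted-priority calculation.

```python
# Reproduce the weighted-priority calculation from the question:
# priority = probability * (0.5*schedule + 0.3*compliance + 0.2*cost).

weights = {"schedule": 0.5, "compliance": 0.3, "cost": 0.2}

risks = {
    "API vendor delivery delay":    {"probability": 0.40, "schedule": 5, "compliance": 1, "cost": 2},
    "Data retention rule error":    {"probability": 0.25, "schedule": 2, "compliance": 5, "cost": 2},
    "Test environment instability": {"probability": 0.50, "schedule": 3, "compliance": 1, "cost": 3},
    "Specialist availability gap":  {"probability": 0.30, "schedule": 4, "compliance": 2, "cost": 4},
}

def weighted_priority(risk):
    # Weighted impact across objectives, scaled by probability.
    impact = sum(weights[obj] * risk[obj] for obj in weights)
    return risk["probability"] * impact

scores = {name: weighted_priority(risk) for name, risk in risks.items()}
top = max(scores, key=scores.get)
print(top)  # API vendor delivery delay
```

Making the weights explicit in code also shows why the highest-probability risk (test environment instability, 50%) does not win: the sponsor's schedule weighting lets the API delay's strong schedule impact dominate.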


Question 10

Topic: Risk Analysis

A project’s risk management plan states that any risk in the red zone of the probability-impact matrix must be escalated, and any risk with major scoring disagreement requires further analysis. A supplier-delay risk is rated in the red zone, and reviewers disagree widely on its schedule impact. Which action best matches how this matrix result should be used?

  • A. Implement a response without further review
  • B. Reduce the rating until consensus is reached
  • C. Record the score and revisit later
  • D. Trigger escalation and additional analysis

Best answer: D

What this tests: Risk Analysis

Explanation: A matrix result is not just a ranking tool; it can also trigger defined next steps. Here, the risk crosses the stated escalation threshold and the disagreement in impact estimates signals that the current rating needs deeper analysis.

In PMI-RMP practice, a probability-impact matrix supports qualitative prioritization, but its output must be interpreted against the risk management plan’s thresholds and rules. In this case, two separate triggers are present: the risk is in the red zone, which requires escalation, and the experts disagree significantly on impact, which means the rating is uncertain and should be refined through additional analysis. The right use of the matrix result is therefore not to wait, downgrade, or jump straight to implementation, but to escalate and improve the analysis basis. A matrix score guides decision making; it does not replace threshold-based governance or the need to resolve material uncertainty.

The rating exceeds the escalation threshold, and the uncertainty in scoring also justifies deeper analysis.

Continue with full practice

Use the PMI-RMP Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PMI-RMP guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026