Browse Certification Practice Tests by Exam Family

PMI-RMP Practice Test

Practice PMI-RMP with free sample questions, timed mock exams, and detailed explanations in PM Mastery.

PMI-RMP is PMI’s risk-management certification for practitioners who turn uncertainty into clearer decisions through thresholds, analysis, response planning, and disciplined monitoring. If you are searching for PMI-RMP sample questions, a practice test, mock exam, or simulator, this is the main PM Mastery page: start on the web, then continue on iPhone or Android with the same PM Mastery account.

Use this page when risk management is a primary professional responsibility rather than one knowledge area inside a broader PM credential. PMI-RMP is the stronger fit when you need deeper practice with risk appetite, thresholds, identification, qualitative and quantitative analysis, responses, triggers, residual risk, secondary risk, reporting, and closure.

Interactive Practice Center

Start a practice session for PMI Risk Management Professional (PMI-RMP) below, or open the full app in a new tab for the best experience. Navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.

Use it on iPhone or Android too: get PM Mastery on the App Store or Google Play and sign in with the account you use on web. The same PM Mastery subscription works across web and mobile.

Free diagnostic: Try the 115-question PMI-RMP full-length practice exam before subscribing. Use it as one risk-decision baseline, then return to PM Mastery for timed mocks, topic drills, explanations, and the full PMI-RMP question bank.

What this PMI-RMP practice page gives you

  • A direct route into PM Mastery practice for PMI-RMP.
  • Topic drills and mixed sets across risk strategy, identification, analysis, response, monitoring, and closure.
  • Detailed explanations that show why the strongest risk answer is more decision-ready, better governed, or better aligned to risk thresholds.
  • 24 on-page sample questions plus access to the full PMI-RMP practice bank in PM Mastery.
  • A clear free-preview path before you subscribe.
  • The same PM Mastery account across web and mobile.

PMI-RMP exam snapshot

For the latest official exam details and requirements, see: https://www.pmi.org/certifications/risk-management-rmp

The snapshot below summarizes PMI’s current PMI-RMP certification page and the PMI-RMP Exam Content Outline and Specifications, updated May 2022. Check PMI directly before booking because exam details can change.

Official source check: Last checked May 5, 2026 against PMI's public PMI-RMP certification page.

PMI's public page lists 115 questions, 150 minutes, English/Arabic/Chinese (Simplified) language availability, and the five weighted risk domains used below. Confirm current appointment rules and eligibility directly with PMI before booking.

  • Vendor: PMI
  • Official exam name: PMI Risk Management Professional (PMI-RMP)
  • Exam code: PMI-RMP
  • Items: 115 total
  • Scored vs pretest: 100 scored plus 15 unscored pretest items
  • Exam time: 150 minutes
  • Question types: multiple-choice and multiple-select
  • Language availability shown by PMI at review time: English, Arabic, and Chinese (Simplified)

PMI-RMP questions usually reward the option that makes risk work more decision-ready by clarifying thresholds, strengthening analysis quality, and assigning actionable responses rather than documenting uncertainty passively.

Topic coverage for PMI-RMP practice

| Domain | Weight | Target scored items (out of 100) |
| --- | --- | --- |
| Risk Strategy and Planning | 22% | 22 |
| Risk Identification | 23% | 23 |
| Risk Analysis | 23% | 23 |
| Risk Response | 13% | 13 |
| Monitor and Close Risks | 19% | 19 |

Strategy, identification, and analysis together drive most of the exam. The simulator reflects that by giving you repeated practice with thresholds, assumptions, stakeholder risk attitudes, risk data quality, exposure trends, and the next best risk-management action.
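The “target scored items” column above is direct percentage arithmetic against the 100 scored items. A minimal sketch for sanity-checking it (the dictionary and function names are ours; the weights come from the table):

```python
# Map PMI-RMP domain weights to target scored items out of 100.
# Weights are from PMI's May 2022 Exam Content Outline as summarized above.
DOMAIN_WEIGHTS = {
    "Risk Strategy and Planning": 0.22,
    "Risk Identification": 0.23,
    "Risk Analysis": 0.23,
    "Risk Response": 0.13,
    "Monitor and Close Risks": 0.19,
}

SCORED_ITEMS = 100  # 115 total items minus 15 unscored pretest items


def target_items(weights: dict, scored: int = SCORED_ITEMS) -> dict:
    """Convert each domain weight into an expected scored-item count."""
    return {domain: round(weight * scored) for domain, weight in weights.items()}


targets = target_items(DOMAIN_WEIGHTS)
assert sum(targets.values()) == SCORED_ITEMS  # the five weights cover the exam
```

Treat the counts as planning targets only; PMI publishes weights, not per-form item counts.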

Risk decision filters for PMI-RMP scenarios

Use these filters when two answers both sound reasonable. PMI-RMP questions often reward the answer that turns uncertainty into a governed decision, not the answer that simply names a risk artifact.

| Scenario signal | First check | Strong answer usually… | Weak answer usually… |
| --- | --- | --- | --- |
| Stakeholders disagree about exposure | Which threshold, appetite, or tolerance applies? | Reconnects the discussion to approved risk criteria and documents the decision basis. | Treats the loudest stakeholder as the decision rule. |
| A new threat or opportunity appears | Is the risk statement clear enough to analyze? | Clarifies cause, event, effect, owner, assumptions, and timing before choosing a response. | Jumps straight to mitigation without defining the risk. |
| Data quality is weak | Can the team trust the analysis enough to act? | Improves assumptions, ranges, evidence, expert judgment, or sensitivity analysis before overcommitting. | Presents a precise number that hides uncertainty. |
| A response creates new exposure | What residual or secondary risk remains? | Updates ownership, triggers, contingency, reserves, and reporting after the response choice. | Assumes the response closes the risk permanently. |
| A risk is materializing | Is this still a risk, or has it become an issue? | Uses triggers, contingency plans, escalation paths, and issue management appropriately. | Keeps monitoring after action is already required. |
| Reporting pressure increases | What does the decision maker need next? | Communicates trend, exposure, response status, threshold breach, and recommended action. | Sends a long register extract without decision context. |

How PMI-RMP differs from similar routes

| If you are deciding between… | Main distinction |
| --- | --- |
| PMI-RMP vs PMP | PMI-RMP is specialist risk depth; PMP is broad project leadership. |
| PMI-RMP vs PMI-SP | PMI-RMP is risk focused; PMI-SP is schedule focused. |
| PMI-RMP vs PMI-PMOCP | PMI-RMP is uncertainty and response judgment; PMI-PMOCP is PMO governance and operating-model depth. |
| PMI-RMP vs AACE PRMP or DRMP | PMI-RMP is PMI’s risk route; AACE PRMP is AACE’s project-risk route, and AACE DRMP adds decision analysis, cost risk, schedule risk, and uncertainty depth. |

Which route should you use?

| If your target is closest to… | Best page | Why |
| --- | --- | --- |
| PMI specialist project risk management | PMI-RMP | Best fit when your exam target is PMI’s risk credential. |
| General PMI project leadership | PMP | Better fit when risk is one part of a broader PM leadership exam. |
| Specialist scheduling and schedule control | PMI-SP | Better fit when schedule model quality, progress, and schedule control are the main target. |
| PMO governance and operating model | PMI-PMOCP | Better fit when your work centers on PMO design, oversight, governance, and service delivery. |
| Project-controls risk or decision analysis | AACE PRMP or AACE DRMP | Better fit when your market expects AACE project-controls risk credentials. |

PMI-RMP readiness map

Use this map after each missed question. The fastest improvement usually comes from identifying the risk-management decision pattern behind the miss.

| Domain | What the exam is really testing | What PM Mastery practice should force you to decide | Common wrong-answer trap |
| --- | --- | --- | --- |
| Risk Strategy and Planning | Whether the risk approach is aligned with objectives, appetite, governance, and stakeholder expectations | Which thresholds, roles, methods, reserves, reporting rules, and escalation paths make the risk process usable | Writing a generic risk plan that does not guide decisions |
| Risk Identification | Whether the team finds meaningful threats and opportunities early enough to act | How to form clear risk statements, use the right identification technique, and include the right stakeholders | Listing symptoms, issues, or vague concerns as if they were actionable risks |
| Risk Analysis | Whether exposure is assessed with enough quality to support decisions | When to use qualitative ranking, quantitative analysis, sensitivity, scenarios, expected value, or expert judgment | Treating a precise-looking number as reliable when assumptions are weak |
| Risk Response | Whether the chosen response changes exposure in a useful and governed way | Who owns the response, what trigger activates it, what residual risk remains, and what secondary risk appears | Choosing a response label without checking cost, timing, ownership, or side effects |
| Monitor and Close Risks | Whether risk work stays current as the project changes | When to update, escalate, close, transfer to issue management, or communicate trend changes | Leaving stale risks open or closing risks before response evidence exists |

How to use the PMI-RMP simulator efficiently

  1. Start with risk strategy and planning so appetite, thresholds, and escalation rules become usable decision filters.
  2. Practice identification and analysis together, because many misses come from weak risk statements, poor assumptions, or the wrong analysis method.
  3. Review response questions until you can explain residual risk, secondary risk, trigger timing, ownership, and escalation.
  4. Finish with timed mixed sets so you can make risk decisions quickly without treating the exam like a vocabulary list.

Final 7-day PMI-RMP practice sequence

Use this once you already understand the five domains and need to convert practice into exam readiness.

| Window | What to do | What not to do |
| --- | --- | --- |
| Days 7-5 | Take one mixed timed set or the free full-length diagnostic, then classify misses by strategy, identification, analysis, response, monitoring, or timing. | Do not count a reviewed explanation as fixed until you can name the decision rule you missed. |
| Days 4-3 | Drill the weakest domain, especially identification-analysis handoffs and response ownership if your misses are scenario-based. | Do not chase random volume if the same risk-decision pattern keeps failing. |
| Days 2-1 | Take a shorter mixed timed set and review only recurring traps: vague risk statements, weak assumptions, stale registers, threshold breaches, and residual/secondary risk. | Do not start a new large question run so late that fatigue becomes the main result. |
| Exam day | Read for the required action, identify whether the prompt is asking about risk, issue, response, escalation, or communication, then eliminate answers that bypass governance. | Do not pick an answer just because it uses a familiar artifact name. |

When PMI-RMP practice is enough

The goal is not to memorize every possible risk-management wording. The goal is to build transferable judgment so you can handle new uncertainty scenarios under time pressure.

If you can complete several varied timed attempts at 75% or higher, explain why the best answers are stronger than the tempting alternatives, and consistently identify residual risk, secondary risk, triggers, owners, and escalation rules, it is usually time to schedule or sit the real exam rather than repeating recognized questions indefinitely. More practice still helps when it targets a weak domain, but repeating questions you already remember can inflate confidence without improving risk judgment.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the PMI-RMP question style and explanation depth.
  • Premium: the full PMI-RMP practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 PMI-RMP sample questions with detailed explanations

These sample questions cover the five PMI-RMP domains modeled in PM Mastery. Use them to check your readiness here, then continue in PM Mastery with mixed sets, topic drills, and timed mocks.

These are original PM Mastery practice questions. They are not PMI exam items, are not copied from any exam sponsor, and should be used to practice risk-decision patterns rather than memorize exact wording.

Question 1

Topic: Domain V: Monitor and Close Risks

A predictive data-center migration project has a rule in its risk management plan: if an approved risk response moves a baseline milestone by more than 10 days, the decision must be handled through change control. A Monte Carlo reforecast shows the main schedule risk now pushes the P80 completion date 14 days beyond the baseline. The steering committee approves adding a test cycle and moving the milestone. What should the project manager update next?

  • A. Update the project management plan and change log.
  • B. Update only the risk register and risk report.
  • C. Close the risk and add a lesson learned.
  • D. Update the issue log because the risk has triggered.

Best answer: A

Explanation: The Monte Carlo result exceeds the project’s explicit 10-day threshold: the P80 date is 14 days beyond the baseline, and the steering committee has approved a response that moves a baseline milestone. The decision now affects project control, so the project management plan must be updated and the approval recorded in the change log. In risk monitoring, analysis outputs such as a Monte Carlo forecast exist to show whether exposure has crossed a control threshold, and here it clearly has.

Once a risk decision changes how the project will be controlled, the outcome must be reflected in controlled project documents. That means updating the relevant part of the project management plan and recording the approved decision in the change log. Risk artifacts may also be refreshed, but they do not replace formal control-document updates.

The key distinction is that this is not just new risk information; it is an approved change affecting project control.
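The threshold check in this scenario is mechanical once a Monte Carlo forecast exists: read the P80 schedule slip off the simulated distribution and compare it to the 10-day change-control rule. A minimal illustration in Python; the triangular-distribution parameters are invented for the sketch and are not part of the question:

```python
import random
import statistics

random.seed(7)  # reproducible sketch

# Hypothetical Monte Carlo forecast of schedule slip in days past baseline.
# The triangular parameters below are illustrative assumptions only.
slips = [random.triangular(low=2, high=30, mode=12) for _ in range(10_000)]

# P80: 80% of simulated outcomes finish at or before this slip.
p80 = statistics.quantiles(slips, n=100)[79]

CHANGE_CONTROL_THRESHOLD = 10  # days, from the risk management plan

if p80 > CHANGE_CONTROL_THRESHOLD:
    print(f"P80 slip {p80:.1f} days breaches the {CHANGE_CONTROL_THRESHOLD}-day "
          "rule: route the approved response through change control.")
```

The exam point is the comparison on the last three lines, not the simulation itself: once the P80 forecast crosses the documented threshold, the decision leaves the risk register and enters project control.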


Question 2

Topic: Domain III: Risk Analysis

A project risk manager plots aggregate risk exposure scores from the last five monthly reviews—42, 48, 55, 61, and 66—to determine whether overall project exposure is increasing or decreasing. Which risk analysis method is being used?

  • A. Sensitivity analysis
  • B. Monte Carlo simulation
  • C. Trend analysis
  • D. Probability-impact matrix

Best answer: C

Explanation: Trend analysis examines risk metrics over multiple time periods to identify patterns such as increasing, decreasing, or stable exposure. Because the manager is comparing successive monthly exposure scores to see whether project risk is rising or falling, this is trend analysis. The stem does not ask the manager to rank a single risk (probability-impact matrix), model a range of possible outcomes (Monte Carlo simulation), or find the biggest driver of uncertainty (sensitivity analysis); the goal is simply the direction of overall project risk exposure.

  • Use consistent data points from multiple periods.
  • Look for direction and pattern over time.
  • Apply the result to forecasting, escalation, or response adjustment.

The key distinction is that trend analysis answers a time-based question: is exposure getting better or worse?
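The arithmetic behind this answer is simple enough to show directly: trend analysis here is just comparing successive exposure scores. A minimal sketch using the scores from the stem (the helper name is ours):

```python
# Monthly aggregate risk exposure scores from the question stem.
exposure = [42, 48, 55, 61, 66]


def trend(scores: list) -> str:
    """Classify the direction of change across successive reviews."""
    deltas = [later - earlier for earlier, later in zip(scores, scores[1:])]
    if all(d > 0 for d in deltas):
        return "increasing"
    if all(d < 0 for d in deltas):
        return "decreasing"
    return "mixed/stable"


print(trend(exposure))  # every period-over-period delta is positive
```

An “increasing” result would typically feed forecasting, escalation, or response adjustment, as the bullets above describe.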


Question 3

Topic: Domain IV: Risk Response

A project team selected a mitigation strategy for a vendor-capacity risk by prequalifying a backup supplier. One month later, the risk owner reviews actual lead-time data and compares the remaining schedule exposure with the project’s risk threshold to determine whether the strategy worked as intended. Which concept does this describe?

  • A. Assess response effectiveness
  • B. Communicate response results
  • C. Monitor risk triggers
  • D. Identify secondary risks

Best answer: A

Explanation: This is assessment of response effectiveness: the risk owner is checking evidence that the mitigation changed exposure as intended. In PMI-RMP terms, a response is not effective just because the action was performed; it must show that the threat’s probability, impact, or overall exposure was reduced to an acceptable level or improved as planned. Comparing actual lead-time data and the remaining schedule exposure against the stated threshold goes beyond action completion and is exactly how effectiveness should be judged.

  • Confirm the response was implemented.
  • Measure the actual effect on exposure.
  • Compare the result with the intended strategy and threshold.
  • Decide whether more action, adjustment, or escalation is needed.

Simply reporting status or watching for triggers does not by itself prove the response worked.


Question 4

Topic: Domain II: Risk Identification

A public-sector data migration project must go live by September 30 to meet a regulation. Organizational policy requires the team to use a shared performance test environment that is available only on Fridays. During constraint analysis, which statement best connects this constraint to the project objective affected by the resulting risk?

  • A. Friday-only test access is assumed to be adequate for the planned testing effort.
  • B. Friday-only test access has already delayed performance testing by two weeks.
  • C. Friday-only test access may delay performance testing and threaten the September 30 go-live date.
  • D. Friday-only test access should be offset by outsourcing additional testing.

Best answer: C

Explanation: Constraint analysis identifies how a project restriction can create uncertainty for one or more objectives, and the best choice converts the constraint into a risk statement tied to an objective. Friday-only test-environment access is the constraint, a possible testing delay is the uncertain consequence, and the fixed regulatory go-live date shows that schedule is the objective affected.

A strong risk statement links three things clearly:

  • the constraint
  • the uncertain consequence
  • the project objective affected

Statements framed as accepted assumptions do not identify risk exposure. Statements saying the delay has already occurred describe an issue, not a risk. Statements proposing outsourcing are response ideas, which come after the risk has been identified clearly.


Question 5

Topic: Domain I: Risk Strategy and Planning

A hybrid product-release project has an approved risk strategy: the organization has a high appetite for technical experimentation, but any forecast of more than 7% schedule slippage on a committed release must be escalated for response review. During planning, the product owner wants to keep an unstable interface because “our appetite supports innovation.” Current risk analysis forecasts 9% schedule slippage if the interface is retained. What should the risk manager emphasize to align stakeholders with the agreed strategy?

  • A. Wait for an interface-failure trigger before escalating.
  • B. Treat the possible schedule slip as a current issue.
  • C. Rely on innovation appetite instead of the threshold.
  • D. Use the 7% threshold to escalate and review responses.

Best answer: D

Explanation: The best choice anchors the discussion in the documented rule that governs action. Risk appetite is a broad statement about how much uncertainty the organization is generally willing to accept, while a risk threshold converts that attitude into a specific decision point. Stakeholders may support experimentation, but the forecasted 9% schedule slippage exceeds the approved 7% threshold, so the agreed strategy calls for escalation and response review.

Waiting for a trigger confuses a warning sign with a decision limit, and calling the uncertainty an issue is incorrect because the harmful event has not occurred. The right leadership move is to steer the group back to the preapproved threshold so decisions follow the strategy rather than personal preference.


Question 6

Topic: Domain V: Monitor and Close Risks

A hybrid ERP project used an offshore data-migration vendor to reduce a schedule threat. That response created a recorded secondary risk: handoff delays could keep critical defects open past the release window. The risk register names the vendor manager as owner, sets a trigger of more than four critical defects older than 48 hours, and the sponsor’s threshold is no more than a 3-day release slip. In the last two releases, the trigger was breached and the planned daily overlap meeting was not implemented consistently. What should the project manager do next?

  • A. Close the secondary risk because the original schedule threat is controlled.
  • B. Have the vendor manager implement the planned overlap response now and update risk artifacts.
  • C. Wait for the next monthly review to verify the trend.
  • D. Stop monitoring the secondary risk and manage only the current defects.

Best answer: B

Explanation: This is a monitoring problem, not a new planning exercise. The secondary risk was already identified and documented with a named owner, a trigger, and a planned response. Monitoring data now shows two important facts: the trigger has been breached in two releases, and the planned daily overlap meeting has not been implemented consistently. The project manager should therefore focus on response execution and current exposure, ensuring the owner carries out the planned action now and updating the risk register and risk reporting.

Waiting for another review delays action despite a clear trend, and closing the risk ignores active exposure. Managing only the open defects treats part of the situation as issues but misses the still-uncertain risk of release delay.


Question 7

Topic: Domain III: Risk Analysis

During qualitative risk analysis for a hospital billing-system upgrade, the team reviews this threat: the vendor may deliver the claims rules engine 3 weeks late. Project objectives are to go live by July 1, stay within budget, and meet a new billing regulation effective that same day. Which impact statement best reflects an objective-level impact?

  • A. Go-live could miss July 1 compliance and add testing cost.
  • B. Steering committee members may doubt the vendor’s reliability.
  • C. Operations managers would rather not move training dates.
  • D. End users may dislike waiting longer for the new workflow.

Best answer: A

Explanation: Objective-level impact statements describe how a risk could affect a project objective such as schedule, cost, scope, quality, or compliance, not a vague concern or stakeholder preference. The keyed statement links the late vendor delivery to the project’s required outcomes: the July 1 go-live, compliance with the new regulation, and budget performance. Doubts about the vendor, reluctance to move training dates, or user dissatisfaction may be relevant concerns, but they are not objective-level impacts unless translated into measurable effects on project objectives.


Question 8

Topic: Domain IV: Risk Response

On a hybrid CRM rollout, the risk register names the integration manager as risk owner and the vendor architect as response action owner for a threat that middleware defects could delay go-live. The steering committee has low appetite for schedule delay. Governance says risks with residual delay above 10 days remain open, and only above 20 days are escalated. The prototype mitigation was completed, but testing still shows a likely 15-day delay. What should the project manager do?

  • A. Close the risk since the mitigation task is complete.
  • B. Transfer risk ownership to the vendor architect who executed mitigation.
  • C. Escalate to the sponsor and combine risk and action ownership.
  • D. Keep the integration manager as risk owner and reassess residual risk.

Best answer: D

Explanation: Completing a response action does not close or transfer the risk itself. This scenario tests the difference between overall risk accountability and task accountability: the risk owner monitors the full risk, evaluates whether the response reduced exposure enough, and decides whether more response, continued monitoring, or escalation is needed, while the response action owner only executes a specific assigned action. Here, the prototype mitigation is complete, but testing still shows a likely 15-day delay: above the stated 10-day threshold, so the risk stays open, yet below the 20-day escalation trigger, so escalation is premature. The right move is to record the mitigation task as completed and have the integration manager, as risk owner, reassess residual exposure and update the response. Finishing a response task is not the same as owning or closing the risk.


Question 9

Topic: Domain II: Risk Identification

A hybrid project team is identifying risks for a product release. Which situation should be recorded as one risk that includes both threat and opportunity aspects?

  • A. The security review failed yesterday and blocked deployment.
  • B. Using the vendor’s new API may speed integration but also increase rework.
  • C. The supplier may offer a volume discount if demand increases.
  • D. Customs delays may postpone a critical hardware shipment.

Best answer: B

Explanation: A risk has both threat and opportunity aspects when the same uncertain condition can help one objective and hurt another. Adopting the vendor’s new API is still uncertain, so it is an individual risk, and it may improve schedule by speeding integration while also threatening schedule, cost, or quality through extra rework. By contrast, the failed security review is no longer uncertain, so it is an issue; the possible supplier discount is upside only, an opportunity; and the possible customs delay is downside only, a threat. Look for one uncertainty with impacts in both directions.


Question 10

Topic: Domain I: Risk Strategy and Planning

A hybrid project includes a predictive vendor-led network rollout and an agile team delivering customer features in 2-week sprints. The risk manager is updating the risk management plan and must set a review cadence and evidence sources that fit each delivery approach. Which tailoring is best?

  • A. Use one monthly risk review and the master schedule for all risks.
  • B. Review agile risks each sprint using backlog and dependency data, review predictive risks at milestones using baseline and vendor data, and summarize both in one risk report.
  • C. Review all risks only during sprint ceremonies using team judgment.
  • D. Wait to define cadence until quantitative models exist for every major risk.

Best answer: B

Explanation: Risk-planning cadence should match how uncertainty appears in each delivery approach rather than being forced into one generic review pattern. Agile teams learn and reprioritize quickly, so their risks should be reviewed at a short cadence such as each sprint, using evidence like backlog change, dependency status, and recent iteration results. Predictive work has baselines, milestones, and contractual checkpoints, so its risk reviews can align to those control points and use evidence such as schedule variance, milestone status, and vendor commitments. A hybrid project needs both cadences and both evidence types, then an integrated risk report to show overall exposure.

  • Agile cadence follows iteration learning.
  • Predictive cadence follows planned control points.
  • Hybrid planning combines both and rolls up exposure.

Using one cadence or waiting for advanced analysis would weaken planning quality.


Question 11

Topic: Domain V: Monitor and Close Risks

On a hybrid ERP rollout, integration testing is 12% behind baseline. The risk management plan says any testing delay greater than 10% is a trigger to reassess the risk of missing the release window. The sponsor asks for the weekly risk update. What is the best way to report this variance?

  • A. Report the 12% delay and defer risk interpretation.
  • B. Close the release risk and log only a schedule issue.
  • C. Explain the trigger breach, higher release risk, and response status.
  • D. Wait for a full Monte Carlo rerun before reporting.

Best answer: C

Explanation: Variance data alone is not enough in risk monitoring; it becomes useful only when connected to risk significance. Here, the 12% testing delay is not just a schedule metric: it breached a predefined 10% trigger in the risk management plan, so the related risk of missing the release window should be reassessed and communicated in terms of changed exposure and current response progress. A strong risk update turns performance data into decision-ready information for stakeholders.

  • Compare the variance against the stated trigger or threshold.
  • Identify which risk is affected.
  • Explain how exposure and response status changed.

Simply stating that testing is 12% behind describes performance, but it does not explain what that means for project risk.


Question 12

Topic: Domain III: Risk Analysis

During qualitative risk analysis for a hybrid CRM project, the team uses this excerpt from the risk management plan:

Qualitative classification rules
- Score each risk as Probability (1-5) × highest impact on one project objective (1-5)
- Priority bands: 1-5 Low, 6-14 Medium, 15-25 High
- If the trigger is expected within 30 days, urgency may raise one band
- Stakeholder risk appetite guides response discussions only

Which proposed classification is INCONSISTENT with the plan?

  • A. A supplier delay threat scored 16 and was classified High.
  • B. A cost overrun threat scored 6 and was classified High because the sponsor’s appetite is low.
  • C. A testing rework threat scored 6, with a trigger in 12 days, and was classified High.
  • D. A data quality threat scored 4 and was classified Low.

Best answer: B

Explanation: The inconsistent classification is the one that labels a score of 6 as High only because the sponsor has low risk appetite. The plan requires classification by multiplying probability and the highest single-objective impact, then mapping that score to Low, Medium, or High, with urgency (a trigger expected within 30 days) as the only stated reason to raise a risk one band. The plan explicitly reserves stakeholder risk appetite for later response discussions, so a score of 6 cannot be reclassified as High just because a sponsor dislikes cost overruns; that would replace the approved classification method with an ad hoc preference. Use the plan's thresholds consistently and do not substitute other risk concepts for classification rules.
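The excerpt's classification rules are mechanical enough to sketch in a few lines. The band cut-offs and the 30-day urgency bump come straight from the plan excerpt above; the function name and the probability-impact values for each option are illustrative.

```python
# Sketch of the plan's qualitative classification rules.
# Bands (1-5 Low, 6-14 Medium, 15-25 High) and the urgency bump are from the excerpt;
# everything else is a hypothetical illustration.

def classify(probability, max_impact, trigger_in_days=None):
    """Score a risk and map it to the plan's priority band."""
    score = probability * max_impact          # 1-25
    if score <= 5:
        band = "Low"
    elif score <= 14:
        band = "Medium"
    else:
        band = "High"
    # Urgency may raise the band one level if the trigger is within 30 days.
    if trigger_in_days is not None and trigger_in_days <= 30:
        band = {"Low": "Medium", "Medium": "High"}.get(band, band)
    return band

print(classify(4, 4))                       # 16 -> High (consistent with choice A)
print(classify(2, 3))                       # 6  -> Medium, so "High" in choice B is inconsistent
print(classify(2, 3, trigger_in_days=12))   # 6 plus urgency -> High (choice C)
print(classify(2, 2))                       # 4  -> Low (choice D)
```

Notice that sponsor appetite appears nowhere in the function: under the plan, it influences response discussions, not classification.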


Question 13

Topic: Domain IV: Risk Response

A hybrid product launch project has an identified risk that an external data feed may arrive late. If the file is more than 48 hours late, the agreed contingency is to load a reduced internal dataset. During the current sprint, the shared test environment has already failed, and the team can use a cloud sandbox as a workaround. On this project, risk owners monitor exposure, issue owners manage active issues, and action owners execute agreed response actions. Which responsibility assignment is most appropriate?

  • A. Assign the risk owner to the contingency and the project manager to the workaround.
  • B. Assign the sponsor to both actions because delivery dates are affected.
  • C. Assign the same risk owner to both because both are technical.
  • D. Assign an action owner to the contingency and the issue owner to the workaround.

Best answer: D

Explanation: Match the responsibility to both the type of event and the type of role. The late data feed is still uncertain, so the risk owner continues to monitor the exposure and trigger while an assigned action owner carries out the contingency if the trigger occurs. The failed test environment has already happened, so it is now an issue needing immediate management, and the issue owner executes or coordinates the workaround. Assigning both items to a sponsor, project manager, or a single risk owner blurs the distinction between monitoring risk and resolving an active issue. The key discriminator is risk versus issue, then execution responsibility versus oversight.


Question 14

Topic: Domain II: Risk Identification

A hybrid ERP rollout relies on several documented assumptions: the vendor API will remain stable, key users will be available for UAT, and infrastructure changes will be approved within 10 days. The team already has a schedule-cost model and wants to know which failed assumption would drive the biggest combined effect on project objectives. Which analysis approach is best?

  • A. RBS classification
  • B. Probability-impact assessment
  • C. Risk audit
  • D. Sensitivity analysis

Best answer: D

Explanation: Sensitivity analysis is best when several uncertain assumptions could affect multiple objectives and the team wants to know which one matters most. Assumption analysis asks whether key planning assumptions could be false and what risks would result; once those assumptions are identified, sensitivity analysis shows which uncertain input drives the most variation in objectives such as schedule and cost, rather than just categorizing or generally rating the risks.

In this scenario, the project already has a schedule-cost model, so the team can examine how changes in each assumption affect overall results. That directly reveals the cascade from an assumption failure to project-objective impacts and helps focus response planning on the most influential assumption. A simple probability-impact rating is useful for prioritization, but it does not isolate the dominant driver across modeled assumptions.
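A minimal one-at-a-time sensitivity sketch can make this concrete. The toy schedule-cost model and every number below are invented for illustration; nothing here comes from the scenario beyond the three assumption names.

```python
# Illustrative one-at-a-time sensitivity analysis over documented assumptions.
# For each assumption, compare the model outcome when it holds vs. fails,
# then rank assumptions by the size of the swing in combined impact.

def project_outcome(api_stable=True, users_available=True, infra_on_time=True):
    """Toy schedule-cost model: returns (delay_weeks, extra_cost_k).
    All impact figures are hypothetical."""
    delay, cost = 0, 0
    if not api_stable:
        delay += 6; cost += 80      # rework across integrations
    if not users_available:
        delay += 3; cost += 20      # UAT slips
    if not infra_on_time:
        delay += 2; cost += 10      # approval wait
    return delay, cost

baseline = project_outcome()
swings = {}
for name in ["api_stable", "users_available", "infra_on_time"]:
    failed = project_outcome(**{name: False})
    # Combined effect: weeks of delay plus cost in $k, equally weighted here.
    swings[name] = (failed[0] - baseline[0]) + (failed[1] - baseline[1])

# The assumption with the largest swing is the dominant driver.
print(max(swings, key=swings.get))   # -> api_stable
```

Ranking the swings is exactly what a tornado diagram visualizes: the widest bar is the assumption whose failure most changes the outcome.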


Question 15

Topic: Domain I: Risk Strategy and Planning

A hybrid data-platform rollout is defining its risk management plan. The governance board wants one monthly metric that shows whether overall project risk exposure is increasing or decreasing so it can decide when to add contingency funding or direct stronger responses. The risk register already records probability and impact scores for each risk. Which metric should the team define?

  • A. Monthly count of open risks
  • B. Count of risks by RBS category
  • C. Percentage of planned response actions completed
  • D. Trend of total weighted risk exposure over time

Best answer: D

Explanation: The best metric is the one that directly supports the decision being made. Leadership wants to know whether overall project risk exposure is trending up or down so it can adjust contingency and response intensity, which requires a measure of exposure, not volume, classification, or activity. Because the risk register already contains probability and impact scores, the team can aggregate those values each period and monitor the total over time, giving a project-level view of changing exposure. A simple count of risks ignores severity, RBS grouping shows source categories rather than exposure level, and response completion tracks effort rather than effectiveness. A trend in total weighted exposure is the most decision-useful metric here because it links risk data to action.
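One common way to build this metric is to sum probability × impact over open risks each period and watch the total. The register entries and scores below are illustrative, not from the scenario.

```python
# Illustrative monthly weighted-exposure metric built from register scores.
# Aggregation rule (sum of P x I over open risks) is one common convention.

def total_exposure(register):
    """Sum probability x impact over all open risks."""
    return sum(r["p"] * r["i"] for r in register if r["status"] == "open")

march = [
    {"id": "R1", "p": 4, "i": 5, "status": "open"},
    {"id": "R2", "p": 2, "i": 3, "status": "open"},
]
april = [
    {"id": "R1", "p": 3, "i": 5, "status": "open"},    # response reduced probability
    {"id": "R2", "p": 2, "i": 3, "status": "closed"},  # closed risks drop out
    {"id": "R3", "p": 2, "i": 2, "status": "open"},    # newly identified
]

trend = [total_exposure(march), total_exposure(april)]
print(trend)   # [26, 19] -> overall exposure is decreasing
```

A falling total can mask a new high-severity risk, so in practice the trend is usually reported alongside the top individual exposures.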


Question 16

Topic: Domain V: Monitor and Close Risks

A hybrid claims-system project is entering closeout. Overall project risk exposure has declined for three months, eight risks have passed their trigger dates, and two responses reduced exposure below the project’s threshold. One residual integration risk remains above threshold and will transfer to operations. The sponsor has low tolerance for unresolved exposure and wants a credible final risk narrative. What is the BEST action?

  • A. Validate closure evidence, update the risk register and lessons learned, and report the transferred residual risk.
  • B. Close every risk at project closeout and hand concerns to operations.
  • C. Repeat qualitative analysis for all risks before drafting the final summary.
  • D. Report only the remaining residual risk to keep the final narrative concise.

Best answer: A

Explanation: Risk closure should be evidence-based, not automatic at project end; a trustworthy final risk narrative distinguishes truly closed risks from remaining exposure. Here, several risks can be closed because their triggers expired or their responses reduced exposure below the agreed threshold, but the residual integration risk is still above threshold and the sponsor has low tolerance for unresolved exposure. The project should document closure evidence in the risk register, record response effectiveness and lessons learned for organizational learning, and explicitly report the transferred residual risk in the final risk report and handoff.

  • Close risks only when closure criteria are met.
  • Record response effectiveness and lessons learned.
  • Formally hand off and report above-threshold residual exposure.

The closest distractor mentions the residual risk, but it omits the closure evidence and learning needed for a credible final narrative.


Question 17

Topic: Domain III: Risk Analysis

In a hybrid product launch, a delay in supplier firmware could postpone system testing, compress end-user training, and force premium shipping for launch hardware. The risk team believes a simple probability-impact score understates this threat because the downstream effects may combine across schedule and cost. Which analysis approach best evaluates this cascading complexity?

  • A. Categorize the threat in the risk breakdown structure
  • B. Re-score it with the probability-impact matrix
  • C. Perform sensitivity analysis on firmware delivery timing
  • D. Run Monte Carlo on the integrated schedule-cost model

Best answer: D

Explanation: Cascading complexity means one uncertain event can propagate into additional impacts or secondary risks across different project objectives, not just one isolated consequence. Here, the firmware delay can affect testing, training, and shipping, so a single probability-impact rating would flatten the chain of consequences. Monte Carlo simulation is the best fit because it analyzes combined uncertainty across the integrated schedule-cost model and estimates the range of total project outcomes when those downstream effects interact and accumulate.

  • Use RBS classification to organize source categories, not to measure ripple effects.
  • Use probability-impact scoring for quick prioritization, not for multi-step combined exposure.
  • Use sensitivity analysis to see which variable drives results most, but it does not represent the full distribution of cascading impacts as well as simulation.

When a threat can ripple through connected work, prefer analysis that captures interaction across the whole plan.
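A minimal Monte Carlo sketch shows how one uncertain input can drive several dependent effects whose combined impact only simulation over the whole model captures. The distributions, thresholds, and cost figures below are invented for illustration.

```python
# Illustrative Monte Carlo over a cascading firmware-delay threat.
# All parameters are hypothetical; the structure is the point: one random
# input feeds several dependent schedule and cost effects per trial.
import random

def one_trial():
    firmware_slip = max(0, random.gauss(5, 3))       # days late
    test_delay = firmware_slip                       # testing waits on firmware
    training_compression = max(0, test_delay - 3)    # training absorbs 3 days
    premium_shipping = 20_000 if firmware_slip > 7 else 0
    # Combined cost impact: delay cost + rushed-training cost + shipping
    return test_delay * 4_000 + training_compression * 2_500 + premium_shipping

random.seed(1)
outcomes = sorted(one_trial() for _ in range(10_000))
p80 = outcomes[int(0.8 * len(outcomes))]
print(f"P80 combined cost impact: ${p80:,.0f}")
```

The output is a distribution rather than a single score, which is what lets the team quote a confidence level (for example a P80 exposure) instead of a flattened rating.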


Question 18

Topic: Domain IV: Risk Response

A hybrid project faces a threat that a critical component may arrive late. The agreed contingency is to buy from a backup supplier if the primary vendor slips by 5 days. During response planning, procurement confirms the backup source would raise project cost by 4%. The sponsor’s approved cost risk threshold is 3%, and finance must be notified if any planned response exceeds it. What should the project risk manager do next?

  • A. Reevaluate the contingency now and notify sponsor and finance.
  • B. Wait for the 5-day trigger, then use the backup supplier.
  • C. Transfer the supplier risk to procurement.
  • D. Convert the contingency into a workaround and log it as an issue.

Best answer: A

Explanation: Response planning can show that the response itself changes exposure or crosses an approved threshold, and you do not wait for a trigger once that conflict is known. Here the threat has not occurred, but analysis of the contingency shows a 4% cost impact against the sponsor's approved 3% threshold, so the response must be reevaluated now and the required stakeholders informed immediately rather than after the vendor slips.

  • Confirm whether the sponsor still accepts this contingency.
  • Notify finance because the communication rule has already been met.
  • Update the risk register and any related communication or assumption records.

The closest trap is waiting for the vendor-slip trigger, but the governance threshold has already been crossed by the proposed response.


Question 19

Topic: Domain II: Risk Identification

A project manager is performing assumption analysis for a hybrid CRM rollout. The team has documented:

  • The steering committee approved the release sequence last week.
  • The vendor contract signed on March 10 includes data migration support.
  • Infrastructure testing confirmed the production environment meets capacity needs.
  • Sales managers will free key users for user acceptance testing in July.

Which statement should be treated as an assumption rather than a confirmed project fact?

  • A. Steering committee approved the release sequence last week
  • B. Signed vendor contract includes data migration support
  • C. Key users will be available for July acceptance testing
  • D. Production environment capacity was confirmed by testing

Best answer: C

Explanation: An assumption is something the team is currently treating as true for planning but has not yet verified. In assumption analysis, the team separates verified information from statements that are still uncertain: a confirmed fact is backed by current evidence, such as an approval already given, a contract already signed, or a test result already recorded, while an assumption still depends on future conditions or validation.

Here, the statement about key users being available in July is forward-looking and could change because of operational priorities, staffing limits, or schedule conflicts. That makes it an assumption and a potential source of risk if it proves false. The other statements describe events or evidence that already exist, so they should be treated as facts, not assumptions.

A useful check is simple: if the team can prove it now, it is a fact; if it still depends on something happening later, it is an assumption.


Question 20

Topic: Domain I: Risk Strategy and Planning

A hybrid project manager is tailoring the risk management plan for a hospital data-migration project. During stakeholder interviews, which statement most clearly means the plan must add a measurable risk threshold and escalation rule?

  • A. The operations lead says any risk threatening cutover by more than 5 days must be escalated within 24 hours.
  • B. The sponsor says the organization is cautious about launch-date risk.
  • C. The architect says a failed interface test would warn of integration trouble.
  • D. The vendor manager says current defects are already delaying system testing.

Best answer: A

Explanation: Stakeholder expectations create added planning requirements when they must be translated into explicit rules for handling future uncertainty. The requirement to escalate any risk that could delay cutover by more than 5 days within 24 hours is not just a preference: it defines both a measurable threshold and a required escalation path. The project team should capture that threshold and escalation timing in the risk management plan so risks are assessed and reported consistently.

By contrast, a general statement of caution reflects risk appetite, an early warning sign is a trigger, and a problem already delaying testing is an issue that needs issue management. The key distinction is whether the stakeholder expectation defines a measurable rule for future risk handling.


Question 21

Topic: Domain V: Monitor and Close Risks

A hybrid ERP project has a fixed regulatory go-live date. The risk management plan states that any critical-path schedule variance above 5% must be reported with its effect on overall risk exposure and recommended action. This month, verified data shows integration work moved from -2% to -6% after a vendor delay trigger occurred, and the sponsor has low appetite for date risk. What is the BEST action?

  • A. Send the variance chart now and discuss implications later.
  • B. Wait for another reporting cycle before escalating the variance.
  • C. Update the risk report to explain the date-risk exposure, threshold breach, and response-owner actions.
  • D. Rebaseline the integration work package to remove the breach.

Best answer: C

Explanation: Variance is performance data, not the full risk message; it becomes useful for risk monitoring only when its significance is explained. When verified variance crosses a stated threshold, the risk manager should link it to the relevant risk, explain what the change means for project objectives, and show whether the current response is adequate. Here, integration variance moved from -2% to -6%, past the 5% limit, the vendor-delay trigger has occurred, and the sponsor has low tolerance for any threat to the regulatory go-live date. The best action is to update the risk report so stakeholders see the increased schedule exposure, the breached threshold, and the response-owner actions or escalation needed.

  • Verify the variance data.
  • Link it to the triggered risk.
  • State the effect on overall exposure.
  • Report response status and next action.

Sending variance without significance leaves governance to guess the risk impact.


Question 22

Topic: Domain III: Risk Analysis

A PMI-RMP is deciding which identified uncertainty on a hybrid hospital project should receive quantitative analysis instead of only qualitative analysis. Which situation shows the highest risk complexity?

  • A. A network supplier may deliver one week late, and the schedule has three weeks of float.
  • B. Permit timing uncertainty could shift testing, training, opening, and revenue dates; modeled ranges and response costs are available.
  • C. An equipment vendor may offer an early-order discount that would affect procurement cost only.
  • D. A failed test server has already stopped integration testing for two days.

Best answer: B

Explanation: Before choosing an analysis method, assess whether the risk is simple or complex. Quantitative analysis is justified when uncertainty has major potential impact, interdependencies across several project objectives, and enough numerical information to model likely outcomes. In the permit scenario, one uncertain event can ripple through testing, training, opening, and revenue start, and the stem states that modeled ranges and response-cost data exist, so the team needs forecasted exposure rather than only a qualitative ranking to support response decisions.

By contrast, a one-week supplier delay with ample float is a bounded threat that can usually be handled with qualitative assessment first. A possible vendor discount affects mainly one cost element, so it is a simpler opportunity. A server outage that has already happened is an issue, not a risk for analysis selection. The key discriminator is complexity plus modelable uncertainty.


Question 23

Topic: Domain IV: Risk Response

A hybrid release planned to address deployment outage risk by switching to a standby vendor if the primary vendor failed cutover testing. The primary vendor fails testing, but the standby vendor is unavailable, and no other response was documented. The team creates a temporary phased rollout to reduce business disruption. Which concept best matches the phased rollout?

  • A. A contingency plan
  • B. A fallback plan
  • C. A workaround
  • D. An issue escalation

Best answer: C

Explanation: Workarounds are unplanned responses created after a risk event has occurred, when the intended response cannot be used or does not adequately control the impact. Here the team did have a planned response, switching to a standby vendor, but that option was unavailable and no other documented alternative existed, so inventing a phased rollout at that moment is a workaround. A contingency plan or fallback plan would have been prepared in advance and linked to trigger conditions in the risk register. Issue escalation may still happen if the team lacks authority, but escalation is not the name of the improvised response itself. The key clue is that the team had to devise a new action in real time.


Question 24

Topic: Domain II: Risk Identification

During a risk identification workshop for a public-sector software rollout, the team is classifying new risks by origin in the risk register. Which risk should be tagged as having an external origin?

  • A. The sponsor may defer a key scope decision.
  • B. The regulator may revise data-retention rules.
  • C. The team may underestimate migration effort.
  • D. The test environment may not be ready on time.

Best answer: B

Explanation: Risk origin identifies where the uncertainty comes from, and recording it in the risk register alongside the risk itself helps with analysis, ownership, and response planning. An external-origin risk comes from outside the project's direct control, such as regulators, suppliers, market conditions, or weather; an internal-origin risk comes from the project team, governance, estimates, resources, methods, or internal infrastructure.

A possible revision to data-retention rules by a regulator is external because the uncertainty is created by an outside authority. The other choices come from internal decision-making, team estimating, or internal readiness. The key distinction is the source of the uncertainty, not how large the impact might be.

Official sources

What to open next

  • Need the broader PMI leadership route? Open PMP.
  • Need the scheduling specialist route instead? Open PMI-SP.
  • Need the AACE risk route instead? Open AACE PRMP or AACE DRMP.
  • Need the broader PMI family map? Open the PMI hub.

Focused sample questions

Use these child pages when you want focused PM Mastery practice before returning to mixed sets and timed mocks.

In this section

Revised on Friday, May 15, 2026