Practice PMI-RMP with free sample questions, timed mock exams, and detailed explanations in PM Mastery.
PMI-RMP is PMI’s risk-management certification for practitioners who turn uncertainty into clearer decisions through thresholds, analysis, response planning, and disciplined monitoring. If you are searching for PMI-RMP sample questions, a practice test, a mock exam, or a simulator, this is the main PM Mastery page: start on the web, then continue on iPhone or Android with the same PM Mastery account.
Use this page when risk management is a primary professional responsibility rather than one knowledge area inside a broader PM credential. PMI-RMP is the stronger fit when you need deeper practice with risk appetite, thresholds, identification, qualitative and quantitative analysis, responses, triggers, residual risk, secondary risk, reporting, and closure.
Start a practice session for PMI Risk Management Professional (PMI-RMP) below, or open the full app in a new tab for the best experience and navigate with swipes, gestures, or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.
Use it on iPhone or Android too: get PM Mastery on the App Store or on Google Play and sign in with the same account you use on web. One PM Mastery subscription works across web and mobile.
Free diagnostic: Try the 115-question PMI-RMP full-length practice exam before subscribing. Use it as one risk-decision baseline, then return to PM Mastery for timed mocks, topic drills, explanations, and the full PMI-RMP question bank.
For the latest official exam details and requirements, see: https://www.pmi.org/certifications/risk-management-rmp
The snapshot below summarizes PMI’s current PMI-RMP certification page and the PMI-RMP Exam Content Outline and Specifications, updated May 2022. Check PMI directly before booking because exam details can change.
Official source check: Last checked May 5, 2026 against PMI’s public PMI-RMP certification page.
PMI’s public page lists 115 questions, 150 minutes, English/Arabic/Chinese (Simplified) language availability, and the five weighted risk domains used below. Confirm current appointment rules and eligibility directly with PMI before booking.
PMI-RMP questions usually reward the option that makes risk work more decision-ready by clarifying thresholds, strengthening analysis quality, and assigning actionable responses rather than documenting uncertainty passively.
| Domain | Weight | Target scored items (out of 100) |
|---|---|---|
| Risk Strategy and Planning | 22% | 22 |
| Risk Identification | 23% | 23 |
| Risk Analysis | 23% | 23 |
| Risk Response | 13% | 13 |
| Monitor and Close Risks | 19% | 19 |
Strategy, identification, and analysis together drive most of the exam. The simulator reflects that by giving you repeated practice with thresholds, assumptions, stakeholder risk attitudes, risk data quality, exposure trends, and the next best risk-management action.
Use these filters when two answers both sound reasonable. PMI-RMP questions often reward the answer that turns uncertainty into a governed decision, not the answer that simply names a risk artifact.
| Scenario signal | First check | Strong answer usually… | Weak answer usually… |
|---|---|---|---|
| Stakeholders disagree about exposure | Which threshold, appetite, or tolerance applies? | Reconnects the discussion to approved risk criteria and documents the decision basis. | Treats the loudest stakeholder as the decision rule. |
| A new threat or opportunity appears | Is the risk statement clear enough to analyze? | Clarifies cause, event, effect, owner, assumptions, and timing before choosing a response. | Jumps straight to mitigation without defining the risk. |
| Data quality is weak | Can the team trust the analysis enough to act? | Improves assumptions, ranges, evidence, expert judgment, or sensitivity analysis before overcommitting. | Presents a precise number that hides uncertainty. |
| A response creates new exposure | What residual or secondary risk remains? | Updates ownership, triggers, contingency, reserves, and reporting after the response choice. | Assumes the response closes the risk permanently. |
| A risk is materializing | Is this still a risk, or has it become an issue? | Uses triggers, contingency plans, escalation paths, and issue management appropriately. | Keeps monitoring after action is already required. |
| Reporting pressure increases | What does the decision maker need next? | Communicates trend, exposure, response status, threshold breach, and recommended action. | Sends a long register extract without decision context. |
| If you are deciding between… | Main distinction |
|---|---|
| PMI-RMP vs PMP | PMI-RMP is specialist risk depth; PMP is broad project leadership. |
| PMI-RMP vs PMI-SP | PMI-RMP is risk focused; PMI-SP is schedule focused. |
| PMI-RMP vs PMI-PMOCP | PMI-RMP is uncertainty and response judgment; PMI-PMOCP is PMO governance and operating-model depth. |
| PMI-RMP vs AACE PRMP or DRMP | PMI-RMP is PMI’s risk route; AACE PRMP is AACE’s project-risk route and AACE DRMP adds decision analysis, cost risk, schedule risk, and uncertainty depth. |
| If your target is closest to… | Best page | Why |
|---|---|---|
| PMI specialist project risk management | PMI-RMP | Best fit when your exam target is PMI’s risk credential. |
| General PMI project leadership | PMP | Better fit when risk is one part of a broader PM leadership exam. |
| Specialist scheduling and schedule control | PMI-SP | Better fit when schedule model quality, progress, and schedule control are the main target. |
| PMO governance and operating model | PMI-PMOCP | Better fit when your work centers on PMO design, oversight, governance, and service delivery. |
| Project-controls risk or decision analysis | AACE PRMP or AACE DRMP | Better fit when your market expects AACE project-controls risk credentials. |
Use this map after each missed question. The fastest improvement usually comes from identifying the risk-management decision pattern behind the miss.
| Domain | What the exam is really testing | What PM Mastery practice should force you to decide | Common wrong-answer trap |
|---|---|---|---|
| Risk Strategy and Planning | Whether the risk approach is aligned with objectives, appetite, governance, and stakeholder expectations | Which thresholds, roles, methods, reserves, reporting rules, and escalation paths make the risk process usable | Writing a generic risk plan that does not guide decisions |
| Risk Identification | Whether the team finds meaningful threats and opportunities early enough to act | How to form clear risk statements, use the right identification technique, and include the right stakeholders | Listing symptoms, issues, or vague concerns as if they were actionable risks |
| Risk Analysis | Whether exposure is assessed with enough quality to support decisions | When to use qualitative ranking, quantitative analysis, sensitivity, scenarios, expected value, or expert judgment | Treating a precise-looking number as reliable when assumptions are weak |
| Risk Response | Whether the chosen response changes exposure in a useful and governed way | Who owns the response, what trigger activates it, what residual risk remains, and what secondary risk appears | Choosing a response label without checking cost, timing, ownership, or side effects |
| Monitor and Close Risks | Whether risk work stays current as the project changes | When to update, escalate, close, transfer to issue management, or communicate trend changes | Leaving stale risks open or closing risks before response evidence exists |
Use this once you already understand the five domains and need to convert practice into exam readiness.
| Window | What to do | What not to do |
|---|---|---|
| Days 7-5 | Take one mixed timed set or the free full-length diagnostic, then classify misses by strategy, identification, analysis, response, monitoring, or timing. | Do not count a reviewed explanation as fixed until you can name the decision rule you missed. |
| Days 4-3 | Drill the weakest domain, especially identification-analysis handoffs and response ownership if your misses are scenario-based. | Do not chase random volume if the same risk-decision pattern keeps failing. |
| Days 2-1 | Take a shorter mixed timed set and review only recurring traps: vague risk statements, weak assumptions, stale registers, threshold breaches, and residual/secondary risk. | Do not start a new large question run late enough that fatigue becomes the main result. |
| Exam day | Read for the required action, identify whether the prompt is asking about risk, issue, response, escalation, or communication, then eliminate answers that bypass governance. | Do not pick an answer just because it uses a familiar artifact name. |
The goal is not to memorize every possible risk-management wording. The goal is to build transferable judgment so you can handle new uncertainty scenarios under time pressure.
If you can complete several varied timed attempts at 75% or higher, explain why the best answers are stronger than the tempting alternatives, and consistently identify residual risk, secondary risk, triggers, owners, and escalation rules, it is usually time to schedule or sit the real exam rather than repeating recognized questions indefinitely. More practice still helps when it targets a weak domain, but repeating questions you already remember can inflate confidence without improving risk judgment.
If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.
These sample questions cover the five PMI-RMP domains modeled in PM Mastery. Use them to check your readiness here, then continue in PM Mastery with mixed sets, topic drills, and timed mocks.
These are original PM Mastery practice questions. They are not PMI exam items, are not copied from any exam sponsor, and should be used to practice risk-decision patterns rather than memorize exact wording.
Topic: Domain V: Monitor and Close Risks
A predictive data-center migration project has a rule in its risk management plan: if an approved risk response moves a baseline milestone by more than 10 days, the decision must be handled through change control. A Monte Carlo reforecast shows the main schedule risk now pushes the P80 completion date 14 days beyond the baseline. The steering committee approves adding a test cycle and moving the milestone. What should the project manager update next?
Best answer: A
Explanation: The Monte Carlo result exceeds the project’s explicit 10-day threshold, and governance approved a response that moves a baseline milestone. That means the decision now affects project control, so the project management plan must be updated and the approval recorded in the change log. When monitoring risks, analysis outputs such as a Monte Carlo forecast are used to see whether exposure has crossed a control threshold. Here, the P80 date is 14 days beyond the baseline, which is greater than the stated 10-day rule, and the steering committee has already approved a response that changes the milestone.
Once a risk decision changes how the project will be controlled, the outcome must be reflected in controlled project documents. That means updating the relevant part of the project management plan and recording the approved decision in the change log. Risk artifacts may also be refreshed, but they do not replace formal control-document updates.
The key distinction is that this is not just new risk information; it is an approved change affecting project control.
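The threshold check in this explanation can be sketched as a small simulation. This is a minimal illustration, not PMI material: the triangular duration model, the seed, and every number below are invented assumptions chosen only to show how a P80 forecast gets compared against a change-control threshold.

```python
import random

random.seed(42)

BASELINE_DAYS = 180   # illustrative baseline completion, in days from start
THRESHOLD_DAYS = 10   # the risk plan's change-control trigger

def simulate_completion() -> float:
    # Hypothetical triangular duration model for the risky work
    return random.triangular(low=175, high=205, mode=188)

runs = sorted(simulate_completion() for _ in range(10_000))
p80 = runs[int(0.8 * len(runs))]   # 80th-percentile completion date

slip = p80 - BASELINE_DAYS
if slip > THRESHOLD_DAYS:
    print(f"P80 slip of {slip:.1f} days breaches the "
          f"{THRESHOLD_DAYS}-day threshold: route through change control")
else:
    print(f"P80 slip of {slip:.1f} days is within threshold: keep monitoring")
```

With these assumed inputs the P80 slip lands well past the 10-day rule, which mirrors the question: the analysis output alone does not decide anything, but crossing the documented threshold routes the decision into change control.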
Topic: Domain III: Risk Analysis
A project risk manager plots aggregate risk exposure scores from the last five monthly reviews—42, 48, 55, 61, and 66—to determine whether overall project exposure is increasing or decreasing. Which risk analysis method is being used?
Best answer: C
Explanation: Trend analysis uses risk data from multiple time periods to detect direction of change. Because the manager is comparing successive exposure scores to see whether project risk is rising or falling, this is trend analysis. Trend analysis examines risk metrics over time to identify patterns such as increasing, decreasing, or stable exposure. In the stem, the manager is not trying to rank a single risk, model a range of possible outcomes, or find the biggest driver of uncertainty. The goal is to compare repeated exposure measurements across monthly reviews and determine the direction of overall project risk exposure.
The key distinction is that trend analysis answers a time-based question: is exposure getting better or worse?
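The time-based comparison above can be shown in a few lines. The five scores come from the question stem; the simple "all deltas positive" rule is an illustrative sketch of trend detection, not a prescribed PMI method.

```python
# Monthly aggregate exposure scores from the question stem
scores = [42, 48, 55, 61, 66]

# Period-over-period changes between successive reviews
deltas = [b - a for a, b in zip(scores, scores[1:])]

if all(d > 0 for d in deltas):
    trend = "increasing"
elif all(d < 0 for d in deltas):
    trend = "decreasing"
else:
    trend = "mixed"

print(trend, deltas)  # exposure rose at every review period
```

Every delta is positive, so the direction-of-change question the manager is asking has a clear answer: overall exposure is increasing.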
Topic: Domain IV: Risk Response
A project team selected a mitigation strategy for a vendor-capacity risk by prequalifying a backup supplier. One month later, the risk owner reviews actual lead-time data and compares the remaining schedule exposure with the project’s risk threshold to determine whether the strategy worked as intended. Which concept does this describe?
Best answer: A
Explanation: This is assessment of response effectiveness because the risk owner is checking evidence that the mitigation changed exposure as intended. Comparing actual lead-time results with the risk threshold goes beyond action completion and tests whether the strategy is working. In PMI-RMP terms, assessing response effectiveness means evaluating whether an implemented response produced the expected change in risk exposure. A mitigation response is not considered effective just because the action was performed; it must show evidence that the threat’s probability, impact, or overall exposure was reduced to an acceptable level or improved as planned. In the stem, the risk owner uses actual lead-time data and compares the remaining schedule exposure with the stated threshold, which is exactly how effectiveness should be judged.
Simply reporting status or watching for triggers does not by itself prove the response worked.
Topic: Domain II: Risk Identification
A public-sector data migration project must go live by September 30 to meet a regulation. Organizational policy requires the team to use a shared performance test environment that is available only on Fridays. During constraint analysis, which statement best connects this constraint to the project objective affected by the resulting risk?
Best answer: C
Explanation: The best choice converts the constraint into a risk statement tied to a project objective. Limited test-environment access is the constraint, possible testing delay is the risk, and the fixed go-live date shows that schedule is the objective affected. Constraint analysis identifies how a project restriction can create uncertainty for one or more objectives. In this scenario, access to the performance test environment only on Fridays is a constraint. The resulting risk is that testing may take longer than planned, and the affected objective is schedule because the project has a fixed regulatory go-live date.
A strong risk statement links three things clearly:
- the constraint or other cause,
- the uncertain event it could produce, and
- the project objective that would be affected.
Statements framed as accepted assumptions do not identify risk exposure. Statements saying the delay has already occurred describe an issue, not a risk. Statements proposing outsourcing are response ideas, which come after the risk has been identified clearly.
Topic: Domain I: Risk Strategy and Planning
A hybrid product-release project has an approved risk strategy: the organization has a high appetite for technical experimentation, but any forecast of more than 7% schedule slippage on a committed release must be escalated for response review. During planning, the product owner wants to keep an unstable interface because “our appetite supports innovation.” Current risk analysis forecasts 9% schedule slippage if the interface is retained. What should the risk manager emphasize to align stakeholders with the agreed strategy?
Best answer: D
Explanation: The best choice uses the agreed decision rule in the risk strategy. Risk appetite describes general willingness to accept uncertainty, but the threshold is the actionable limit; since the forecasted slippage exceeds 7%, stakeholders should escalate and review responses. To lead stakeholders to adopt an agreed risk strategy, anchor the discussion in the documented rule that governs action. Risk appetite is a broad statement about how much uncertainty the organization is generally willing to accept, while a risk threshold converts that attitude into a specific decision point. In this case, stakeholders may support experimentation, but the forecasted 9% schedule slippage exceeds the approved 7% threshold, so the agreed strategy calls for escalation and response review.
Waiting for a trigger confuses a warning sign with a decision limit, and calling the uncertainty an issue is incorrect because the harmful event has not occurred. The right leadership move is to steer the group back to the preapproved threshold so decisions follow the strategy rather than personal preference.
Topic: Domain V: Monitor and Close Risks
A hybrid ERP project used an offshore data-migration vendor to reduce a schedule threat. That response created a recorded secondary risk: handoff delays could keep critical defects open past the release window. The risk register names the vendor manager as owner, sets a trigger of more than four critical defects older than 48 hours, and the sponsor’s threshold is no more than a 3-day release slip. In the last two releases, the trigger was breached and the planned daily overlap meeting was not implemented consistently. What should the project manager do next?
Best answer: B
Explanation: This is a monitoring problem, not a new planning exercise. The secondary risk already has an owner, trigger, and response, and current data shows both a breached trigger and weak response implementation, so the project manager should drive execution and update risk information now. Secondary risks must be monitored after a primary response creates them. In this case, the secondary risk was already identified and documented with a named owner, a trigger, and a planned response. Monitoring data now shows two important facts: the trigger has been breached in two releases, and the response action itself has not been implemented consistently. That means the project manager should focus on response execution and current exposure by ensuring the owner carries out the planned action and by updating the risk register and risk reporting.
Waiting for another review delays action despite a clear trend, and closing the risk ignores active exposure. Managing only the open defects treats part of the situation as issues but misses the still-uncertain risk of release delay.
Topic: Domain III: Risk Analysis
During qualitative risk analysis for a hospital billing-system upgrade, the team reviews this threat: the vendor may deliver the claims rules engine 3 weeks late. Project objectives are to go live by July 1, stay within budget, and meet a new billing regulation effective that same day. Which impact statement best reflects an objective-level impact?
Best answer: A
Explanation: Objective-level impact statements describe how a risk could affect a project objective such as schedule, cost, scope, quality, or compliance. The best choice connects the late vendor delivery to missing the regulatory date and increasing project cost. In risk analysis, impact should be expressed against project objectives, not as a vague concern or stakeholder preference. Here, the risk event is a late delivery from the vendor. The meaningful impact is whether that delay affects the project’s required outcomes: the July 1 go-live, compliance with the new regulation, and budget performance. The keyed statement does exactly that by linking the threat to a missed compliance date and extra testing cost. By contrast, doubts about the vendor, reluctance to move training, or user dissatisfaction may be relevant concerns, but they are not objective-level impacts unless they are translated into measurable effects on project objectives.
Topic: Domain IV: Risk Response
On a hybrid CRM rollout, the risk register names the integration manager as risk owner and the vendor architect as response action owner for a threat that middleware defects could delay go-live. The steering committee has low appetite for schedule delay. Governance says risks with residual delay above 10 days remain open, and only above 20 days are escalated. The prototype mitigation was completed, but testing still shows a likely 15-day delay. What should the project manager do?
Best answer: D
Explanation: Completing a response action does not close or transfer the risk itself. Because the residual exposure is still above the 10-day threshold, the risk stays open and the named risk owner must reassess remaining exposure and determine next steps. This scenario tests the difference between overall risk accountability and task accountability. The risk owner is responsible for monitoring the full risk, evaluating whether the response reduced exposure enough, and deciding whether more response, continued monitoring, or escalation is needed. The response action owner is responsible only for executing a specific assigned action. Here, the prototype mitigation was completed, but the remaining threat is still a likely 15-day delay. That is above the stated 10-day threshold, so the risk should remain open, but it is below the 20-day escalation trigger. The right action is to record the mitigation task as completed and have the integration manager, as risk owner, reassess residual exposure and update the response. Finishing a response task is not the same as owning or closing the risk.
Topic: Domain II: Risk Identification
A hybrid project team is identifying risks for a product release. Which situation should be recorded as one risk that includes both threat and opportunity aspects?
Best answer: B
Explanation: A risk has both threat and opportunity aspects when the same uncertain condition can help one objective and hurt another. Using the new API could accelerate integration yet increase rework, and it is still uncertain, so it fits that mixed classification. The key distinction is whether one uncertain event or condition can produce both beneficial and adverse effects. Adopting the vendor’s new API is still uncertain, so it is an individual risk, and it may improve schedule by speeding integration while also threatening schedule, cost, or quality through extra rework. That makes it a risk with both opportunity and threat aspects. By contrast, the failed security review is no longer uncertain, so it is an issue. The possible supplier discount is upside only, which makes it an opportunity. The possible customs delay is downside only, which makes it a threat. Look for one uncertainty with impacts in both directions.
Topic: Domain I: Risk Strategy and Planning
A hybrid project includes a predictive vendor-led network rollout and an agile team delivering customer features in 2-week sprints. The risk manager is updating the risk management plan and must set a review cadence and evidence sources that fit each delivery approach. Which tailoring is best?
Best answer: B
Explanation: Risk-planning cadence should match how uncertainty appears in the delivery approach. Agile work needs frequent review using current iteration evidence, while predictive work can align to milestones or phases using baseline and vendor evidence. In hybrid delivery, both rhythms should be planned and then rolled up into a common view. In Risk Strategy and Planning, the risk management plan should be tailored to the delivery approach, not forced into one generic review pattern. Agile teams learn and reprioritize quickly, so risk reviews should happen at a short cadence such as each sprint, using evidence like backlog change, dependency status, and recent iteration results. Predictive work has baselines, milestones, and contractual checkpoints, so risk reviews can align to those control points and use evidence such as schedule variance, milestone status, and vendor commitments. A hybrid project needs both cadences and both evidence types, then an integrated risk report to show overall exposure.
Using one cadence or waiting for advanced analysis would weaken planning quality.
Topic: Domain V: Monitor and Close Risks
On a hybrid ERP rollout, integration testing is 12% behind baseline. The risk management plan says any testing delay greater than 10% is a trigger to reassess the risk of missing the release window. The sponsor asks for the weekly risk update. What is the best way to report this variance?
Best answer: C
Explanation: Variance data alone is not enough in risk monitoring. Because the 12% testing delay exceeds the 10% trigger, the update should explain that release-window exposure has increased and note the current response status. In risk monitoring, variance becomes useful only when it is connected to risk significance. Here, the testing delay is not just a schedule metric; it breached a predefined trigger in the risk management plan. That means the related risk of missing the release window should be reassessed and communicated in terms of changed exposure and current response progress. A strong risk update turns performance data into decision-ready information for stakeholders.
Simply stating that testing is 12% behind describes performance, but it does not explain what that means for project risk.
Topic: Domain III: Risk Analysis
During qualitative risk analysis for a hybrid CRM project, the team uses this excerpt from the risk management plan:
Qualitative classification rules
- Score each risk as Probability (1-5) × highest impact on one project objective (1-5)
- Priority bands: 1-5 Low, 6-14 Medium, 15-25 High
- If the trigger is expected within 30 days, urgency may raise one band
- Stakeholder risk appetite guides response discussions only
Which proposed classification is INCONSISTENT with the plan?
Best answer: B
Explanation: The inconsistent classification is the one that labels a score of 6 as High only because the sponsor has low risk appetite. The plan requires classification by probability-impact bands, with urgency as the only stated reason to move a risk up one band. Qualitative risk classification must follow the criteria defined in the risk management plan. In this scenario, the plan says to classify risks by multiplying probability and the highest single-objective impact, then mapping that score to Low, Medium, or High. It also allows urgency to raise a risk one band when the trigger is expected within 30 days. The plan explicitly separates stakeholder risk appetite from classification, reserving appetite for later response discussions. That means a score of 6 cannot be reclassified as High just because a sponsor dislikes cost overruns; that would replace the approved classification method with an ad hoc preference. The key point is to use the plan’s thresholds consistently and not substitute other risk concepts for classification rules.
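The plan excerpt above is mechanical enough to express as a tiny function. The band boundaries and the urgency rule come straight from the excerpt; the function name and example scores are invented for illustration. Note what is deliberately missing: stakeholder appetite is not an input.

```python
def priority_band(probability: int, impact: int, urgent: bool = False) -> str:
    """Classify a risk per the plan: P (1-5) x highest impact (1-5)."""
    score = probability * impact
    if score <= 5:
        band = "Low"
    elif score <= 14:
        band = "Medium"
    else:
        band = "High"
    # Urgency (trigger expected within 30 days) may raise one band.
    # Appetite guides response discussions only, so it never appears here.
    if urgent:
        band = {"Low": "Medium", "Medium": "High"}.get(band, band)
    return band

print(priority_band(2, 3))               # score 6: Medium, regardless of appetite
print(priority_band(2, 3, urgent=True))  # urgency is the only lift to High
```

Reclassifying the score-6 risk as High because of sponsor appetite would mean bypassing this function entirely, which is exactly what makes that option inconsistent with the plan.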
Topic: Domain IV: Risk Response
A hybrid product launch project has an identified risk that an external data feed may arrive late. If the file is more than 48 hours late, the agreed contingency is to load a reduced internal dataset. During the current sprint, the shared test environment has already failed, and the team can use a cloud sandbox as a workaround. On this project, risk owners monitor exposure, issue owners manage active issues, and action owners execute agreed response actions. Which responsibility assignment is most appropriate?
Best answer: D
Explanation: The late data feed is still uncertain, so its contingency should be executed by an assigned action owner if the trigger occurs. The failed test environment is already an issue, so the workaround belongs under issue management, not risk ownership. Match the responsibility to both the type of event and the type of role. A contingency is a preplanned response for a risk that may occur later, so the risk owner continues to monitor the exposure and trigger, while an action owner is assigned to carry out the contingency when needed. A workaround is different because the event has already happened; it is now an issue that needs immediate management. That makes the issue owner responsible for executing or coordinating the workaround. Assigning both items to a sponsor, project manager, or a single risk owner blurs the distinction between monitoring risk and resolving an active issue. The key discriminator is risk versus issue, then execution responsibility versus oversight.
Topic: Domain II: Risk Identification
A hybrid ERP rollout relies on several documented assumptions: the vendor API will remain stable, key users will be available for UAT, and infrastructure changes will be approved within 10 days. The team already has a schedule-cost model and wants to know which failed assumption would drive the biggest combined effect on project objectives. Which analysis approach is best?
Best answer: D
Explanation: Sensitivity analysis is best when several uncertain assumptions could affect multiple objectives and the team wants to know which one matters most. Here, the goal is to identify the strongest driver of schedule and cost impact, not just categorize or generally rate the risks. Assumption analysis starts by testing whether key planning assumptions could be false and what risks would result. When several assumptions are already identified and the team wants to know which failure would most strongly change project outcomes, sensitivity analysis is the best fit because it shows which uncertain input drives variation in objectives such as schedule and cost.
In this scenario, the project already has a schedule-cost model, so the team can examine how changes in each assumption affect overall results. That directly reveals the cascade from an assumption failure to project-objective impacts and helps focus response planning on the most influential assumption. A simple probability-impact rating is useful for prioritization, but it does not isolate the dominant driver across modeled assumptions.
Topic: Domain I: Risk Strategy and Planning
A hybrid data-platform rollout is defining its risk management plan. The governance board wants one monthly metric that shows whether overall project risk exposure is increasing or decreasing so it can decide when to add contingency funding or direct stronger responses. The risk register already records probability and impact scores for each risk. Which metric should the team define?
Best answer: D
Explanation: The needed metric must show overall exposure, not just volume, classification, or activity. Tracking the trend of total weighted risk exposure uses the existing probability-impact data to support monitoring and management decisions. When selecting risk metrics, the best choice is the one that directly supports the decision being made. In this case, leadership wants to know whether overall project risk exposure is trending up or down so it can adjust contingency and response intensity. Because the risk register already contains probability and impact scores, the team can aggregate those values and monitor the total over time. That provides a project-level view of changing exposure across review periods. A simple count of risks ignores severity, RBS grouping shows source categories rather than exposure level, and response completion tracks effort rather than effectiveness. A trend in total weighted exposure is the most decision-useful metric here because it links risk data to action.
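The metric in this explanation is simple to compute from register data the project already holds. The three monthly registers below are invented probability-impact pairs used only to show the aggregation and the month-over-month comparison.

```python
# Hypothetical register snapshots: (probability, impact) per risk
monthly_registers = {
    "Jan": [(0.5, 8), (0.2, 5), (0.7, 3)],
    "Feb": [(0.5, 8), (0.4, 5), (0.7, 3)],
    "Mar": [(0.3, 8), (0.4, 5), (0.2, 3)],
}

# Total weighted exposure = sum of probability x impact across the register
totals = {month: sum(p * i for p, i in risks)
          for month, risks in monthly_registers.items()}

months = list(totals)
for prev, cur in zip(months, months[1:]):
    direction = "up" if totals[cur] > totals[prev] else "down"
    print(f"{prev} -> {cur}: {totals[prev]:.1f} -> {totals[cur]:.1f} ({direction})")
```

A plain risk count would not move when February's second risk doubled in probability, but the weighted total does, which is why the trend of total exposure is the decision-useful metric for the board.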
Topic: Domain V: Monitor and Close Risks
A hybrid claims-system project is entering closeout. Overall project risk exposure has declined for three months, eight risks have passed their trigger dates, and two responses reduced exposure below the project’s threshold. One residual integration risk remains above threshold and will transfer to operations. The sponsor has low tolerance for unresolved exposure and wants a credible final risk narrative. What is the BEST action?
Best answer: A
Explanation: Risk closure should be evidence-based, not automatic at project end. A trustworthy end-of-project risk story distinguishes truly closed risks from remaining exposure. Here, several risks can be closed because their triggers expired or their responses reduced exposure below the agreed threshold, but the residual integration risk is still above threshold and the sponsor has low tolerance for unresolved exposure. The project should therefore document closure evidence in the risk register, capture response effectiveness and lessons learned for organizational learning, and keep the above-threshold residual risk explicitly visible in the final risk report and the handoff to operations.
The closest distractor mentions the residual risk, but it omits the closure evidence and learning needed for a credible final narrative.
Topic: Domain III: Risk Analysis
In a hybrid product launch, a delay in supplier firmware could postpone system testing, compress end-user training, and force premium shipping for launch hardware. The risk team believes a simple probability-impact score understates this threat because the downstream effects may combine across schedule and cost. Which analysis approach best evaluates this cascading complexity?
Best answer: D
Explanation: Cascading complexity means one uncertain event can propagate into additional impacts or secondary risks across several project objectives, not just one isolated consequence. Here, the firmware delay can affect testing, training, and shipping, so a single probability-impact rating would flatten the chain of consequences. Monte Carlo simulation is the best fit because it evaluates combined uncertainty across the integrated schedule-cost model and estimates the range of total project outcomes when those downstream effects interact and accumulate.
When a threat can ripple through connected work, prefer analysis that captures interaction across the whole plan.
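A minimal Monte Carlo sketch of such a cascade, assuming hypothetical distributions and cost figures for the firmware-delay chain (none of these numbers come from the scenario):

```python
import random

# Monte Carlo over an integrated chain: one uncertain firmware delay
# propagates into testing slip, training compression, and premium shipping.
# All distributions and cost figures are hypothetical illustrations.

random.seed(42)  # reproducible runs

def one_trial():
    firmware_delay = random.triangular(0, 20, 5)       # days (low, high, mode)
    testing_slip = firmware_delay * 0.8                # days pushed into testing
    training_compression_cost = max(0, testing_slip - 5) * 2_000
    premium_shipping = 15_000 if firmware_delay > 10 else 0
    schedule_days = firmware_delay + testing_slip
    cost = training_compression_cost + premium_shipping
    return schedule_days, cost

trials = [one_trial() for _ in range(10_000)]

# A percentile of the simulated cost distribution captures the combined
# downstream effects that a single probability-impact score would flatten.
p80_cost = sorted(cost for _, cost in trials)[int(0.8 * len(trials))]
```

The point is structural: because each trial runs the whole chain, the output distribution reflects how the schedule and cost consequences combine, rather than rating the threat once in isolation.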
Topic: Domain IV: Risk Response
A hybrid project faces a threat that a critical component may arrive late. The agreed contingency is to buy from a backup supplier if the primary vendor slips by 5 days. During response planning, procurement confirms the backup source would raise project cost by 4%. The sponsor’s approved cost risk threshold is 3%, and finance must be notified if any planned response exceeds it. What should the project risk manager do next?
Best answer: A
Explanation: Response planning can itself reveal that a planned action changes exposure or crosses an approved threshold. In risk response, you do not wait for a trigger if analysis already shows that the planned action conflicts with approved thresholds or communication rules. Here, the threat has not occurred, but the contingency carries a 4% cost impact against the sponsor’s approved 3% threshold. The response must therefore be reevaluated now, and the required stakeholders, including finance, must be informed immediately.
The closest trap is waiting for the vendor-slip trigger, but the governance threshold has already been crossed by the proposed response.
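The governance check itself is a simple comparison made during planning, before any trigger fires. A minimal sketch using the scenario’s 3% and 4% figures (the function name and notification flag are hypothetical):

```python
# Compare a planned response's own cost impact against the approved
# threshold during response planning, before the trigger occurs.
# Function and variable names are illustrative.

COST_RISK_THRESHOLD_PCT = 3.0      # sponsor-approved cost risk threshold
planned_response_impact_pct = 4.0  # backup-supplier premium per procurement

def requires_notification(impact_pct, threshold_pct=COST_RISK_THRESHOLD_PCT):
    """Finance must be notified if a planned response exceeds the threshold."""
    return impact_pct > threshold_pct

# True: the response itself breaches governance rules, so reevaluate and
# notify now rather than waiting for the 5-day vendor slip.
escalate_now = requires_notification(planned_response_impact_pct)
```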
Topic: Domain II: Risk Identification
A project manager is performing assumption analysis for a hybrid CRM rollout. The team has documented:
Which statement should be treated as an assumption rather than a confirmed project fact?
Best answer: C
Explanation: An assumption is something the team is currently treating as true for planning but has not yet verified. In assumption analysis, the team separates verified information, backed by current evidence such as an approval already given, a contract already signed, or a test result already recorded, from planning bases that may be true but still depend on future conditions or validation.
Here, the statement about key users being available in July is forward-looking and could change because of operational priorities, staffing limits, or schedule conflicts. That makes it an assumption and a potential source of risk if it proves false. The other statements describe events or evidence that already exist, so they should be treated as facts, not assumptions.
A useful check is simple: if the team can prove it now, it is a fact; if it still depends on something happening later, it is an assumption.
Topic: Domain I: Risk Strategy and Planning
A hybrid project manager is tailoring the risk management plan for a hospital data-migration project. During stakeholder interviews, which statement most clearly indicates that the plan must add a measurable risk threshold and escalation rule?
Best answer: A
Explanation: Stakeholder expectations create added planning requirements when they must be translated into explicit rules for handling future uncertainty. In risk strategy and planning, such expectations matter most when they establish how risks will be judged, prioritized, and escalated. A requirement to escalate, within 24 hours, any risk that could delay cutover by more than 5 days is not just a preference: it defines both a measurable threshold and a required escalation path. The project team should capture that threshold and escalation timing in the risk management plan so risks are assessed and reported consistently.
By contrast, a general statement of caution reflects risk appetite, an early warning sign is a trigger, and a problem already delaying testing is an issue that needs issue management. The key distinction is whether the stakeholder expectation defines a measurable rule for future risk handling.
Topic: Domain V: Monitor and Close Risks
A hybrid ERP project has a fixed regulatory go-live date. The risk management plan states that any critical-path schedule variance above 5% must be reported with its effect on overall risk exposure and recommended action. This month, verified data shows integration work moved from -2% to -6% after a vendor delay trigger occurred, and the sponsor has low appetite for date risk. What is the BEST action?
Best answer: C
Explanation: Variance is performance data, not the full risk message; it is only useful for risk monitoring when its significance is explained. When verified variance crosses a stated threshold, the risk manager should explain what that change means for project objectives, link it to the relevant risk, and show whether the current response is adequate. Here, integration variance moved past the 5% limit, the vendor-delay trigger has occurred, and the sponsor has low tolerance for any threat to the regulatory go-live date. The best action is to update the risk report so stakeholders see the increased schedule exposure, the breached threshold, and the response-owner actions or escalation needed.
Sending variance without significance leaves governance to guess the risk impact.
Topic: Domain III: Risk Analysis
A PMI-RMP is deciding which identified uncertainty on a hybrid hospital project should receive quantitative analysis instead of only qualitative analysis. Which situation shows the highest risk complexity?
Best answer: B
Explanation: Before choosing an analysis method, assess whether the risk is simple or complex. Quantitative analysis is justified when a risk materially affects several objectives, has interdependencies across them, and can be modeled with credible numerical data. The permit-timing uncertainty meets all three tests: one uncertain event can ripple through testing, training, opening, and revenue start, and the stem states that usable ranges and response-cost data exist. The project team therefore needs more than a qualitative ranking; it needs forecasted exposure to support response decisions.
By contrast, a one-week supplier delay with ample float is a bounded threat that can usually be handled with qualitative assessment first. A possible vendor discount affects mainly one cost element, so it is a simpler opportunity. A server outage that has already happened is an issue, not a risk for analysis selection. The key discriminator is complexity plus modelable uncertainty.
Topic: Domain IV: Risk Response
A hybrid release team planned to address deployment outage risk by switching to a standby vendor if the primary vendor failed cutover testing. The primary vendor fails testing, but the standby vendor is unavailable, and no other response was documented. The team creates a temporary phased rollout to reduce business disruption. Which concept best matches the phased rollout?
Best answer: C
Explanation: Workarounds are unplanned responses created after a risk event has occurred, used when the intended response cannot be applied or does not adequately control the impact. Here, the team did have a planned response, switching to a standby vendor, but that option was unavailable and no other documented alternative existed, so devising the phased rollout in real time is a workaround. A contingency plan or fallback plan would have been prepared in advance and linked to trigger conditions in the risk register. Issue escalation may still happen if the team lacks authority, but escalation is not the name of the improvised response itself. The key clue is that the team had to invent a new action after the event.
Topic: Domain II: Risk Identification
During a risk identification workshop for a public-sector software rollout, the team is classifying new risks by origin in the risk register. Which risk should be tagged as having an external origin?
Best answer: B
Explanation: Risk origin identifies where the uncertainty comes from. In risk identification, the team records not just the risk itself but also its origin or category in the risk register, which helps with analysis, ownership, and response planning. An external-origin risk comes from outside the project’s direct environment, such as regulators, suppliers, market conditions, or weather; an internal-origin risk comes from the project team, governance, estimates, resources, methods, or internal infrastructure.
A possible revision to data-retention rules by a regulator is external because the uncertainty is created by an outside authority. The other choices come from internal decision-making, team estimating, or internal readiness. The key distinction is the source of the uncertainty, not how large the impact might be.
Use these child pages when you want focused PM Mastery practice before returning to mixed sets and timed mocks.