
CISI Risk: Model Risk

Try 10 focused CISI Risk questions on Model Risk, with answers and explanations, then continue with Securities Prep.


Open the matching Securities Prep practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Exam route: CISI Risk
Issuer: CISI
Topic area: Model Risk
Blueprint weight: 3%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Model Risk for CISI Risk. Work through the 10 questions first, then review the explanations and return to mixed practice in Securities Prep.

  • First attempt: answer without checking the explanation first; record the fact, rule, calculation, or judgement point that controlled your answer.
  • Review: read the explanation even when you were correct; record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break; record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable; record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 3% of the practice outline. A focused topic score can overstate readiness if you recognise the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original Securities Prep practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Model Risk

A firm’s market-risk model produces a daily VaR figure. During unusually illiquid conditions, the risk committee requires users to consider current market information and document any management overlay rather than rely on the model output alone. This practice is best described as:

  • A. Independently validating model design and assumptions
  • B. Applying expert judgement with a documented overlay
  • C. Comparing model forecasts with actual outcomes
  • D. Running severe but plausible stress scenarios

Best answer: B

What this tests: Model Risk

Explanation: This describes expert judgement being used alongside a model, not automatic reliance on the model output. A documented management overlay is a model-governance control that recognises models have limits, especially when market conditions differ from normal assumptions.

The core concept is that a model should support decisions, not replace judgement. In the stem, the VaR figure is still used, but users must also consider current information and record any management overlay during illiquid conditions. That is a clear example of expert judgement supplementing model output when the model may be less reliable because market conditions fall outside its assumptions or data history.

A documented overlay is important because it helps ensure:

  • model limitations are recognised
  • departures from model output are justified
  • decisions can be reviewed and challenged later

Independent validation is the closest distractor, but that is a separate control that tests the model itself rather than guiding how a live decision should use model output.

  • Validation: this checks whether the model is conceptually sound and fit for purpose, but it is not the day-to-day use of judgement over a live output.
  • Back-testing: this compares predictions with realised results to assess performance over time, not to adjust a current decision.
  • Stress testing: this explores outcomes under extreme conditions, but it does not itself describe a documented managerial overlay on a model result.

A documented overlay means the model informs the decision, but informed human judgement can adjust for factors the model may not capture.


Question 2

Topic: Model Risk

A bank has a concentrated portfolio of variable-rate buy-to-let mortgages. Interest rates have risen sharply and arrears are beginning to increase. The risk committee asks for a model that estimates default rates and losses under different future rate paths. What is the main benefit of using such a model?

  • A. Estimating losses under different scenarios to support pricing, limits and capital planning
  • B. Replacing case-by-case underwriter judgement with automatic lending decisions
  • C. Producing one precise forecast that removes uncertainty
  • D. Ensuring defaults stay within risk appetite once the model is approved

Best answer: A

What this tests: Model Risk

Explanation: The key benefit of modelling here is forward-looking quantification. By estimating losses under alternative rate scenarios, management can compare actions and make better decisions on pricing, limits and capital before credit deterioration worsens.

Modelling is most useful when it turns uncertain risk drivers into quantified, comparable estimates that support decisions. In this case, the bank has a concentrated variable-rate mortgage book, rates have risen, and arrears are already increasing. A default-and-loss model can show how outcomes may change under different future rate paths, helping the risk committee assess whether pricing, lending limits, provisions and capital remain appropriate.

This is a decision-support benefit: the model helps management test assumptions, compare scenarios and act earlier. It does not guarantee outcomes, remove the need for expert judgement, or eliminate uncertainty. Used properly, a model informs management decisions rather than replacing them.

  • Guarantee misconception: Model approval cannot ensure defaults remain within risk appetite; borrower behaviour and market conditions can still deteriorate.
  • Automation misconception: Models can inform underwriting, but they do not remove the need for judgement, overrides and governance.
  • Certainty misconception: A useful model shows sensitivity to assumptions and possible ranges of loss; it does not make uncertainty disappear.

It helps management quantify how credit risk may change under alternative assumptions and use that information in forward-looking decisions.
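The decision-support benefit above can be sketched with the standard expected-loss identity, EL = PD × LGD × EAD, compared across rate scenarios. All figures and scenario names below are hypothetical illustrations, not values from the question:

```python
# Compare expected loss under hypothetical interest-rate scenarios.
# EL = PD x LGD x EAD; every figure here is illustrative only.

def expected_loss(pd, lgd, ead):
    """Expected loss in the same units as ead."""
    return pd * lgd * ead

EAD = 500.0  # exposure at default, GBP millions (hypothetical)
LGD = 0.40   # loss given default (hypothetical)

# Hypothetical default probabilities rising with the severity of the rate path.
scenarios = {"base": 0.020, "rates +1%": 0.035, "rates +2%": 0.060}

for name, pd_ in scenarios.items():
    print(f"{name}: EL = £{expected_loss(pd_, LGD, EAD):.1f}m")
```

The point is not the precision of any single number but the comparison: management can see how losses scale across assumptions before committing to a pricing, limit, or capital decision.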


Question 3

Topic: Model Risk

Before a new credit-risk model is put into use, a team separate from the developers tests its methodology, checks data and implementation, challenges key assumptions, and reports weaknesses to the committee that will decide whether the model may be used. Which governance function does this best describe?

  • A. Internal audit assurance
  • B. Ongoing model performance monitoring
  • C. Senior management model approval
  • D. Independent model validation

Best answer: D

What this tests: Model Risk

Explanation: The function described is independent model validation. It provides effective challenge to the developers by reviewing methodology, data and implementation before a separate body makes the approval decision. That separation is a core governance principle for model risk.

Effective model governance separates development, validation, approval and ongoing monitoring. The described team is not building the model and is not giving the final sign-off for use. Instead, it is independently assessing whether the model is conceptually sound, correctly implemented, and fit for purpose, while identifying limitations and challenging assumptions. That is the role of model validation.

In a strong framework, validation findings are given to the relevant approval body, which then decides whether the model can be used, restricted or remediated. After approval, the model should remain under ongoing oversight through performance monitoring, periodic review and escalation of weaknesses. Internal audit sits further back, providing assurance that the overall model-risk framework and governance are effective. The key distinction is that validation supplies independent evidence and challenge; approval is the governance decision.

  • Approval is separate: the committee decides whether the model can be used; the stem describes the independent testing that informs that decision.
  • Monitoring comes later: performance monitoring tracks drift, stability and outcomes after deployment, not the pre-use review described.
  • Audit is broader: internal audit reviews the effectiveness of the model-risk framework and governance, rather than performing routine model-by-model validation.

It is an independent challenge process that assesses conceptual soundness, implementation and limitations before approval.


Question 4

Topic: Model Risk

Which statement best reflects sound model-risk governance and challenge?

  • A. Rely on senior management approval once the model has gone live.
  • B. Treat good back-testing as sufficient evidence of continued reliability.
  • C. Separate validation from development, challenge assumptions, and monitor performance over time.
  • D. Let the development team validate the model to improve efficiency.

Best answer: C

What this tests: Model Risk

Explanation: Sound model-risk governance depends on independent validation and continuing challenge. A model can perform well initially yet still fail later because assumptions, data quality, implementation, or use change over time.

The core principle is that model risk is reduced by independent oversight, not by confidence in the model owner or a single approval step. Good governance separates validation from development so that assumptions, methodology, data, implementation, and intended use are challenged objectively. It also requires ongoing performance monitoring, because a model that once appeared reliable may become unsuitable as markets, portfolios, or behaviours change.

Back-testing can be helpful, but it is only one validation tool and does not prove a model is always fit for purpose. Senior management approval supports accountability, but it does not replace technical challenge. The closest distractors confuse useful controls with complete governance; sound model-risk management needs independence, challenge, and continuous review together.

  • Senior approval supports governance, but it does not replace independent technical validation or challenge.
  • Back-testing only tests part of model performance, usually against past outcomes, and may miss changed conditions or misuse.
  • Developer-led validation may be quicker, but it weakens independence and increases confirmation bias.

Sound governance requires independent challenge and ongoing monitoring rather than relying on one-off approval or model owners alone.


Question 5

Topic: Model Risk

A retail bank’s treasury desk funds long-dated mortgage assets with short-term wholesale borrowing. After a market funding shock, rollover of that borrowing is uncertain and corporate customers are expected to draw committed facilities. The CRO wants a model that estimates daily net cash outflows and the bank’s survival horizon. Which model is most appropriate?

  • A. PD/LGD/EAD credit risk model
  • B. Operational loss distribution model
  • C. Liquidity stress-testing cash-flow projection model
  • D. Value-at-risk market risk model

Best answer: C

What this tests: Model Risk

Explanation: The scenario is about whether the bank can keep meeting cash obligations if wholesale funding does not roll and committed lines are drawn. That is a liquidity risk question, so a liquidity stress-testing cash-flow projection is the best model.

The core concept is matching the model to the risk being measured. Liquidity risk models focus on the timing of cash inflows and outflows under normal and stressed conditions, including funding rollover failure, drawdowns on committed facilities, deposit withdrawals, and collateral calls. Here, the CRO wants to know daily net cash outflows and survival horizon, which are classic liquidity stress-testing outputs. A market risk model such as VaR would estimate losses from adverse price moves, not cash survival. A credit model using PD, LGD, and EAD would assess borrower default risk, not funding pressure. An operational loss distribution model is designed for process, system, people, or external-event losses. The key takeaway is that funding resilience is measured with liquidity stress models, even when the trigger is a market shock.

  • Market risk trap: Value-at-risk estimates loss from adverse price moves, but the scenario centres on funding rollover and cash shortfall timing.
  • Credit risk trap: PD/LGD/EAD models assess borrower default probability and loss severity, not the bank’s survival horizon under stressed outflows.
  • Operational risk trap: Loss distribution models estimate operational loss frequency and severity, not liquidity pressure from disrupted funding markets.

It projects stressed inflows, outflows, drawdowns, and funding rollover assumptions to estimate cash shortfalls and survival horizon.
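The survival-horizon idea can be illustrated with a toy daily projection. The buffer size, outflow pattern, and function name below are hypothetical, chosen only to show the mechanics:

```python
# Toy liquidity stress projection (illustrative figures only).
# Survival horizon = number of full days the liquidity buffer
# covers stressed net cash outflows before going negative.

def survival_horizon(buffer, daily_net_outflows):
    """Count the days the buffer stays non-negative under the stressed path."""
    days = 0
    for outflow in daily_net_outflows:
        buffer -= outflow
        if buffer < 0:
            break
        days += 1
    return days

# Hypothetical stress: wholesale funding fails to roll and committed
# facilities are drawn heavily over the first three days.
stressed_outflows = [40, 30, 25, 10, 10, 10, 10]  # GBP millions per day
print(survival_horizon(buffer=100, daily_net_outflows=stressed_outflows))
```

A real liquidity stress model layers many assumption sets (rollover rates, drawdown speeds, deposit run-off) over granular cash-flow data, but the output the CRO asked for reduces to this shape: stressed daily net outflows and the day the buffer runs out.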


Question 6

Topic: Model Risk

Senior management uses a portfolio model to compare how a proposed increase in leveraged lending would affect expected loss, earnings volatility and capital usage in both base and stressed conditions. Which main benefit of modelling does this best illustrate?

  • A. Quantifying future outcomes to support strategic decisions
  • B. Testing predictions against actual results
  • C. Setting the board’s risk appetite and limits
  • D. Providing independent challenge to model design

Best answer: A

What this tests: Model Risk

Explanation: The key benefit shown is forward-looking decision support. The model converts assumptions about a proposed lending change into quantified estimates of loss, volatility and capital usage, helping management compare choices before committing resources.

A core benefit of modelling in risk management is that it provides a structured, forward-looking view of possible outcomes. In the stem, management is not using the model to check whether the model itself is sound, nor to carry out a governance approval step. Instead, it is using the model to estimate how a business decision could affect risk and capital under different conditions, including stress. That is classic decision support: models help firms compare alternatives, assess downside risk, and understand trade-offs before taking action.

Models can inform governance and must be validated and monitored, but those are separate functions. The main benefit here is better-informed decision-making through quantified scenario analysis.

  • Independent challenge refers to model validation or review, which is a control over model risk rather than the main benefit of using the model for business decisions.
  • Risk appetite and limits are set through governance by the board and senior management; models may inform these choices but do not replace that responsibility.
  • Testing against actual results is back-testing, a performance-checking tool used after outcomes are observed, not the forward-looking benefit shown in the stem.

The model is being used to estimate and compare likely effects under different assumptions before a decision is taken.


Question 7

Topic: Model Risk

A bank uses an internal cash-flow model to set daily liquidity buffers. Its model governance policy states that if the absolute forecasting error is more than 5% in two consecutive months, the model must be escalated for independent validation and re-approval before it is used again for limit-setting.

  • Jan: forecast £200m, actual £198m
  • Feb: forecast £100m, actual £106m
  • Mar: forecast £100m, actual £106m

Based on the policy and the figures, which action best reflects effective model governance?

  • A. Recalibrate by the model owner and continue using it without escalation
  • B. Wait for the next annual validation cycle before any escalation
  • C. Keep using it because the average signed monthly error is below 5%
  • D. Escalate for independent validation and re-approval before further limit-setting use

Best answer: D

What this tests: Model Risk

Explanation: The absolute forecasting errors are 1%, 6% and 6%. Because two consecutive months exceed the 5% trigger, effective ongoing oversight requires escalation for independent validation and re-approval before the model continues to be used for limit-setting.

This is a model-governance trigger identified through ongoing monitoring. The policy does not ask for a net or average signed error; it asks whether absolute error is above 5% in two consecutive months.

  • Jan: |198 - 200| / 200 = 1%
  • Feb: |106 - 100| / 100 = 6%
  • Mar: |106 - 100| / 100 = 6%

February and March both breach the threshold, so the model should be independently validated and taken back through the approval process before it continues to set liquidity limits. The key principle is that model owners may remediate, but validation, challenge and approval must be independent. The closest trap is averaging or netting the monthly errors, which ignores the stated escalation rule.

  • Recalibration by the model owner may help fix the model, but it does not replace independent challenge and re-approval once the trigger is hit.
  • Using a signed average nets January’s offsetting error against later breaches; the policy is based on absolute error in consecutive months.
  • Waiting for the annual review weakens ongoing oversight, because the trigger has already been breached and requires action now.

February and March each have a 6% absolute forecasting error, so the policy trigger is met and independent validation plus re-approval is required.
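The escalation trigger described in the policy can be checked with a short sketch. The function name and structure are illustrative; the threshold and monthly figures come from the question:

```python
# Model-governance trigger: absolute forecasting error above 5%
# in two consecutive months requires escalation for independent
# validation and re-approval.

def needs_escalation(observations, threshold=0.05):
    """observations: (forecast, actual) pairs in month order."""
    consecutive = 0
    for forecast, actual in observations:
        error = abs(actual - forecast) / forecast  # absolute, not signed
        consecutive = consecutive + 1 if error > threshold else 0
        if consecutive >= 2:
            return True
    return False

months = [(200, 198), (100, 106), (100, 106)]  # Jan, Feb, Mar (GBP millions)
print(needs_escalation(months))  # Feb and Mar both breach 5%
```

Note that the check uses absolute error month by month; averaging or netting the signed errors, the closest trap in the question, would let January's small offsetting error mask the consecutive breaches.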


Question 8

Topic: Model Risk

A bank’s treasury desk uses a cash-flow model to set internal liquidity limits. The model assumes deposit outflows similar to the last three years, but after a social-media-driven run at a peer bank the bank’s actual outflows exceed the modelled stress. The model’s annual independent validation is three months overdue. Which response best reflects sound model-risk governance and challenge?

  • A. Allow the treasury desk to recalibrate and use the new output immediately
  • B. Stop all model use until a full replacement is built
  • C. Keep current limits because last year’s approval still stands
  • D. Escalate the issue, apply a conservative overlay, and require independent validation before resetting limits

Best answer: D

What this tests: Model Risk

Explanation: The model has shown signs of being unreliable in changed market conditions, and its independent validation is overdue. Sound model-risk governance is to escalate, use a prudent overlay, and obtain independent challenge before the model continues to determine liquidity limits.

Model-risk governance is not just about whether a model was once approved; it is about whether it remains fit for purpose when conditions change. Here, actual outflows have exceeded the stressed outputs, which suggests the model’s assumptions or calibration may no longer be appropriate, and the overdue validation weakens assurance further. The best response is to escalate through governance, place a conservative overlay or other usage constraint on the model, and obtain independent validation or targeted review before resetting limits. That keeps decisions prudent while the model is reassessed. Relying on old approval ignores evidence of model weakness, while letting the model owner recalibrate and implement immediately removes independent challenge. A total stop is usually disproportionate when interim controls can manage the risk.

  • Past approval is not enough when observed outflows exceed stressed outputs; approval does not remove the need for renewed challenge after conditions change.
  • Self-recalibration by the treasury desk may help diagnose the issue, but immediate use without independent review creates owner bias and weakens governance.
  • Full shutdown is usually too extreme; firms typically use overlays, restrictions, and contingency controls while validation is completed.

This recognises that assumptions may no longer hold, so the issue should be escalated, reliance constrained, and independent challenge restored before model outputs drive limits.


Question 9

Topic: Model Risk

A bank values an illiquid note using an approved pricing model. Policy requires escalation for valuation uncertainty if a plausible valuation falls below £10.0m.

Model outputs under plausible inputs

  • Base calibration: £10.4m
  • Stressed correlation input: £9.7m
  • Wider bid-offer input: £9.5m

Which conclusion best reflects sound model-risk practice?

  • A. Use the average, about £9.9m, as definitive.
  • B. Ignore the model and price the note manually.
  • C. Use the outputs as a range, apply judgement, and escalate.
  • D. Accept £10.4m as the definitive valuation.

Best answer: C

What this tests: Model Risk

Explanation: The model is still useful, but the figures show that the valuation is sensitive to plausible assumptions. Because credible outputs fall below the £10.0m threshold, the result should be treated as an input to judgement and escalated, not relied on mechanically as a single final number.

This tests a core model-risk principle: an approved model is not a substitute for judgement when outputs vary materially under plausible assumptions. Here, the valuation range is £9.5m to £10.4m, and two credible outputs are below the firm’s escalation threshold. That means the bank should not rely only on the base case or on a simple average.

  • Identify the range of plausible outputs.
  • Compare that range with the policy trigger.
  • Escalate because the range crosses the threshold.
  • Use expert judgement to assess valuation uncertainty.

Averaging the results may summarise them, but it does not remove the uncertainty shown by the range.

  • Base-case fixation: The approved base calibration is not automatically definitive when other plausible inputs produce materially lower values.
  • False precision: Averaging to about £9.9m may summarise the outputs, but it does not remove the fact that the range crosses the review threshold.
  • Overreaction: Discarding the model entirely is unnecessary; the correct response is to use it with oversight and judgement.

Plausible input changes push the valuation below the policy threshold, so the model should inform judgement and escalation rather than act as a final answer.
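The threshold comparison in the steps above reduces to a simple check over the range of plausible outputs. The function name is illustrative; the figures and the £10.0m trigger come from the question:

```python
# Escalate for valuation uncertainty when any plausible model output
# falls below the policy threshold.

def needs_valuation_escalation(valuations_m, threshold_m=10.0):
    """valuations_m: model outputs (GBP millions) under plausible inputs."""
    return min(valuations_m) < threshold_m

# Base calibration, stressed correlation, wider bid-offer (from the stem).
outputs = [10.4, 9.7, 9.5]
print(needs_valuation_escalation(outputs))  # the range crosses £10.0m
```

The check deliberately looks at the worst plausible output rather than the average: averaging to about £9.9m summarises the outputs but hides the fact that the plausible range straddles the escalation trigger.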


Question 10

Topic: Model Risk

A bank’s credit trading desk holds a concentrated position in BBB corporate bonds. Its daily VaR model was calibrated during a long low-volatility period and assumes positions can be reduced in normal market liquidity. After a sharp spread widening, the desk is still within its VaR limit, but bid-offer spreads have widened materially. What is the single best response by the risk function?

  • A. Raise the desk’s VaR limit temporarily so it is not forced to reduce the position in stressed markets.
  • B. Accept the VaR limit as sufficient because the model was previously validated.
  • C. Use the VaR result as one input, add stress and liquidity analysis, and escalate the position for senior review.
  • D. Stop using the model and manage the position only through trader experience until markets normalise.

Best answer: C

What this tests: Model Risk

Explanation: The best answer is to treat the model as a decision aid, not a substitute for judgement. Here, concentration, weaker liquidity, and a regime change mean the VaR output may understate the true risk, so it should be supplemented with stress testing, liquidity assessment, and senior oversight.

This tests a core model-risk principle: a model is useful within its assumptions, but it should not replace informed judgement when conditions move outside those assumptions. In the scenario, the VaR model was calibrated in a benign period and assumes normal liquidity, yet the desk now faces spread widening, poorer market depth, and concentration risk. Those facts mean the reported VaR may no longer capture the risk of exiting or hedging the position.

A sound response is to keep using the model output, but only as one input alongside:

  • stress tests for larger spread moves
  • liquidity analysis for exit costs and market depth
  • senior review of limits and concentration

The closest distractor is abandoning the model completely, but that overreacts; the right approach is to supplement, not blindly follow or discard, the model.

  • Validated model trap: prior validation does not mean the model is reliable under changed market conditions or broken assumptions.
  • Judgement-only trap: expert judgement matters, but discarding the model entirely removes a useful structured input.
  • Limit-relief trap: increasing limits because markets are stressed weakens control at the point when risk measurement is least certain.

The model remains useful, but its assumptions are under strain, so judgement and supplementary analysis are needed before relying on the limit signal.

Continue with full practice

Use the CISI Risk Practice Test page for the full Securities Prep route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the CISI Risk guide on SecuritiesMastery.com, then return to Securities Prep for timed practice.

Revised on Thursday, May 14, 2026