Browse Certification Practice Tests by Exam Family

Series 86: Valuation and Forecasting

Try 10 focused Series 86 questions on Valuation and Forecasting, with explanations, then continue with the full Securities Prep practice test.

Series 86 Valuation and Forecasting questions help you isolate one part of the FINRA outline before returning to a mixed practice test. The questions below are original Securities Prep practice items aligned to this topic and are not copied from any exam sponsor.

Open the matching Securities Prep practice route for timed mocks, topic drills, progress tracking, explanations, and the full question bank.

Topic snapshot

Exam: FINRA Series 86
Official topic: Function 3 — Valuation and Forecasting
Blueprint weighting: 46%
Questions on this page: 10

Sample questions

Question 1

You are updating a quarterly forecast model for a U.S. GAAP industrial company and need to map expected items to the statement of cash flows. Which statement is INCORRECT?

  • A. Capital expenditures are typically shown as cash used in investing activities.
  • B. Cash proceeds from issuing long-term debt are shown as cash provided by financing activities.
  • C. An increase in accounts receivable is an operating cash inflow.
  • D. Dividends paid to common shareholders are typically shown as cash used in financing activities.

Best answer: C

Explanation: A receivables increase is a working-capital use of cash and reduces cash flow from operations.

Working-capital changes are part of cash flow from operations, and a rise in accounts receivable generally means revenue was recognized without collecting cash. That makes it a use of cash (reducing operating cash flow), not an operating cash inflow.

In a forecast, the statement of cash flows is organized into operating, investing, and financing sections. Operating cash flow starts from earnings (often net income) and adjusts for non-cash items and changes in working capital. An increase in accounts receivable means the company extended more credit or collected less of its sales in cash during the period, so cash is lower than accrual earnings; this reduces cash flow from operations. Investing cash flow typically captures purchases and sales of long-lived assets (for example, capital expenditures). Financing cash flow generally captures transactions with providers of capital, such as issuing or repaying debt and paying dividends. The key takeaway is that working-capital buildup (like higher receivables) is a cash outflow within operating activities.

  • Working-capital sign error: a receivables increase is a use of cash in operating activities.
  • Capex classification: purchases of PP&E are investing cash outflows.
  • Debt issuance classification: borrowing increases cash in financing activities.
  • Dividend classification: dividends paid are financing cash outflows under U.S. GAAP.
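The classification and sign logic above can be sketched as a small lookup. This is a minimal illustration of the three-section mapping, not an exhaustive U.S. GAAP rule set:

```python
# Sketch: classifying common forecast line items into statement-of-cash-flows
# sections. Illustrative mapping only, not a complete GAAP classification table.

CASH_FLOW_SECTION = {
    "capital_expenditures": "investing",             # cash used to buy PP&E
    "debt_issuance_proceeds": "financing",           # cash from new long-term debt
    "dividends_paid": "financing",                   # distributions to shareholders
    "increase_in_accounts_receivable": "operating",  # a working-capital item
}

# Sign convention: +1 means the item increases cash, -1 means it uses cash.
CASH_EFFECT = {
    "capital_expenditures": -1,
    "debt_issuance_proceeds": +1,
    "dividends_paid": -1,
    "increase_in_accounts_receivable": -1,  # the trap in option C: an OUTFLOW
}

def classify(item: str) -> tuple[str, int]:
    """Return (section, cash-effect sign) for a forecast line item."""
    return CASH_FLOW_SECTION[item], CASH_EFFECT[item]

print(classify("increase_in_accounts_receivable"))  # ('operating', -1)
```

The point of the sign column is exactly the exam trap: a receivables increase sits in operating activities but carries a negative cash effect.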

Question 2

A company increases leverage by issuing new debt and using all proceeds to repurchase common shares at their current fair value. Assume the company’s enterprise value (EV) is unchanged by the recapitalization and ignore taxes and transaction costs.

Which statement is most accurate?

  • A. Total equity value falls by the increase in net debt, while per-share value is unchanged when repurchasing at fair value.
  • B. Per-share value decreases because issuing debt reduces equity value and that reduction must flow through to each share.
  • C. Total equity value is unchanged because fewer shares are outstanding after the repurchase.
  • D. Per-share value increases because repurchasing shares always raises earnings per share.

Best answer: A

Explanation: With EV unchanged, equity value equals EV minus net debt, so higher net debt lowers total equity value, but buying back shares at fair value keeps value per share the same.

Holding EV constant, raising net debt reduces total equity value because equity is the residual claim: \(\text{Equity Value}=\text{EV}-\text{Net Debt}\). If the firm repurchases shares at fair value, the reduced equity value is matched by a proportionate reduction in shares outstanding, leaving intrinsic value per share unchanged.

The core linkage is the bridge from enterprise value to equity value: \(\text{Equity Value}=\text{EV}-\text{Net Debt}\) (where net debt is debt minus cash). If EV does not change, increasing leverage (higher net debt) mechanically reduces total equity value dollar-for-dollar.

Per-share value depends on both the equity value and the share count. When the company uses the debt proceeds to repurchase shares at their current fair value, it is effectively exchanging cash (financed by debt) for shares at a “fair” price, so the reduction in equity value is offset by a proportional reduction in shares outstanding. As a result, intrinsic value per share is unchanged under the stated assumptions.

Per-share value would change only if EV changes or the repurchase price differs from fair value (or if taxes/other frictions are introduced).

  • Share-count fall alone: insufficient, because total equity value still drops when net debt rises with EV held constant.
  • EPS vs. value: confuses accounting with valuation; EPS accretion does not guarantee higher per-share intrinsic value.
  • "Must fall per share": wrong, because the share count falls proportionately when shares are repurchased at fair value.
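The EV-to-equity bridge can be checked with a quick numeric sketch. All figures below are hypothetical, and the stated assumptions (EV unchanged, repurchase at fair value, no taxes or frictions) are built in:

```python
# Numeric sketch of the leveraged recapitalization in Question 2.
# Assumptions: EV unchanged, repurchase at fair value, no taxes or frictions.

ev = 1_000.0       # enterprise value, $m (hypothetical)
net_debt = 200.0   # pre-recap net debt, $m (hypothetical)
shares = 80.0      # shares outstanding, millions (hypothetical)

equity_before = ev - net_debt        # 800: equity is the residual claim
price = equity_before / shares       # 10.0 per share = fair value

new_debt = 100.0                     # proceeds used entirely for the buyback
shares_bought = new_debt / price     # 10m shares retired at fair value

equity_after = ev - (net_debt + new_debt)               # 700: falls by new debt
price_after = equity_after / (shares - shares_bought)   # still 10.0

print(equity_before, equity_after, price, price_after)  # 800.0 700.0 10.0 10.0
```

Total equity value falls dollar-for-dollar with the new debt, while per-share value is unchanged, which is exactly choice A.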

Question 3

A packaged-food company announces a catalyst: a 5% list price increase on ~70% of its portfolio effective next quarter. In the press release, management states it expects unit volumes to be “roughly flat” over the next 12 months and notes near-term input cost inflation that will pressure gross margin until contracts reset.

Analyst 1 raises next year revenue growth by increasing unit volumes and holds gross margin flat. Analyst 2 increases average selling price (ASP) on the affected mix, keeps unit volumes flat, and models near-term gross margin compression.

Which analyst update best maps the catalyst to the appropriate model drivers?

  • A. Analyst 2, because the catalyst affects ASP and near-term gross margin
  • B. Analyst 1, because pricing actions should be captured as multiple expansion
  • C. Analyst 2, because pricing actions are best modeled as lower capital expenditures
  • D. Analyst 1, because list price changes primarily drive unit volumes

Best answer: A

Explanation: A price increase is an ASP/mix driver, and management explicitly guides to flat units and near-term margin pressure from costs.

A list price increase is most directly reflected in revenue via higher ASP (and mix on the affected portfolio), not higher unit volumes. With management guiding to roughly flat units, the cleanest mapping is to keep volume assumptions unchanged. Input cost inflation that management says will pressure margins should be reflected as near-term gross margin compression.

Catalysts should be translated into the specific operating drivers they most directly affect. A broad list price increase primarily changes revenue through ASP (and mix if not all products are affected), while unit volume assumptions should follow explicit volume guidance and demand elasticity evidence. Here, management states unit volumes should be roughly flat, so raising volume growth contradicts the catalyst narrative.

Management also flags near-term input cost inflation that will pressure gross margin, so the model should reflect margin compression until costs reset (e.g., through supplier contracts or hedges). The key is aligning each forecast line item with the mechanism described: pricing maps to ASP/mix, and cost inflation maps to gross margin; neither maps to the valuation multiple or capex by default.

  • Volume-led revenue: conflicts with management’s “roughly flat” unit outlook.
  • Multiple-expansion shortcut: not an operating driver and doesn’t implement the earnings mechanism.
  • Capex change: not the primary linkage for a pricing/cost catalyst unless explicitly stated.
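Analyst 2's mapping can be sketched numerically. The base revenue, starting gross margin, and the 150bp compression figure below are hypothetical assumptions added for illustration; the 5% increase, ~70% mix, and flat volumes come from the stem:

```python
# Sketch of Analyst 2's driver mapping for Question 3.
# Base revenue, base margin, and compression size are hypothetical assumptions.

base_revenue = 1_000.0   # $m, prior-year revenue (assumed)
price_increase = 0.05    # 5% list-price increase per the press release
affected_mix = 0.70      # applies to ~70% of the portfolio
volume_growth = 0.00     # management guides to roughly flat units

# Blended ASP uplift: only the affected share of the portfolio gets the increase.
blended_asp_uplift = affected_mix * price_increase    # 3.5%
revenue = base_revenue * (1 + volume_growth) * (1 + blended_asp_uplift)

# Near-term gross margin compression from input cost inflation (assumed 150bp).
base_gross_margin = 0.32
margin_compression = 0.015
gross_margin = base_gross_margin - margin_compression

print(round(revenue, 1), round(gross_margin, 3))  # 1035.0 0.305
```

Note that the revenue lift flows through ASP with volumes held flat, and the cost pressure lands in gross margin, not in the valuation multiple or capex.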

Question 4

You are building a relative valuation peer set (EV/Revenue and EV/EBITDA) for TargetCo. The goal is to select peers with similar business mix (recurring vs services/license), growth profile, and geography.

Exhibit: Company profiles (latest FY)

Company | Business model | Recurring revenue | Revenue from U.S. | Primary end market | 3-yr revenue CAGR | EBITDA margin
TargetCo | Vertical SaaS | 95% | 90% | U.S. outpatient healthcare providers | 20% | 25%
AdNova | Digital ads + data platform | 70% | 55% | Global advertisers/consumer internet | 25% | 30%
ClinicWare | Vertical SaaS | 93% | 88% | U.S. outpatient healthcare providers | 18% | 24%
IntegraIT | IT consulting/implementation | 35% | 80% | U.S. enterprise IT projects | 10% | 12%
LegacySoft | On-prem license + maintenance | 60% | 85% | U.S. industrial/manufacturing | 4% | 32%

Which candidate is best supported by the exhibit as the most appropriate peer for TargetCo in a comps-based valuation?

  • A. AdNova
  • B. IntegraIT
  • C. ClinicWare
  • D. LegacySoft

Best answer: C

Explanation: It most closely matches TargetCo’s vertical SaaS model, U.S. exposure, and mid-to-high growth profile.

A strong comps peer should resemble the target on the key value drivers investors use to price the multiples being compared. The exhibit shows ClinicWare aligns most closely with TargetCo on business mix (high recurring SaaS), geography (mostly U.S.), and growth (high-teens to ~20% CAGR). That makes its EV/Revenue and EV/EBITDA multiples more interpretable for TargetCo.

Peer selection for relative valuation is about matching the factors that drive differences in trading multiples, especially when using EV/Revenue and EV/EBITDA for software. The exhibit indicates TargetCo is a high-recurring, U.S.-centric vertical SaaS company serving outpatient healthcare providers with ~20% growth and mid-20s EBITDA margins. ClinicWare is the closest match on all three inclusion criteria: similar recurring revenue mix (93% vs 95%), similar U.S. revenue concentration (88% vs 90%), and a comparable growth profile (18% vs 20%) in the same end market. The other candidates introduce major comparability breaks (different monetization/end market, much lower recurring mix due to services, or substantially lower growth and different license economics), which would make their multiples less diagnostic for TargetCo.

Key takeaway: prioritize peers with similar business model, geography, and growth to reduce multiple “noise.”

  • Different end market/geo: the ads/data platform has a different customer base and much lower U.S. exposure.
  • Services-heavy model: the IT consulting firm’s low recurring mix and margin structure are not SaaS-like.
  • Mature license economics: the on-prem license company’s low growth and different revenue recognition/renewal dynamics reduce comparability.
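The exhibit-driven comparison can be sketched as a simple similarity score. The equal-weight absolute-difference metric below is an illustrative assumption, not a standard comps methodology, and it uses only the three quantitative exhibit criteria:

```python
# Sketch: score candidate peers against TargetCo on recurring mix, U.S.
# revenue, and 3-yr CAGR from the exhibit. The equal-weight absolute-difference
# score is an illustrative assumption, not a standard formula.

target = {"recurring": 0.95, "us_rev": 0.90, "cagr": 0.20}

peers = {
    "AdNova":     {"recurring": 0.70, "us_rev": 0.55, "cagr": 0.25},
    "ClinicWare": {"recurring": 0.93, "us_rev": 0.88, "cagr": 0.18},
    "IntegraIT":  {"recurring": 0.35, "us_rev": 0.80, "cagr": 0.10},
    "LegacySoft": {"recurring": 0.60, "us_rev": 0.85, "cagr": 0.04},
}

def distance(profile: dict) -> float:
    """Sum of absolute differences from TargetCo across the three criteria."""
    return sum(abs(profile[k] - target[k]) for k in target)

best_peer = min(peers, key=lambda name: distance(peers[name]))
print(best_peer)  # ClinicWare
```

A real comps screen would also weigh qualitative factors (end market, monetization model), which here reinforce the same conclusion.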

Question 5

You cover a profitable mid-cap SaaS company that has historically traded at a premium EV/EBITDA multiple due to high expected growth and long-duration cash flows. Over the last month, 10-year Treasury yields rose about 100bp as the market priced in “higher-for-longer” policy, while company fundamentals and guidance were unchanged.

Which approach best aligns with durable research standards when assessing the risk that the stock’s valuation multiple could re-rate?

  • A. Link the macro move to WACC and run valuation sensitivities
  • B. Cut the target EV/EBITDA multiple by a fixed percentage
  • C. Raise revenue growth to offset the higher discount rate
  • D. Hold the multiple constant since guidance did not change

Best answer: A

Explanation: A rates-driven re-rating should be analyzed through discount-rate assumptions with transparent sensitivity and a sanity check versus peer-implied multiples.

A rise in long-term rates can compress valuation multiples, especially for long-duration growth equities, even when company fundamentals are unchanged. The most defensible approach is to connect the macro catalyst to discount-rate inputs (and, if used, terminal value assumptions), quantify the impact with sensitivities, and cross-check the resulting valuation versus comparable-company multiples under the new rate regime.

Macro catalysts like higher long-term rates often re-rate multiples by changing the discount rate investors apply to future cash flows; this effect is typically larger for “long-duration” growth stocks where more value comes from later years. Durable research practice is to make the mechanism explicit and quantify it, rather than applying an arbitrary multiple cut.

A sound workflow is:

  • Update the risk-free rate (and consider whether risk premia assumptions also change) in WACC.
  • Revalue using a sensitivity table around key macro-linked inputs (e.g., WACC and terminal assumptions).
  • Sanity-check the implied EV/EBITDA (or EV/FCF) versus peers and history under the new rates backdrop.

This keeps assumptions evidence-based, comparable across names, and transparent about uncertainty.

  • Arbitrary multiple haircut: lacks a stated mechanism and reduces comparability across reports.
  • Ignoring the macro move because guidance is flat: misses that market discount rates can change valuations without any fundamental revision.
  • Offsetting with higher growth: mixes unrelated assumptions and can mask the macro impact rather than measure it.
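The first two workflow steps can be sketched with a simple growing-perpetuity model. All inputs are hypothetical; the point is only to show the mechanism of a ~100bp rate move flowing through WACC into value:

```python
# Sketch: push a ~100bp rate move through WACC and revalue with a
# growing-perpetuity model. All inputs are hypothetical assumptions.

def perpetuity_value(fcf_next: float, wacc: float, g: float) -> float:
    """Gordon-growth value: FCF1 / (WACC - g)."""
    return fcf_next / (wacc - g)

fcf_next = 100.0   # next-year free cash flow, $m (assumed)
g = 0.03           # long-run growth (assumed)

value_before = perpetuity_value(fcf_next, wacc=0.08, g=g)  # 100 / 0.05 = 2,000
value_after = perpetuity_value(fcf_next, wacc=0.09, g=g)   # 100 / 0.06 ~ 1,667

repricing = value_after / value_before - 1
print(round(value_before), round(value_after), round(repricing, 3))
# 2000 1667 -0.167 -> a ~17% de-rating from the WACC move alone
```

This makes the mechanism explicit: nothing in the forecast changed, yet value falls materially, which is why an implied-multiple cross-check against peers under the new rate regime is the natural final step.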

Question 6

In an equity DCF, an analyst wants to show how the implied equity value range changes when key discount-rate and terminal-value assumptions vary. Which analysis feature best matches this purpose?

  • A. Scenario analysis on revenue growth and operating margin drivers
  • B. Two-way sensitivity table on WACC and terminal growth/exit multiple
  • C. Precedent transaction multiples with control-premium adjustments
  • D. Sum-of-the-parts valuation using segment EV/EBITDA multiples

Best answer: B

Explanation: Varying WACC and terminal assumptions directly shows how DCF value ranges change under different discount-rate and terminal-value inputs.

A DCF sensitivity that flexes WACC and terminal assumptions (terminal growth rate or exit multiple) isolates the two inputs that most often drive the present value of cash flows and the terminal value. Presenting the results as a two-way table communicates a valuation range and how sensitive the implied equity value is to these assumptions.

DCF sensitivity analysis is used to communicate how valuation changes when key, uncertain assumptions move. Two of the highest-impact assumptions are the discount rate (WACC), which affects the present value of all forecast cash flows, and the terminal assumption (perpetual growth rate or exit multiple), which often drives a large portion of enterprise value through terminal value. A two-way sensitivity table varies WACC across a reasonable range on one axis and the terminal assumption across a reasonable range on the other, producing a grid of implied values. This lets the analyst interpret a valuation range (e.g., “base case” cell with upside/downside cells around it) and identify whether the conclusion is robust or highly assumption-dependent. Other valuation tools may create ranges, but they are not specifically designed to isolate WACC-versus-terminal assumption effects in a DCF.

  • Transactions comps provide market-based benchmarks, but they do not isolate DCF WACC and terminal inputs.
  • Operating-driver scenarios stress the forecast (cash flows), not the discount rate and terminal assumption pair.
  • Sum-of-the-parts changes segment values via multiples, not DCF sensitivity to WACC and terminal value.
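A two-way table like the one described can be sketched in a few lines, here using a growing perpetuity as a stand-in for the full DCF. All inputs are hypothetical:

```python
# Sketch: a two-way DCF sensitivity grid, WACC on one axis and terminal growth
# on the other, valued as a growing perpetuity. All inputs are hypothetical.

fcf_next = 100.0  # $m, next-year free cash flow (assumed)

waccs = [0.08, 0.09, 0.10]
terminal_growths = [0.02, 0.03, 0.04]

grid = {
    (w, g): round(fcf_next / (w - g))
    for w in waccs
    for g in terminal_growths
}

# Print a simple grid: rows = WACC, columns = terminal growth.
print("WACC\\g   " + "   ".join(f"{g:.0%}" for g in terminal_growths))
for w in waccs:
    row = "  ".join(f"{grid[(w, g)]:>5}" for g in terminal_growths)
    print(f"{w:.0%}     {row}")
```

Reading the grid the way the explanation describes: the center cell is the base case, and the spread across cells shows how assumption-dependent the implied value is.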

Question 7

You are building an assumption table for a subscription software company and want each major forecast input tied to evidence.

Exhibit (FY2025 actual; FY2026 company commentary):

FY2025 revenue: $1,000 million
FY2025 ending subscribers: 2.0 million
FY2025 ARPU (revenue per subscriber): $500
FY2026 guidance (earnings call): ending subscribers up ~10% YoY
FY2026 pricing disclosure (10-Q): 3% list-price increase effective Jan 1, FY2026

Assume FY2026 revenue is approximated by ending subscribers \(\times\) ARPU. Which assumption-table entry is most appropriate for FY2026 revenue (includes the correct implied calculation and the best evidence/source mapping)?

  • A. Revenue +3.0% to $1,030m; 10% subs (call), 3% ARPU (10-Q)
  • B. Revenue +13.3% to $1,133m; 10% subs (industry report), 3% ARPU (press release)
  • C. Revenue +13.0% to $1,130m; 10% subs (call), 3% ARPU (10-Q)
  • D. Revenue +13.3% to $1,133m; 10% subs (call), 3% ARPU (10-Q)

Best answer: D

Explanation: It applies subscriber and price growth multiplicatively using the cited earnings-call guidance and 10-Q disclosure.

A good assumption table ties each driver to the most direct primary evidence and shows the arithmetic that converts drivers into the forecast. Here, subscriber growth and ARPU growth both affect revenue, so the implied revenue growth is \(1.10 \times 1.03 - 1 = 13.3\%\), producing $1,133 million from $1,000 million.

An assumption table should (1) name the forecast input, (2) show how it is quantified, (3) cite the best supporting source, and (4) reconcile to the implied forecast result. In this setup, revenue is driven by subscribers and ARPU, so you should use management’s subscriber guidance from the earnings call and the price increase disclosed in the 10-Q, then translate those drivers into an implied revenue number.

\[ \begin{aligned} \text{FY2026 revenue} &= 1{,}000 \times 1.10 \times 1.03 \\ &= 1{,}133 \,\text{(million)} \\ \text{Implied growth} &= 1.10 \times 1.03 - 1 = 13.3\% \end{aligned} \]

The key integrity check is using the right driver math (multiplicative compounding) and the most authoritative sources for each driver.

  • Additive growth error: treats 10% and 3% as \(10\%+3\%=13\%\), understating compounded revenue.
  • Weak/indirect sourcing: uses secondary sources when primary company disclosures for the inputs are provided.
  • Single-driver mistake: applies only the price effect and ignores subscriber growth even though both drivers are specified.
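The driver math above is easy to verify in code, using the figures from the exhibit:

```python
# Check of the Question 7 driver math: subscriber growth and price growth
# compound multiplicatively, they do not add.

fy2025_revenue = 1_000.0   # $m, from the exhibit
subs_growth = 0.10         # earnings-call guidance
price_growth = 0.03        # 10-Q list-price disclosure

fy2026_revenue = fy2025_revenue * (1 + subs_growth) * (1 + price_growth)
implied_growth = (1 + subs_growth) * (1 + price_growth) - 1

print(round(fy2026_revenue), round(implied_growth, 3))  # 1133 0.133
```

The additive shortcut (10% + 3% = 13%, giving $1,130m) misses the 0.3% cross term, which is the gap between choices C and D.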

Question 8

An analyst covers U.S. large-cap pharmaceutical companies. After an election, investors assign higher odds to legislation that would expand Medicare drug price negotiation starting in 2028. By midday, the Health Care sector index is down 2.8% while the S&P 500 is flat.

The analyst has already (1) summarized the proposal’s key provisions and timeline from public sources and (2) pulled the sector’s and peers’ relative price moves versus the market. What is the best next step to assess likely sector relative performance and update the valuation view?

  • A. Adjust valuation multiples to the new sector median and keep operating forecasts unchanged
  • B. Map each company’s Medicare exposure by product and timing, translate into pricing/volume assumptions, and run documented scenario sensitivities on forecasts and valuation
  • C. Immediately lower target prices by the sector’s intraday underperformance versus the S&P 500
  • D. Wait for company guidance on the next earnings call before changing any assumptions

Best answer: B

Explanation: Quantifying exposure and translating the policy catalyst into explicit model drivers enables a defensible view of relative impact versus the broader market.

A political catalyst affects sector relative performance through specific company and industry fundamentals (e.g., end-market exposure, pricing power, and timing). After capturing the event details and the market’s initial reaction, the next step is to quantify exposure and convert the catalyst into explicit forecast inputs. Scenario/sensitivity work ties the policy outcome to valuation in a way that can be compared across the sector and versus the broader market.

For macro/political events, the market’s first move is only a signal; an analyst’s job is to translate the catalyst into measurable drivers that explain why the sector should underperform or outperform the broader market. Here, the event is a higher probability of expanded Medicare price negotiation with a stated start date, so the workflow should move from “what happened” and “how did prices react” to “who is economically exposed and by how much.”

A practical next step is to:

  • Identify affected revenue pools (Medicare mix by product/portfolio and expected negotiation timing).
  • Convert policy risk into assumptions (price step-downs, rebate dynamics, demand/volume offsets).
  • Run scenarios/sensitivities and document inputs so impacts can be compared across covered names and versus the market baseline.

This sequencing avoids anchoring on price action or peer multiples without a fundamentals-based bridge to earnings and cash flow.

  • Chasing price action: treats relative performance as the input rather than something to be explained with updated fundamentals.
  • Multiple-only adjustment: skips the key step of linking the policy catalyst to earnings/cash-flow drivers that justify any rerating.
  • Waiting for guidance: delays analysis even though the proposal’s mechanics and portfolio exposures can be estimated from disclosures and public data.
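The exposure-to-scenario workflow can be sketched as follows. Every number here (revenue base, Medicare mix, step-downs, volume offsets) is a hypothetical assumption invented for illustration, not a forecast for any real company:

```python
# Sketch of the Question 8 workflow: translate assumed Medicare exposure and
# negotiated price step-downs into a revenue impact, then compare scenarios.
# Every input is a hypothetical assumption, documented inline.

total_revenue = 50_000.0   # $m (assumed)
medicare_mix = 0.30        # share of revenue exposed to negotiation (assumed)

scenarios = {
    "mild":   {"price_step_down": 0.10, "volume_offset": 0.02},
    "base":   {"price_step_down": 0.20, "volume_offset": 0.03},
    "severe": {"price_step_down": 0.35, "volume_offset": 0.05},
}

def revenue_impact(price_step_down: float, volume_offset: float) -> float:
    """Exposed revenue falls by the price cut, partly offset by extra volume."""
    exposed = total_revenue * medicare_mix
    return -exposed * price_step_down * (1 - volume_offset)

for name, s in scenarios.items():
    print(name, round(revenue_impact(**s), 1))
```

Documenting the inputs this way is what makes the impacts comparable across covered names, the point the explanation emphasizes.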

Question 9

An analyst is valuing DeltaCo using price-to-free-cash-flow (P/FCF), defined as market capitalization divided by levered free cash flow (FCF). All amounts are in USD.

Exhibit: DeltaCo (TTM)

Market capitalization: $5.0 billion
Reported FCF: $500 million
One-time working-capital inflow from stretching payables (non-recurring), included in reported FCF: $150 million

Peers trade at a median P/FCF of 12x. If the analyst uses DeltaCo’s reported FCF (unadjusted) to compute P/FCF and compare to peers, what is the most likely outcome?

  • A. The peer comparison is unaffected because P/FCF uses market cap, not cash flow
  • B. DeltaCo will screen cheaper than it truly is, biasing valuation upward
  • C. DeltaCo will screen more expensive than it truly is, biasing valuation downward
  • D. The error mainly affects EV/EBITDA, not P/FCF, because working capital is excluded from cash flow

Best answer: B

Explanation: Using a one-time working-capital inflow overstates FCF, understating P/FCF and making DeltaCo appear undervalued versus peers.

Reported FCF is inflated by a non-recurring working-capital inflow, so dividing market cap by that higher FCF produces an artificially low P/FCF. That makes DeltaCo appear to generate higher-quality, more sustainable cash than it really does. The relative valuation conclusion would therefore be biased toward undervaluation.

P/FCF compares equity value (market cap) to the company’s ability to generate cash available to equity holders. If reported FCF is temporarily boosted by a one-time working-capital inflow (for example, delaying payables), the denominator is overstated.

Using the exhibit:

  • Reported P/FCF \(= 5{,}000/500 = 10\times\)
  • Normalized FCF \(= 500 - 150 = 350\)
  • Normalized P/FCF \(= 5{,}000/350 \approx 14.3\times\)

Using the unadjusted 10x multiple versus a 12x peer median would incorrectly suggest DeltaCo is cheap and has strong cash generation quality, when normalization shows it screens more expensive.

  • Wrong direction: the “more expensive” outcome ignores that inflating FCF mechanically lowers P/FCF.
  • Denominator matters: the comparison changes because cash flow is part of the multiple.
  • Cash-flow linkage: working capital is included in operating cash flow, so it directly affects FCF and therefore P/FCF.
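The normalization from the exhibit, worked through in code:

```python
# The Question 9 normalization: strip the one-time working-capital inflow
# before computing P/FCF. Figures come from the exhibit.

market_cap = 5_000.0      # $m
reported_fcf = 500.0      # $m, includes the one-time inflow
one_time_inflow = 150.0   # $m, non-recurring payables stretch

reported_p_fcf = market_cap / reported_fcf       # 10.0x, screens "cheap"
normalized_fcf = reported_fcf - one_time_inflow  # 350
normalized_p_fcf = market_cap / normalized_fcf   # ~14.3x, screens expensive

peer_median = 12.0
print(round(reported_p_fcf, 1), round(normalized_p_fcf, 1))  # 10.0 14.3
```

Against the 12x peer median, the unadjusted multiple flips the conclusion from "expensive" to "cheap", which is the bias the question tests.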

Question 10

You cover a small-cap specialty retailer with only a few active market makers and an average daily dollar volume under $5 million. On a day when broader equity volatility is elevated, the stock opens up 11% after reporting EPS and revenue roughly in line with consensus and reaffirming prior guidance. In the first 15 minutes, trading volume is only ~20% of the stock’s typical 15-minute open volume, and the bid-ask spread is ~2% versus a normal ~0.3%. With no new 8-K, transcript, or incremental news, what is the single best research conclusion about this price move for your catalyst note?

  • A. Immediately raise the target price to match the new opening print as the best fair value estimate
  • B. Conclude informed investors are accumulating; the low volume confirms stealth buying
  • C. Assume the market is efficiently pricing in positive guidance not yet reflected in consensus
  • D. Treat the move as a noisy signal; thin liquidity and wide spreads can distort price discovery

Best answer: D

Explanation: Low volume plus a sharply wider bid-ask spread in an illiquid name suggests order imbalance/noise rather than strong information-driven repricing.

In illiquid equities, price discovery can be dominated by trading frictions such as wide bid-ask spreads and temporary order imbalances, especially during high-volatility regimes. Because the company’s reported results and guidance were in line and there is no incremental information flow, the combination of low early volume and a much wider spread makes the opening jump a less reliable signal of a new fundamental valuation level.

Price discovery is strongest when an equity is liquid (tight spreads, deep order book, steady volume) and when material information is broadly and quickly disseminated. Here, the stock is structurally illiquid and, on a high-volatility day, the opening move occurs on unusually low volume and an abnormally wide bid-ask spread—conditions consistent with higher transaction costs and greater sensitivity to small trades.

When information flow is limited (no new filing, transcript, or guidance change), a large price change is more likely to reflect:

  • temporary order imbalance at the open
  • wider spreads and thin depth amplifying each trade’s price impact
  • higher short-term volatility increasing noise around “true” value

The appropriate analyst takeaway is to be cautious in interpreting the print as a clean fundamental repricing until liquidity/volume normalizes and incremental information is identified.

  • Hidden guidance assumption fails because the stem states guidance was reaffirmed and no incremental disclosures were available.
  • Mark-to-market target fails because a single illiquid, wide-spread opening print is not a stable fair-value anchor.
  • Stealth accumulation narrative fails because low volume and wide spreads are consistent with noise/price impact, not confirmatory evidence of informed buying.

Continue with full practice

Use the Series 86 Practice Test page for the full Securities Prep route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Use the Series 86 Cheat Sheet on SecuritiesMastery.com when you want a compact review before returning to the FINRA Series 86 Practice Test page.

Revised on Sunday, May 3, 2026