Try 10 focused Series 86 questions on Valuation and Forecasting, with explanations, then continue with the full Securities Prep practice test.
Series 86 Valuation and Forecasting questions help you isolate one part of the FINRA outline before returning to a mixed practice test. The questions below are original Securities Prep practice items aligned to this topic and are not copied from any exam sponsor.
| Item | Detail |
|---|---|
| Exam | FINRA Series 86 |
| Official topic | Function 3 — Valuation and Forecasting |
| Blueprint weighting | 46% |
| Questions on this page | 10 |
You are updating a quarterly forecast model for a U.S. GAAP industrial company and need to map expected items to the statement of cash flows. Which statement is INCORRECT?
Best answer: C
Explanation: A receivables increase is a working-capital use of cash and reduces cash flow from operations.
Working-capital changes are part of cash flow from operations, and a rise in accounts receivable generally means revenue was recognized without collecting cash. That makes it a use of cash (reducing operating cash flow), not an operating cash inflow.
In a forecast, the statement of cash flows is organized into operating, investing, and financing sections. Operating cash flow starts from earnings (often net income) and adjusts for non-cash items and changes in working capital. An increase in accounts receivable means the company extended more credit or collected less of its sales in cash during the period, so cash is lower than accrual earnings; this reduces cash flow from operations. Investing cash flow typically captures purchases and sales of long-lived assets (for example, capital expenditures). Financing cash flow generally captures transactions with providers of capital, such as issuing or repaying debt and paying dividends. The key takeaway is that working-capital buildup (like higher receivables) is a cash outflow within operating activities.
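The working-capital logic above can be sketched in a few lines of Python. The function and all figures below are hypothetical, not from the question; they simply illustrate the indirect-method adjustment where a rise in receivables reduces operating cash flow.

```python
# Minimal indirect-method sketch (hypothetical figures): start from net
# income, add back non-cash items, then subtract working-capital uses
# such as an increase in accounts receivable.
def operating_cash_flow(net_income, depreciation, increase_in_receivables):
    """Higher receivables = revenue recognized but not collected, a use of cash."""
    return net_income + depreciation - increase_in_receivables

# Example: $100 net income, $20 D&A, receivables up $30
print(operating_cash_flow(100, 20, 30))  # 90
```

Note how the $30 receivables build directly lowers operating cash flow below accrual earnings plus depreciation.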
A company increases leverage by issuing new debt and using all proceeds to repurchase common shares at their current fair value. Assume the company’s enterprise value (EV) is unchanged by the recapitalization and ignore taxes and transaction costs.
Which statement is most accurate?
Best answer: A
Explanation: With EV unchanged, equity value equals EV minus net debt, so higher net debt lowers total equity value, but buying back shares at fair value keeps value per share the same.
Holding EV constant, raising net debt reduces total equity value because equity is the residual claim: \(\text{Equity Value}=\text{EV}-\text{Net Debt}\). If the firm repurchases shares at fair value, the reduced equity value is matched by a proportionate reduction in shares outstanding, leaving intrinsic value per share unchanged.
The core linkage is the bridge from enterprise value to equity value: \(\text{Equity Value}=\text{EV}-\text{Net Debt}\) (where net debt is debt minus cash). If EV does not change, increasing leverage (higher net debt) mechanically reduces total equity value dollar-for-dollar.
Per-share value depends on both the equity value and the share count. When the company uses the debt proceeds to repurchase shares at their current fair value, it is effectively exchanging cash (financed by debt) for shares at a “fair” price, so the reduction in equity value is offset by a proportional reduction in shares outstanding. As a result, intrinsic value per share is unchanged under the stated assumptions.
Per-share value would change only if EV changes or the repurchase price differs from fair value (or if taxes/other frictions are introduced).
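The recapitalization mechanics can be verified numerically. The figures below (EV of 1,000, net debt of 200, 100 shares, 100 of new debt) are assumed for illustration only:

```python
# Sketch of a leveraged recap at fair value with EV held constant
# (hypothetical numbers). Equity falls dollar-for-dollar with new debt,
# but the share count falls proportionately, so value per share is flat.
def recap(ev, net_debt, shares, new_debt):
    equity_before = ev - net_debt
    px = equity_before / shares                 # fair value per share
    shares_bought = new_debt / px               # buyback at fair value
    equity_after = ev - (net_debt + new_debt)   # equity value falls by new_debt
    px_after = equity_after / (shares - shares_bought)
    return px, px_after

before, after = recap(ev=1000, net_debt=200, shares=100, new_debt=100)
print(before, after)  # 8.0 8.0 -- per-share value unchanged
```

Changing the repurchase price away from fair value (or letting EV move) is what would break the per-share equivalence.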
A packaged-food company announces a catalyst: a 5% list price increase on ~70% of its portfolio effective next quarter. In the press release, management states it expects unit volumes to be “roughly flat” over the next 12 months and notes near-term input cost inflation that will pressure gross margin until contracts reset.
Analyst 1 raises next year revenue growth by increasing unit volumes and holds gross margin flat. Analyst 2 increases average selling price (ASP) on the affected mix, keeps unit volumes flat, and models near-term gross margin compression.
Which analyst update best maps the catalyst to the appropriate model drivers?
Best answer: A
Explanation: A price increase is an ASP/mix driver, and management explicitly guides to flat units and near-term margin pressure from costs.
A list price increase is most directly reflected in revenue via higher ASP (and mix on the affected portfolio), not higher unit volumes. With management guiding to roughly flat units, the cleanest mapping is to keep volume assumptions unchanged. Input cost inflation that management says will pressure margins should be reflected as near-term gross margin compression.
Catalysts should be translated into the specific operating drivers they most directly affect. A broad list price increase primarily changes revenue through ASP (and mix if not all products are affected), while unit volume assumptions should follow explicit volume guidance and demand elasticity evidence. Here, management states unit volumes should be roughly flat, so raising volume growth contradicts the catalyst narrative.
Management also flags near-term input cost inflation that will pressure gross margin, so the model should reflect margin compression until costs reset (e.g., through supplier contracts or hedges). The key is aligning each forecast line item with the mechanism described: pricing maps to ASP/mix, and cost inflation maps to gross margin; neither maps to the valuation multiple or capex by default.
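The driver math from the stem can be checked directly: a 5% price increase on ~70% of the portfolio, with flat units, implies roughly 3.5% revenue growth from price alone. This sketch uses only the figures given in the question:

```python
# Mapping the catalyst to drivers: 5% list price increase on ~70% of the
# portfolio, unit volumes held flat per management guidance.
affected_mix = 0.70
price_increase = 0.05
volume_growth = 0.0  # management guides to roughly flat units

asp_growth = affected_mix * price_increase            # blended ASP effect
revenue_growth = (1 + asp_growth) * (1 + volume_growth) - 1
print(f"{revenue_growth:.1%}")  # 3.5%
```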
You are building a relative valuation peer set (EV/Revenue and EV/EBITDA) for TargetCo. The goal is to select peers with similar business mix (recurring vs services/license), growth profile, and geography.
Exhibit: Company profiles (latest FY)
| Company | Business model | Recurring revenue | Revenue from U.S. | Primary end market | 3-yr revenue CAGR | EBITDA margin |
|---|---|---|---|---|---|---|
| TargetCo | Vertical SaaS | 95% | 90% | U.S. outpatient healthcare providers | 20% | 25% |
| AdNova | Digital ads + data platform | 70% | 55% | Global advertisers/consumer internet | 25% | 30% |
| ClinicWare | Vertical SaaS | 93% | 88% | U.S. outpatient healthcare providers | 18% | 24% |
| IntegraIT | IT consulting/implementation | 35% | 80% | U.S. enterprise IT projects | 10% | 12% |
| LegacySoft | On-prem license + maintenance | 60% | 85% | U.S. industrial/manufacturing | 4% | 32% |
Which candidate is best supported by the exhibit as the most appropriate peer for TargetCo in a comps-based valuation?
Best answer: C
Explanation: It most closely matches TargetCo’s vertical SaaS model, U.S. exposure, and mid-to-high growth profile.
A strong comps peer should resemble the target on the key value drivers investors use to price the multiples being compared. The exhibit shows ClinicWare aligns most closely with TargetCo on business mix (high recurring SaaS), geography (mostly U.S.), and growth (high-teens to ~20% CAGR). That makes its EV/Revenue and EV/EBITDA multiples more interpretable for TargetCo.
Peer selection for relative valuation is about matching the factors that drive differences in trading multiples, especially when using EV/Revenue and EV/EBITDA for software. The exhibit indicates TargetCo is a high-recurring, U.S.-centric vertical SaaS company serving outpatient healthcare providers with ~20% growth and mid-20s EBITDA margins. ClinicWare is the closest match on all three inclusion criteria: similar recurring revenue mix (93% vs 95%), similar U.S. revenue concentration (88% vs 90%), and a comparable growth profile (18% vs 20%) in the same end market. The other candidates introduce major comparability breaks (different monetization/end market, much lower recurring mix due to services, or substantially lower growth and different license economics), which would make their multiples less diagnostic for TargetCo.
Key takeaway: prioritize peers with similar business model, geography, and growth to reduce multiple “noise.”
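A simple screen over the exhibit makes the selection mechanical. The thresholds below (recurring revenue ≥ 85%, U.S. revenue ≥ 75%, growth within 5 points of TargetCo's 20%) are illustrative assumptions, not published criteria:

```python
# Hypothetical peer screen over the exhibit data: keep candidates with
# high recurring revenue, mostly-U.S. sales, and a growth rate close
# to TargetCo's ~20% CAGR.
peers = [
    {"name": "AdNova",     "recurring": 0.70, "us": 0.55, "cagr": 0.25},
    {"name": "ClinicWare", "recurring": 0.93, "us": 0.88, "cagr": 0.18},
    {"name": "IntegraIT",  "recurring": 0.35, "us": 0.80, "cagr": 0.10},
    {"name": "LegacySoft", "recurring": 0.60, "us": 0.85, "cagr": 0.04},
]
matches = [p["name"] for p in peers
           if p["recurring"] >= 0.85 and p["us"] >= 0.75
           and abs(p["cagr"] - 0.20) <= 0.05]
print(matches)  # ['ClinicWare']
```

In practice the thresholds are judgment calls, but making them explicit keeps the peer set defensible and reproducible.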
You cover a profitable mid-cap SaaS company that has historically traded at a premium EV/EBITDA multiple due to high expected growth and long-duration cash flows. Over the last month, 10-year Treasury yields rose about 100bp as the market priced in “higher-for-longer” policy, while company fundamentals and guidance were unchanged.
Which approach best aligns with durable research standards when assessing the risk that the stock’s valuation multiple could re-rate?
Best answer: A
Explanation: A rates-driven re-rating should be analyzed through discount-rate assumptions with transparent sensitivity and a sanity check versus peer-implied multiples.
A rise in long-term rates can compress valuation multiples, especially for long-duration growth equities, even when company fundamentals are unchanged. The most defensible approach is to connect the macro catalyst to discount-rate inputs (and, if used, terminal value assumptions), quantify the impact with sensitivities, and cross-check the resulting valuation versus comparable-company multiples under the new rate regime.
Macro catalysts like higher long-term rates often re-rate multiples by changing the discount rate investors apply to future cash flows; this effect is typically larger for “long-duration” growth stocks where more value comes from later years. Durable research practice is to make the mechanism explicit and quantify it, rather than applying an arbitrary multiple cut.
A sound workflow is:

- Update the discount-rate inputs (e.g., the risk-free rate within the cost of equity or WACC) to reflect the new rate environment.
- Run sensitivities on the discount rate and, where used, terminal-value assumptions to quantify the potential re-rating.
- Cross-check the resulting implied multiple against comparable-company multiples under the new rate regime.
This keeps assumptions evidence-based, comparable across names, and transparent about uncertainty.
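The discount-rate mechanism can be illustrated with a growing perpetuity. The inputs below (a 9% discount rate rising to 10%, 4% growth) are hypothetical and are not the author's model; they show why long-duration cash flows are especially rate-sensitive:

```python
# Illustrative (assumed inputs): value of a growing perpetuity,
# V = CF / (r - g). A 100bp rise in r compresses value sharply when
# the spread (r - g) is small, as for long-duration growth equities.
def pv(cf, r, g):
    return cf / (r - g)

base = pv(100, 0.09, 0.04)      # 2000.0
shocked = pv(100, 0.10, 0.04)   # ~1666.7
print(round((shocked / base - 1) * 100, 1))  # ~ -16.7 (% re-rating)
```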
In an equity DCF, an analyst wants to show how the implied equity value range changes when key discount-rate and terminal-value assumptions vary. Which analysis feature best matches this purpose?
Best answer: B
Explanation: Varying WACC and terminal assumptions directly shows how DCF value ranges change under different discount-rate and terminal-value inputs.
A DCF sensitivity that flexes WACC and terminal assumptions (terminal growth rate or exit multiple) isolates the two inputs that most often drive the present value of cash flows and the terminal value. Presenting the results as a two-way table communicates a valuation range and how sensitive the implied equity value is to these assumptions.
DCF sensitivity analysis is used to communicate how valuation changes when key, uncertain assumptions move. Two of the highest-impact assumptions are the discount rate (WACC), which affects the present value of all forecast cash flows, and the terminal assumption (perpetual growth rate or exit multiple), which often drives a large portion of enterprise value through terminal value. A two-way sensitivity table varies WACC across a reasonable range on one axis and the terminal assumption across a reasonable range on the other, producing a grid of implied values. This lets the analyst interpret a valuation range (e.g., “base case” cell with upside/downside cells around it) and identify whether the conclusion is robust or highly assumption-dependent. Other valuation tools may create ranges, but they are not specifically designed to isolate WACC-versus-terminal assumption effects in a DCF.
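A minimal version of such a grid can be sketched as follows. All inputs (a flat $100 FCF forecast over five years, the WACC and growth ranges) are assumptions for illustration, not values from the question:

```python
# Sketch of a two-way DCF sensitivity grid (hypothetical inputs):
# terminal value via Gordon growth, added to the PV of a flat FCF forecast.
def dcf_value(fcf, wacc, g, years=5):
    pv_fcf = sum(fcf / (1 + wacc) ** t for t in range(1, years + 1))
    tv = fcf * (1 + g) / (wacc - g)       # terminal value at year `years`
    pv_tv = tv / (1 + wacc) ** years
    return pv_fcf + pv_tv

waccs = [0.08, 0.09, 0.10]
growths = [0.02, 0.025, 0.03]
grid = {(w, g): round(dcf_value(100, w, g)) for w in waccs for g in growths}
```

Each cell of `grid` is an implied value; the base case sits in the center with upside/downside cells around it, making the valuation range and its assumption-dependence explicit.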
You are building an assumption table for a subscription software company and want each major forecast input tied to evidence.
Exhibit (FY2025 actual; FY2026 company commentary):
| Item | Value |
|---|---|
| FY2025 revenue | $1,000 million |
| FY2025 ending subscribers | 2.0 million |
| FY2025 ARPU (revenue per subscriber) | $500 |
| FY2026 guidance (earnings call) | Ending subscribers up ~10% YoY |
| FY2026 pricing disclosure (10-Q) | 3% list-price increase effective Jan 1, FY2026 |
Assume FY2026 revenue is approximated by ending subscribers \(\times\) ARPU. Which assumption-table entry is most appropriate for FY2026 revenue (includes the correct implied calculation and the best evidence/source mapping)?
Best answer: D
Explanation: It applies subscriber and price growth multiplicatively using the cited earnings-call guidance and 10-Q disclosure.
A good assumption table ties each driver to the most direct primary evidence and shows the arithmetic that converts drivers into the forecast. Here, subscriber growth and ARPU growth both affect revenue, so the implied revenue growth is \(1.10 \times 1.03 - 1 = 13.3\%\), producing $1,133 million from $1,000 million.
An assumption table should (1) name the forecast input, (2) show how it is quantified, (3) cite the best supporting source, and (4) reconcile to the implied forecast result. In this setup, revenue is driven by subscribers and ARPU, so you should use management’s subscriber guidance from the earnings call and the price increase disclosed in the 10-Q, then translate those drivers into an implied revenue number.
\[ \begin{aligned} \text{FY2026 revenue} &= 1{,}000 \times 1.10 \times 1.03 \\ &= 1{,}133 \,\text{(million)} \\ \text{Implied growth} &= 1.10 \times 1.03 - 1 = 13.3\% \end{aligned} \]

The key integrity check is using the right driver math (multiplicative compounding) and the most authoritative sources for each driver.
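The same arithmetic can be expressed as a short, auditable calculation using only the exhibit's figures:

```python
# Assumption-table math from the exhibit: drivers compound multiplicatively.
fy2025_revenue = 1000.0       # $ millions (FY2025 actual)
subscriber_growth = 0.10      # earnings-call guidance: ending subs up ~10%
price_increase = 0.03         # 10-Q disclosure: 3% list-price increase

fy2026_revenue = fy2025_revenue * (1 + subscriber_growth) * (1 + price_increase)
implied_growth = (1 + subscriber_growth) * (1 + price_increase) - 1
print(round(fy2026_revenue), f"{implied_growth:.1%}")  # 1133 13.3%
```

Adding the growth rates (10% + 3% = 13%) would understate revenue by roughly $3 million; compounding is the correct driver math.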
An analyst covers U.S. large-cap pharmaceutical companies. After an election, investors assign higher odds to legislation that would expand Medicare drug price negotiation starting in 2028. By midday, the Health Care sector index is down 2.8% while the S&P 500 is flat.
The analyst has already (1) summarized the proposal’s key provisions and timeline from public sources and (2) pulled the sector’s and peers’ relative price moves versus the market. What is the best next step to assess likely sector relative performance and update the valuation view?
Best answer: B
Explanation: Quantifying exposure and translating the policy catalyst into explicit model drivers enables a defensible view of relative impact versus the broader market.
A political catalyst affects sector relative performance through specific company and industry fundamentals (e.g., end-market exposure, pricing power, and timing). After capturing the event details and the market’s initial reaction, the next step is to quantify exposure and convert the catalyst into explicit forecast inputs. Scenario/sensitivity work ties the policy outcome to valuation in a way that can be compared across the sector and versus the broader market.
For macro/political events, the market’s first move is only a signal; an analyst’s job is to translate the catalyst into measurable drivers that explain why the sector should underperform or outperform the broader market. Here, the event is a higher probability of expanded Medicare price negotiation with a stated start date, so the workflow should move from “what happened” and “how did prices react” to “who is economically exposed and by how much.”
A practical next step is to:

- Quantify each covered company's revenue and profit exposure to drugs likely to face expanded negotiation, including the 2028 start date.
- Translate the policy outcome into explicit model drivers (price, volume, margin) under base, upside, and downside scenarios.
- Compare the probability-weighted valuation impact across the sector and versus the broader market.
This sequencing avoids anchoring on price action or peer multiples without a fundamentals-based bridge to earnings and cash flow.
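The scenario step above can be sketched numerically. Every input here (base EPS, exposure share, price cut, flow-through, policy probability) is a hypothetical placeholder, since the question provides no company financials:

```python
# Hypothetical scenario sketch: haircut EPS by a company's exposure to
# negotiated drug prices, then probability-weight the policy outcome.
def scenario_eps(base_eps, exposed_rev_share, price_cut, margin_flowthrough):
    # Earnings hit = exposed revenue share x price cut x margin flow-through
    hit = exposed_rev_share * price_cut * margin_flowthrough
    return base_eps * (1 - hit)

def expected_eps(base_eps, policy_prob, **kw):
    return policy_prob * scenario_eps(base_eps, **kw) + (1 - policy_prob) * base_eps

eps = expected_eps(5.00, policy_prob=0.6,
                   exposed_rev_share=0.30, price_cut=0.25, margin_flowthrough=1.5)
print(round(eps, 4))  # 4.6625
```

This makes the bridge from policy probability to earnings explicit, rather than anchoring on the day-one price move.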
An analyst is valuing DeltaCo using price-to-free-cash-flow (P/FCF), defined as market capitalization divided by levered free cash flow (FCF). All amounts are in USD.
Exhibit: DeltaCo (TTM)
| Item | Amount |
|---|---|
| Market capitalization | $5.0 billion |
| Reported FCF | $500 million |
| Includes a one-time working-capital inflow from stretching payables (non-recurring) | $150 million |
Peers trade at a median P/FCF of 12x. If the analyst uses DeltaCo’s reported FCF (unadjusted) to compute P/FCF and compare to peers, what is the most likely outcome?
Best answer: B
Explanation: Using a one-time working-capital inflow overstates FCF, understating P/FCF and making DeltaCo appear undervalued versus peers.
Reported FCF is inflated by a non-recurring working-capital inflow, so dividing market cap by that higher FCF produces an artificially low P/FCF. That makes DeltaCo appear to generate higher-quality, more sustainable cash than it really does. The relative valuation conclusion would therefore be biased toward undervaluation.
P/FCF compares equity value (market cap) to the company’s ability to generate cash available to equity holders. If reported FCF is temporarily boosted by a one-time working-capital inflow (for example, delaying payables), the denominator is overstated.
Using the exhibit:

- Unadjusted P/FCF: $5.0 billion ÷ $500 million = 10x.
- Normalized FCF: $500 million − $150 million one-time inflow = $350 million.
- Adjusted P/FCF: $5.0 billion ÷ $350 million ≈ 14.3x, above the 12x peer median.
Using the unadjusted 10x multiple versus a 12x peer median would incorrectly suggest DeltaCo is cheap and has strong cash generation quality, when normalization shows it screens more expensive.
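The normalization can be confirmed directly from the exhibit's figures:

```python
# P/FCF with and without the one-time working-capital inflow (exhibit data).
market_cap = 5000.0       # $ millions
reported_fcf = 500.0      # includes the one-time inflow
one_time_inflow = 150.0   # non-recurring payables stretch

unadjusted = market_cap / reported_fcf                    # looks cheap vs 12x peers
adjusted = market_cap / (reported_fcf - one_time_inflow)  # screens expensive
print(round(unadjusted, 1), round(adjusted, 1))  # 10.0 14.3
```

Normalizing the denominator flips the relative-value conclusion from "cheap at 10x" to "expensive at ~14.3x" against the 12x peer median.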
You cover a small-cap specialty retailer with only a few active market makers and an average daily dollar volume under $5 million. On a day when broader equity volatility is elevated, the stock opens up 11% after reporting EPS and revenue roughly in line with consensus and reaffirming prior guidance. In the first 15 minutes, trading volume is only ~20% of the stock’s typical 15-minute open volume, and the bid-ask spread is ~2% versus a normal ~0.3%. With no new 8-K, transcript, or incremental news, what is the single best research conclusion about this price move for your catalyst note?
Best answer: D
Explanation: Low volume plus a sharply wider bid-ask spread in an illiquid name suggests order imbalance/noise rather than strong information-driven repricing.
In illiquid equities, price discovery can be dominated by trading frictions such as wide bid-ask spreads and temporary order imbalances, especially during high-volatility regimes. Because the company’s reported results and guidance were in line and there is no incremental information flow, the combination of low early volume and a much wider spread makes the opening jump a less reliable signal of a new fundamental valuation level.
Price discovery is strongest when an equity is liquid (tight spreads, deep order book, steady volume) and when material information is broadly and quickly disseminated. Here, the stock is structurally illiquid and, on a high-volatility day, the opening move occurs on unusually low volume and an abnormally wide bid-ask spread—conditions consistent with higher transaction costs and greater sensitivity to small trades.
When information flow is limited (no new filing, transcript, or guidance change), a large price change is more likely to reflect:

- temporary order imbalances in a thin order book,
- a wide bid-ask spread that amplifies the price impact of small trades, and
- elevated market-wide volatility rather than new company-specific information.
The appropriate analyst takeaway is to be cautious in interpreting the print as a clean fundamental repricing until liquidity/volume normalizes and incremental information is identified.
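A simple screen can flag moves like this one for extra scrutiny. The thresholds below (early volume under 50% of normal, spread more than 3x normal) are illustrative assumptions, not established rules:

```python
# Hypothetical liquidity screen: flag a price move as low-conviction when
# early volume is well below normal or the quoted spread is far wider
# than usual -- conditions under which price discovery is weakest.
def is_noisy_move(volume_ratio, spread, normal_spread,
                  min_volume=0.5, max_spread_mult=3.0):
    return volume_ratio < min_volume or spread > max_spread_mult * normal_spread

# The question's facts: ~20% of normal open volume, ~2% spread vs ~0.3% normal
print(is_noisy_move(volume_ratio=0.20, spread=0.02, normal_spread=0.003))  # True
```

Both conditions trip here, supporting the conclusion that the opening jump is more likely order imbalance than information-driven repricing.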
Use the Series 86 Practice Test page for the full Securities Prep route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Use the Series 86 Cheat Sheet on SecuritiesMastery.com when you want a compact review before returning to the FINRA Series 86 Practice Test page.