Browse Certification Practice Tests by Exam Family

Series 86: Information and Data Collection

Try 10 focused Series 86 questions on Information and Data Collection, with explanations, then continue with the full Securities Prep practice test.

Series 86 Information and Data Collection questions help you isolate one part of the FINRA outline before returning to a mixed practice test. The questions below are original Securities Prep practice items aligned to this topic and are not copied from any exam sponsor.

Open the matching Securities Prep practice route for timed mocks, topic drills, progress tracking, explanations, and the full question bank.

Topic snapshot

  • Exam: FINRA Series 86
  • Official topic: Function 1 — Information and Data Collection
  • Blueprint weighting: 21%
  • Questions on this page: 10

Sample questions

Question 1

An analyst wants to assess a packaged-food company’s pricing flexibility and its impact on demand and profitability. The analyst plans to use historical price and unit-volume data (controlling for promotions and mix) to estimate how the percentage change in quantity demanded responds to a percentage change in price.

Which analytical tool best matches this feature/function?

  • A. Total addressable market (TAM) sizing
  • B. Porter’s Five Forces framework
  • C. Price elasticity of demand analysis
  • D. Herfindahl-Hirschman Index (HHI) concentration measure

Best answer: C

Explanation: It quantifies how unit demand changes as price changes, directly informing pricing power and margin risk.

Estimating how unit volume responds to price changes is an application of price elasticity of demand. By quantifying sensitivity of demand to price, the analyst can judge whether the company can raise prices without a disproportionate volume decline, which is central to forecasting revenue and gross margin under competitive pressure.

Price elasticity of demand measures demand sensitivity to price and is commonly estimated from historical price and volume data while controlling for other factors (promotions, mix, seasonality). If demand is relatively inelastic in the relevant price range, the company generally has more pricing power and can pass through cost inflation with less volume loss, supporting profitability. If demand is elastic, price increases are more likely to reduce units enough to pressure revenue and margins, especially in a competitive category with close substitutes. This tool directly links competitive dynamics and customer behavior to pricing flexibility, which is why it is often used in industry and company analysis.

  • Industry structure lens: Five Forces assesses competitive pressures but does not quantify the volume response to price changes.
  • Market size focus: TAM sizing estimates the potential level of demand, not the sensitivity of demand to price.
  • Concentration proxy: HHI summarizes market concentration and competition intensity, but it is not a direct measure of pricing flexibility.
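As a sketch of how such an estimate might be produced: the price/unit observations below are invented for illustration (assumed already controlled for promotions and mix), and a log-log OLS fit is only one of several estimation approaches.

```python
import numpy as np

# Hypothetical price/unit observations for a packaged-food product.
prices = np.array([2.00, 2.10, 2.25, 2.40, 2.50, 2.75])
units  = np.array([1000,  960,  900,  850,  820,  740])

# ln(Q) = a + b*ln(P): the slope b is the price elasticity of demand.
slope, intercept = np.polyfit(np.log(prices), np.log(units), 1)
print(f"Estimated price elasticity: {slope:.2f}")
```

For these made-up data the fitted slope has a magnitude just below 1, i.e. demand is slightly inelastic over the observed price range; a real estimate would also control for seasonality and competitor pricing.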

Question 2

You are updating a DCF-based valuation for a rate-sensitive U.S. homebuilder after the Fed (1) raises the target fed funds range by 50bp and (2) announces an accelerated pace of balance-sheet runoff (quantitative tightening). In the two sessions following the announcement, the 2-year Treasury yield rises 45bp and the 10-year rises 20bp.

Which approach best aligns with durable research standards when translating these monetary policy tools into assumptions about market liquidity and interest rates?

  • A. Anchor the risk-free curve to market-implied Treasury/OIS levels, explain that rate hikes raise overnight funding costs while QT withdraws reserves and can lift term premia, and present sensitivity cases for both the policy path and liquidity/term-premium uncertainty
  • B. Model QT as increasing market liquidity and lowering long-term yields because the Fed is reducing its balance sheet
  • C. Increase WACC by exactly 50bp because the Fed hike mechanically shifts all rates by the same amount
  • D. Leave discount rates unchanged until the effects appear in next quarter’s reported inflation and GDP data

Best answer: A

Explanation: It ties assumptions to observable rate moves, explains the liquidity-and-rate transmission of hikes and QT, and transparently frames uncertainty with scenarios.

A durable macro-to-model translation starts with observable market prices and a clear mechanism. Policy rate hikes primarily raise short-term funding costs, while QT tends to reduce system reserves/liquidity and can pressure longer-term yields via higher term premia. Using market-implied curves plus explicit sensitivity ranges makes the assumptions evidence-based and transparent.

Monetary policy tools affect interest rates through both the expected path of short-term rates and the amount of liquidity (reserves) in the financial system. A fed funds hike (implemented via tools like interest on reserves and open market operations) transmits quickly to overnight rates and short-dated yields, which is why anchoring the short end to market-implied OIS/Treasury levels is an evidence-based starting point. Quantitative tightening reduces the Fed’s balance sheet, generally shrinking reserves and market liquidity; that can widen risk premiums and/or lift the term premium embedded in longer-maturity yields, affecting discount rates beyond a one-for-one policy move. Because the market response and lags are uncertain, best practice is to explain the mechanism, align assumptions to observable curve moves, and present sensitivity cases rather than hard-coding a simplistic parallel shift. The key takeaway is to separate “policy-rate path” effects from “liquidity/term-premium” effects and disclose both.

  • Mechanical parallel shift: ignores that hikes mainly reprice the short end and that longer yields also reflect term premia and risk premiums.
  • Wait for macro prints: skips timely, market-based evidence and understates how quickly policy expectations move rates.
  • “QT increases liquidity”: reverses the direction; balance-sheet runoff generally drains reserves rather than adding liquidity.
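The scenario framing in choice A can be sketched numerically. The cash flows and discount-rate levels below are hypothetical placeholders, not a full DCF (no terminal value, flat curve), but they show why presenting a sensitivity range matters.

```python
# Illustrative rate sensitivity: PV of a fixed cash-flow stream under
# alternative discount rates (all figures hypothetical, not a full DCF).
cash_flows = [100, 105, 110, 115, 120]  # years 1-5, $M

def pv(cfs, rate):
    """Present value of year-end cash flows at a flat discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs, start=1))

base = 0.085  # base case anchored to a market-implied level (assumed)
for label, r in [("low (policy path eases)", base - 0.005),
                 ("base (market-implied)", base),
                 ("high (+ term premium)", base + 0.005)]:
    print(f"{label:26s} r = {r:.3f}  PV = ${pv(cash_flows, r):.1f}M")
```

Even a 50bp band around the base discount rate moves the PV visibly, which is why hard-coding a single parallel shift understates the uncertainty the analyst should disclose.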

Question 3

When estimating an industry’s growth rate from a beginning market size (\(V_{begin}\)) and ending market size (\(V_{end}\)) over \(n\) years, which expression correctly calculates the compound annual growth rate (CAGR)?

  • A. \(\left(\frac{V_{end}}{V_{begin}}\right)^{1/n}-1\)
  • B. \(\left(\frac{V_{end}}{V_{begin}}\right)^{1/(n-1)}-1\)
  • C. \(\left(\frac{V_{end}}{V_{begin}}-1\right)\div n\)
  • D. \(\frac{V_{end}-V_{begin}}{V_{begin}}\div n\)

Best answer: A

Explanation: CAGR is the geometric annualized growth rate over \(n\) years based on the ending-to-beginning value ratio.

CAGR is the constant annual growth rate that compounds from the beginning market size to the ending market size over \(n\) years. It uses a geometric (compounded) approach, so it is based on the ratio \(V_{end}/V_{begin}\) raised to the power \(1/n\), then subtracts 1 to convert to a rate.

When an analyst has two market size observations (start and end) and needs a single annual growth rate for sizing or forecasting, CAGR provides the compounded rate that links the two points. It is calculated from the ratio of ending to beginning value, annualized over the number of years.

\[ \text{CAGR} = \left(\frac{V_{end}}{V_{begin}}\right)^{1/n}-1 \]

Methods that divide total percent change by \(n\) produce an arithmetic average that ignores compounding and will generally differ from CAGR.

  • Arithmetic averaging: dividing total percent change by \(n\) ignores compounding.
  • Wrong order of operations: dividing \((V_{end}/V_{begin}-1)\) by \(n\) is not equivalent to annual compounding.
  • Off-by-one years: using \(n-1\) misstates the number of compounding periods when the horizon is \(n\) years.
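The difference between the geometric and arithmetic approaches can be checked directly; the market values below are hypothetical.

```python
def cagr(v_begin, v_end, n_years):
    """Geometric (compound) annual growth rate over n_years."""
    return (v_end / v_begin) ** (1 / n_years) - 1

# Hypothetical example: a market grows from $10.0B to $14.6B over 4 years.
rate = cagr(10.0, 14.6, 4)
print(f"CAGR: {rate:.2%}")  # about 9.9% per year

# The arithmetic shortcut (total change / n) overstates the annual rate here.
naive = (14.6 / 10.0 - 1) / 4
print(f"Arithmetic average: {naive:.2%}")  # 11.50%
```

The gap widens as the horizon lengthens or the total change grows, which is why compounding matters for multi-year market sizing.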

Question 4

An analyst is estimating the 2025 U.S. market size for residential solar inverters.

  • A third-party industry report shows end-user spending of $4.0 billion (includes distributor and installer markups).
  • U.S. Census data show manufacturer shipments of $3.2 billion (factory-gate value).

The analyst adds the two figures to estimate a $7.2 billion market and then uses that market size to assess the company’s current market share and “runway” for growth. What is the most likely outcome of this analytic choice?

  • A. Market growth will be unbiased because using two sources automatically normalizes definitions
  • B. Market size is understated because end-user spending excludes wholesale channel revenue
  • C. Market size is overstated due to double-counting, making market share look too low
  • D. The company’s market share will look too high because shipments omit installer margins

Best answer: C

Explanation: End-user spend and factory-gate shipments are different value-chain views of largely the same sales, so summing them inflates TAM and depresses implied share.

End-user spending and manufacturer shipments are not additive; they typically represent overlapping dollars at different points in the value chain. Summing them double-counts the same underlying units, inflating the estimated market size. An inflated market size will mechanically reduce the company’s implied current market share and can make projected penetration and growth look more achievable than they are.

When estimating industry market size, the key is to use a consistent definition and level of the value chain. “Manufacturer shipments” are usually measured at factory-gate (producer) prices, while “end-user spending” is measured at the customer level and embeds channel markups and sometimes installation/service components. If you add these two series, you are typically counting the same products twice—once at the producer level and again at the customer level.

In this scenario, the inflated TAM will mechanically lower the company’s implied market share (company revenue divided by TAM). That can lead to overly generous conclusions about remaining penetration/runway and can bias top-down revenue forecasts and valuation upward if the analyst anchors growth assumptions to that apparent whitespace. The fix is to pick one level (producer or end-user) and reconcile scope differences before calculating size and growth.

  • Understated TAM: would require missing components (e.g., imports/services), not the addition of overlapping series.
  • “Two sources normalize definitions”: incorrect; differing scopes and pricing levels must be reconciled, not averaged by inclusion.
  • Share too high: conflicts with the arithmetic; a larger (inflated) denominator reduces implied share.
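The arithmetic can be sketched directly using the question's figures; the company revenue of $0.8B is assumed purely for illustration.

```python
# Double-counting illustration (figures in $B).
end_user_spend  = 4.0   # customer-level (includes channel markups)
shipments       = 3.2   # factory-gate (producer level)
company_revenue = 0.8   # assumed for illustration

summed_tam     = end_user_spend + shipments   # $7.2B -- double-counts units
consistent_tam = shipments                    # pick ONE value-chain level

print(f"Implied share vs summed TAM:     {company_revenue / summed_tam:.1%}")
print(f"Implied share vs consistent TAM: {company_revenue / consistent_tam:.1%}")
```

Against the summed $7.2B TAM the implied share is about 11%, versus 25% against the consistent producer-level figure, so the double-counted denominator makes the "runway" look more than twice as large as it is.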

Question 5

You are reviewing a regression intended to explain monthly U.S. retail sales growth (ex-autos) using macro indicators.

Exhibit: OLS regression output (n = 60 months)

  • Dependent variable: retail sales growth (%)
  • Independent variables: unemployment rate (percentage points), consumer confidence index (points)

  Variable              Coefficient   p-value
  Intercept                    0.20      0.30
  Unemployment rate           -0.90      0.01
  Consumer confidence          0.08      0.04

Model fit: \(R^2 = 0.35\) (Adj. \(R^2 = 0.32\))

Which statement is INCORRECT?

  • A. An \(R^2\) of 0.35 implies about 35% of the variation in unemployment is explained by retail sales growth
  • B. The consumer confidence coefficient implies higher confidence is associated with higher retail sales growth
  • C. An \(R^2\) of 0.35 implies the model explains about 35% of the variation in retail sales growth
  • D. The negative unemployment coefficient implies retail sales growth tends to decline when unemployment rises

Best answer: A

Explanation: \(R^2\) describes the fraction of variation in the dependent variable (retail sales growth), not in an independent variable like unemployment.

In a standard OLS regression, coefficient signs indicate the direction of association with the dependent variable, and p-values indicate whether the relationship is statistically distinguishable from zero. \(R^2\) summarizes how much of the dependent variable’s variation is explained by the included regressors. It does not describe variation explained in an independent variable.

To interpret regression output conceptually, start by identifying the dependent variable, because both coefficients and \(R^2\) are defined relative to it. Here, the dependent variable is retail sales growth. A negative coefficient on unemployment means that, holding consumer confidence constant, increases in unemployment are associated with lower retail sales growth; the low p-value suggests this relationship is statistically significant in the sample. The positive coefficient on consumer confidence indicates higher confidence is associated with higher retail sales growth, also with a statistically significant p-value.

Model fit is summarized by \(R^2\): an \(R^2\) of 0.35 means the regressors jointly explain about 35% of the variation in retail sales growth, leaving the remainder attributable to factors not in the model and random noise. \(R^2\) should not be interpreted as explaining variation in unemployment or any other independent variable.

  • Sign interpretation: Interpreting the negative unemployment coefficient as a negative association with retail sales growth is consistent with the output.
  • Magnitude context: The positive consumer confidence coefficient supports an interpretation that higher confidence corresponds to higher retail sales growth, all else equal.
  • Fit statistic: Interpreting \(R^2\) as explained variation in the dependent variable is the correct use of the statistic.
  • Wrong variable for \(R^2\): Treating \(R^2\) as explained variation in an independent variable reverses what \(R^2\) measures.
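A small simulation (synthetic data, not the exhibit's) makes the definition concrete: \(R^2\) is computed from residuals of the dependent variable, so it can only describe variation in \(y\).

```python
import numpy as np

# Synthetic demonstration of what R^2 measures: the fraction of variation
# in the DEPENDENT variable explained by the fitted regression.
rng = np.random.default_rng(0)
x = rng.normal(size=60)                        # regressor
y = -0.9 * x + rng.normal(scale=0.8, size=60)  # dependent variable

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)    # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2) # total variation in y
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f} (variation explained in y, not in x)")
```

Note that both sums of squares are built from \(y\); nothing in the statistic partitions the variation of \(x\).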

Question 6

A research analyst is reviewing supply-side indicators for the U.S. containerboard (cardboard packaging) industry. All amounts are in USD.

Exhibit: Selected industry supply indicators

  Indicator                                      2024    2025E
  Key input cost index (wood pulp; 2024 = 100)    100       78
  Effective industry capacity (million tons)     10.0     11.5
  Environmental compliance cost per ton           $12      $12

Assuming other factors are unchanged, which interpretation is most directly supported by the exhibit?

  • A. The industry supply curve likely shifted left (reduced supply).
  • B. The industry supply curve likely shifted right (increased supply).
  • C. Quantity supplied rose due to a movement along the existing supply curve.
  • D. The industry demand curve likely shifted right (increased demand).

Best answer: B

Explanation: Lower input costs and higher capacity are classic drivers of an outward shift in the supply curve.

Supply shifts reflect changes in producers’ willingness/ability to supply at each price, driven by factors like input costs, technology, and capacity. The exhibit shows a large decline in a key input cost and an increase in effective capacity, with compliance costs flat. Those are supply-side improvements consistent with an outward (rightward) supply shift.

An industry supply curve shifts when producers’ costs or productive capacity change, altering how much the industry will supply at each possible price. In the exhibit, the wood pulp cost index falls materially (from 100 to 78), lowering variable costs, and effective capacity rises (10.0 to 11.5 million tons), increasing the maximum output the industry can produce. With compliance costs unchanged, the most direct implication is that, all else equal, suppliers can profitably produce more at any given price—i.e., supply shifts right. A movement along the supply curve would require a change in the market price (not provided), and demand shifts cannot be concluded from supply-only indicators.

  • Demand inference is not supported because the exhibit contains only supply-side cost/capacity data.
  • Leftward supply shift contradicts falling input costs and rising capacity.
  • Movement along supply requires a price change; the exhibit instead shows shift drivers (costs/capacity).

Question 7

You cover a mid-cap U.S. specialty retailer and are updating your near-term demand assumption after the latest monthly macro release. The only series you can access before the morning call is a non-seasonally adjusted (NSA) retail sales index; you have 3 years of history and need a 2-quarter view. Inflation has been cooling but the Fed is still holding rates high, so the PM wants to know if a sharp January drop signals a new downtrend.

Exhibit: NSA retail sales index (Dec and Jan)

  Period       Dec    Jan
  2022–2023    132    115
  2023–2024    136    118
  2024–2025    140    121

What is the single best analytic conclusion or modeling action that fits these constraints?

  • A. Treat the January drop as a trend break and cut forecasts
  • B. Use a 3-month average of NSA levels to remove seasonality
  • C. Attribute the decline to tight policy and raise the discount rate
  • D. Use YoY or seasonally adjusted changes to infer the trend

Best answer: D

Explanation: The repeated Dec-to-Jan drop is seasonal, so trend inference should rely on seasonal adjustment or same-month YoY comparisons.

The exhibit shows a consistent Dec-to-Jan decline each year, which is classic seasonality (post-holiday normalization) rather than clear evidence of a new downturn. To separate seasonal effects from underlying trend using limited NSA data, the most defensible approach is to look at seasonally adjusted MoM (if available) or same-month year-over-year changes (Jan vs prior Jan).

Seasonality is a recurring calendar pattern that can make month-to-month changes misleading, especially around holidays and weather-sensitive periods. Here, December is repeatedly higher than January in each year, so the January “drop” is not, by itself, evidence of a deteriorating macro trend.

Given the constraint that you only have an NSA series and need a near-term call, the cleanest way to isolate trend is to compare like-with-like calendar periods or use an explicitly seasonally adjusted series:

  • Prefer seasonally adjusted MoM (or SAAR) if accessible.
  • If not, use year-over-year comparisons for the same month (e.g., Jan vs prior Jan) or a 12-month smoothing measure.

The key takeaway is to avoid interpreting a predictable seasonal swing as a structural change in consumer demand.

  • Overreacting to MoM NSA: confuses a recurring seasonal pattern with a new negative trend.
  • Short-window smoothing: a 3-month average still embeds holiday seasonality and can distort inference.
  • Policy attribution jump: changes valuation inputs without first establishing that demand weakened beyond normal seasonality.
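Using the exhibit's own figures, the like-for-like comparison can be computed directly:

```python
# December and January NSA index levels from the exhibit.
dec = {2022: 132, 2023: 136, 2024: 140}
jan = {2023: 115, 2024: 118, 2025: 121}

mom = (jan[2025] - dec[2024]) / dec[2024]  # Dec->Jan: dominated by seasonality
yoy = (jan[2025] - jan[2024]) / jan[2024]  # same-month, like-for-like

print(f"Dec->Jan MoM: {mom:+.1%} (recurring seasonal drop)")
print(f"Jan YoY:      {yoy:+.1%} (underlying trend still positive)")
```

The raw month-over-month reading is about -14%, matching the drop seen in each prior year, while January year-over-year is roughly +2.5%, so the underlying trend is modestly positive rather than breaking down.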

Question 8

You are building a like-for-like peer multiple comparison for two U.S. airlines. Company L operates mostly leased aircraft; Company O operates mostly owned aircraft.

Selected data (USD, last twelve months):

                                             Company L   Company O
  EBITDA                                        $1.2B       $1.2B
  Aircraft rent expense (operating leases)      $0.6B       $0.1B
  Net debt                                      $5.0B       $8.0B

Two analysts propose different approaches:

  • Analyst 1 compares valuation using EV/EBITDA.
  • Analyst 2 compares valuation using EV/EBITDAR and treats operating leases as debt-like (adding a lease liability to EV).

Which approach best fits a like-for-like comparison of these two airlines?

  • A. Use P/E because both firms have the same EBITDA level
  • B. Use EV/EBITDAR and add a lease liability to EV
  • C. Use price-to-sales because rent expense differences distort all profit metrics
  • D. Use EV/EBITDA because enterprise value already reflects leverage

Best answer: B

Explanation: Lease-heavy operators shift costs from depreciation/interest to rent, so EBITDAR and lease-adjusted EV improve comparability.

When one airline leases substantially more of its fleet, rent expense replaces depreciation and interest that would exist under ownership. That makes EV/EBITDA less comparable across the pair because EBITDA excludes rent. Using EV/EBITDAR (adding rent back) and treating lease obligations as debt-like in EV better aligns the operating metric and capital structure for peer comparison.

Like-for-like peer comparisons work best when the numerator and denominator reflect similar economics across companies. For airlines, fleet financing choices can materially change reported EBITDA: a lease-heavy airline shows higher rent expense (below EBITDA) and lower depreciation/interest, while an owner-heavy airline shows the opposite. Comparing EV/EBITDA can therefore penalize the lessee (lower EBITDA) even if underlying aircraft economics are similar.

A common normalization is to:

  • add rent back to EBITDA to get EBITDAR, and
  • adjust EV to include a debt-like value for operating leases (so leased and owned fleets are financed comparably).

This keeps the valuation multiple focused on operating performance rather than accounting/financing presentation differences.

  • “EV already captures leverage”: fails because EBITDA still excludes rent, so the numerator/denominator mismatch remains.
  • “P/E from equal EBITDA”: fails because net income depends heavily on depreciation, interest, and taxes, which differ under lease vs. own.
  • Price-to-sales: not targeted to the decisive difference here and can ignore major cost-structure differences across airlines.
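A minimal sketch of the normalization, using the question's figures: the equity values and the 7x rent-capitalization multiple below are assumptions for illustration; in practice, use reported lease liabilities from the balance sheet rather than a rule of thumb.

```python
# Lease-adjusted comparison (figures in $B; equity values assumed).
RENT_CAP = 7  # illustrative rule-of-thumb multiple for capitalizing rent

def ev_ebitdar(equity, net_debt, ebitda, rent):
    """EV/EBITDAR with operating leases treated as debt-like in EV."""
    ev_adjusted = equity + net_debt + RENT_CAP * rent
    return ev_adjusted / (ebitda + rent)

print(f"Company L (lease-heavy): {ev_ebitdar(4.0, 5.0, 1.2, 0.6):.1f}x")
print(f"Company O (owner-heavy): {ev_ebitdar(4.0, 8.0, 1.2, 0.1):.1f}x")
```

Under these assumed equity values, Company L comes out near 7.3x and Company O near 9.8x, whereas an unadjusted EV/EBITDA would have penalized the lessee for a cost that the owner carries below the EBITDA line instead.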

Question 9

You are initiating coverage on a U.S. value grocery chain. Your thesis is that it can offset accelerating labor costs by taking a 5%–6% average price increase over the next year while maintaining traffic because “food is non-discretionary.” Constraints: the chain serves price-sensitive households, competes locally with big-box discounters, and has limited product differentiation.

Which risk most directly challenges the pricing-power thesis based on customer demand drivers and elasticity?

  • A. One-time remodeling capex depresses near-term free cash flow
  • B. Unit volumes fall as customers trade down or switch stores
  • C. Short-term supply disruptions reduce on-shelf availability
  • D. Higher interest rates increase WACC and pressure valuation multiples

Best answer: B

Explanation: In a price-sensitive customer base with close substitutes, a price hike can trigger a disproportionate traffic/units decline, limiting revenue and margin lift.

The key question is whether customers will accept higher prices or change behavior. With low differentiation and nearby substitutes, demand is more elastic, so a 5%–6% price increase can lead to trade-down, basket shrink, and store switching that offset the intended margin benefit. That directly undermines the thesis that the company has sufficient pricing power.

Demand drivers for a discount grocer are typically price, convenience/location, and perceived value versus close substitutes. When customers are highly price sensitive and competing options are readily available, demand tends to be elastic: price increases can reduce traffic and units enough to negate expected revenue and gross margin gains. In this scenario, the company’s limited differentiation and direct competition with discounters increase the probability that shoppers respond by switching stores, buying fewer items, or shifting to private label/cheaper categories. That behavioral response is the most thesis-relevant limitation because the investment case depends on raising prices without losing enough volume to offset the higher price and any labor-cost relief.

The key takeaway is that incorrectly assuming “non-discretionary” demand implies low elasticity can lead to overstated pricing power.

  • Macro/multiple risk: affects valuation but does not directly test whether customers will accept higher prices.
  • Supply disruption: can hurt sales but is not the core limitation of a pricing-power thesis grounded in elasticity.
  • Capex timing: impacts free cash flow but is separate from whether price increases hold volume and traffic.
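A back-of-the-envelope check of the thesis under assumed demand elasticities (all values illustrative, using a linear approximation of the volume response):

```python
# Revenue impact of a 5.5% price increase under assumed elasticities.
def revenue_change(price_pct, elasticity):
    """Approximate revenue change: (1+dP)*(1+dQ) - 1, with dQ = e * dP."""
    volume_pct = elasticity * price_pct
    return (1 + price_pct) * (1 + volume_pct) - 1

for e in (-0.5, -1.0, -1.8):
    print(f"elasticity {e:+.1f}: revenue {revenue_change(0.055, e):+.2%}")
```

With inelastic demand the price increase lifts revenue, but once elasticity reaches or exceeds -1 (plausible for a low-differentiation grocer facing discounters), the hike leaves revenue flat or lower, before counting any margin drag from trade-down in basket mix.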

Question 10

You cover U.S. regulated electric utilities. A proposed EPA emissions rule would likely require additional controls or accelerated retirement for coal units starting in 2029. You have confirmed your covered company’s generation mix (about 40% coal) and identified which plants are in-scope, but you have not yet translated the regulation into your financial model.

What is the best next step in your workflow?

  • A. Wait for management to provide formal guidance
  • B. Immediately reduce the price target for higher regulatory risk
  • C. Replace forecast margins with the peer-group average
  • D. Assess cost recovery and size/timing of compliance spending

Best answer: D

Explanation: After confirming applicability, the next step is to map the rule to incremental capex/opex and whether/when it can be recovered in customer rates.

Once the analyst has verified the regulation applies to the company’s assets, the next step is to translate it into model drivers. For a regulated utility, that means estimating the incremental compliance spend and determining how much is recoverable through rates (and with what timing), which drives revenue requirement, cash flow, and valuation.

The core task is mapping a sector regulatory development to specific revenue and cost impacts using primary sources and the sector’s economic mechanics. After confirming plant-level applicability, quantify what the rule changes (incremental environmental capex, incremental opex, retirements/decommissioning timing) and then determine how those items flow through a regulated utility’s P&L and cash flows. In practice, that involves (1) pulling company disclosures on environmental capex/commitments and plant plans, and (2) checking state commission constructs that determine recovery (rate base treatment, riders/trackers, lag, and any disallowance risk). Only then should you refresh forecast line items (capex, depreciation, rate base growth, allowed return/revenue requirement, FCF) and document assumptions and sensitivities. A price-target change before sizing and recovery analysis is premature.

  • Premature conclusion: cutting the price target skips estimating compliance magnitude and recoverability.
  • Wrong driver: peer-average margins ignore regulated cost pass-through mechanics.
  • Delay: waiting for guidance can miss the initial market impact; filings and rate constructs are already available.

Continue with full practice

Use the Series 86 Practice Test page for the full Securities Prep route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Use the Series 86 Cheat Sheet on SecuritiesMastery.com when you want a compact review before returning to the FINRA Series 86 Practice Test page.

Revised on Sunday, May 3, 2026