Try 45 free SAFe Agilist questions across the exam domains, with answers and explanations, then continue in PM Mastery.
This free full-length SAFe Agilist practice exam includes 45 original PM Mastery questions across the exam domains.
The questions are original PM Mastery practice questions aligned to the exam outline. They are not official exam questions and are not copied from any exam sponsor.
Count note: this page uses a 45-question full-length practice format for the SAFe Agilist route. Always confirm current exam count, timing, passing score, and renewal rules directly with Scaled Agile.
Set a 90-minute timer and answer all 45 questions before reading the explanations. Track each miss by area: SAFe mindset, team/ART flow, PI Planning, Lean Portfolio Management, or leading change.
Use this page as a Leading SAFe diagnostic, not as the only measure of readiness. The most useful result is the pattern behind your misses.
| Result pattern | What it usually means | Next step |
|---|---|---|
| Strong score and misses are scattered | Your broad SAFe model may be stable. | Review explanations and protect timing. |
| Many mindset or values misses | Gaps remain in Lean-Agile fundamentals. | Revisit Lean-Agile principles, core values, and leadership behavior. |
| Many flow or ART misses | Your delivery-flow model is weak. | Drill PI Planning, ART backlog, customer-centricity, built-in quality, and delivery flow. |
| Many LPM misses | Portfolio-level concepts are not yet solid. | Review portfolio vision, epics, strategy connection, portfolio flow, and economic prioritization. |
| AI-related items feel generic | AI use is not yet connected to SAFe practice. | Tie AI use back to SAFe responsibilities, human oversight, and better decision quality. |
Record the results of your timed pass:

| Field | Record |
|---|---|
| Overall score | ___ / 45 questions |
| Timing result | Finished early / on time / rushed late |
| Highest-miss area | mindset / team agility / flow / LPM / leading change |
| Most expensive mistake type | wrong level of SAFe / weak flow logic / poor LPM distinction / leadership anti-pattern / other: ___ |
For concept review before or after this set, use the SAFe Agilist guide on PMExams.com.
This static page is useful for one diagnostic pass. PM Mastery is better for repeated practice because it gives you varied timed attempts, focused SAFe drills, explanations, and progress history instead of one page you can memorize.
Use these checkpoints to pace the 90-minute pass:

| Checkpoint | Approximate time budget | What to do |
|---|---|---|
| Questions 1-15 | 30 minutes | Keep SAFe level and role context visible. |
| Questions 16-30 | 60 minutes cumulative | Watch for flow, ART, and portfolio trade-off traps. |
| Questions 31-45 | 90 minutes cumulative | Finish with enough time to review marked LPM and leadership items. |
If you retake this free diagnostic, treat the second attempt as a reasoning check rather than a fresh score. Give more weight to varied timed attempts in PM Mastery than to repeating one static page.
Exam facts for the SAFe Agilist route:

| Item | Detail |
|---|---|
| Issuer | Scaled Agile |
| Exam route | SAFe Agilist |
| Official exam name | AI-Empowered SAFe Agilist (SA) (Leading SAFe) |
| Full-length set on this page | 45 questions |
| Exam time | 90 minutes |
| Topic areas represented | 6 |
Topic weighting used for this 45-question set:

| Topic | Approximate official weight | Questions used |
|---|---|---|
| Adapting and Thriving with SAFe | 8% | 4 |
| SAFe Mindset, Values, and Principles | 21% | 9 |
| Establishing Team and Technical Agility | 8% | 4 |
| Product Development Flow | 27% | 12 |
| Exploring Lean Portfolio Management (LPM) | 28% | 12 |
| Leading the Change | 8% | 4 |
Topic: SAFe Mindset, Values, and Principles
An ART has two upcoming features that each take about two iterations.
The train sequences Feature 1 before Feature 2 based on Cost of Delay. After Feature 1 is released, what is the best evidence/indicator that this sequencing decision is improving outcomes?
Best answer: D
What this tests: SAFe Mindset, Values, and Principles
Explanation: Cost of Delay is an economic measure, so the strongest validation is evidence that earlier delivery created earlier economic benefit. For a penalty-avoidance feature, that means tracking the actual penalty dollars avoided (or exposure reduced) once the feature is released. This directly confirms that sequencing by higher Cost of Delay improved outcomes.
Cost of Delay (CoD) describes the economic impact of waiting to deliver a capability (e.g., penalties, lost revenue, missed opportunity). When CoD drives sequencing, the right validation is outcome-based and economic: did delivering the higher-CoD item first reduce real economic loss sooner?
Good validation signals are objective and tied to the economic hypothesis, such as:

- Actual penalty dollars avoided (or penalty exposure reduced) after the release
- Earlier realized revenue, cost savings, or opportunity captured because the feature shipped sooner
- Outcome measures that map directly to the Cost of Delay estimate used for sequencing
Activity outputs (more stories, higher velocity, better planning artifacts) may indicate progress, but they do not validate that CoD-based sequencing improved business outcomes.
Cost of Delay is economic, so validating sequencing should use realized economic impact (penalty avoided) from delivering earlier.
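The economics behind CoD-based sequencing can be sketched with a small calculation. The weekly penalty figures below are hypothetical, purely for illustration of why the higher-CoD feature should go first:

```python
# Hypothetical Cost of Delay comparison for two features that each
# take two iterations (assume 2-week iterations, so 4 weeks each).
# cod_per_week is the economic loss per week of NOT having the feature.
features = {
    "Feature 1": {"duration_weeks": 4, "cod_per_week": 50_000},
    "Feature 2": {"duration_weeks": 4, "cod_per_week": 20_000},
}

def total_delay_cost(order):
    """Sum each feature's CoD over the weeks it waits before release."""
    elapsed, cost = 0, 0
    for name in order:
        elapsed += features[name]["duration_weeks"]
        cost += elapsed * features[name]["cod_per_week"]
    return cost

high_cod_first = total_delay_cost(["Feature 1", "Feature 2"])
low_cod_first = total_delay_cost(["Feature 2", "Feature 1"])
print(high_cod_first, low_cod_first)  # 360000 480000
```

After release, validation means comparing the penalty dollars actually avoided against the projection that justified the sequencing, not counting activity outputs.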
Topic: SAFe Mindset, Values, and Principles
An ART’s lead time is growing, and work sits in review/testing queues while defects increase. Leaders want to “go faster” in the next PI without weakening built-in quality or alignment. Which action best reflects the purpose of the House of Lean and its pillars to optimize value delivery and flow?
Best answer: A
What this tests: SAFe Mindset, Values, and Principles
Explanation: The House of Lean exists to achieve sustainable shortest lead time and best quality by combining respect for people and culture with flow, innovation, and relentless improvement. Limiting WIP and swarming reduces queues and improves flow, while team-led experiments create a learning culture that improves quality without adding heavy handoffs or unmanaged backlogs.
The House of Lean guides Lean-Agile leaders to optimize the whole system for value delivery: fastest sustainable flow of value with high quality. In the scenario, growing lead time and review/testing queues signal excessive WIP and delayed feedback, which increases defects and wait states.
Applying the pillars means:

- Limiting WIP and swarming to drain the review/testing queues (flow)
- Letting teams run small, team-led improvement experiments (innovation and relentless improvement)
- Empowering the people closest to the work rather than adding gates or pressure (respect for people and culture)
Adding more work-in-process, more gates, or more pressure typically increases queues and delays learning, which worsens both flow and quality.
It improves flow by reducing queues, respects people through empowerment, and drives innovation and relentless improvement while preserving quality.
Topic: Adapting and Thriving with SAFe
A financial services company is losing customers to digital-first competitors. Market expectations and user behavior change quickly, but the Agile Release Train (ART) currently releases only at the end of each PI, and most decisions are based on upfront assumptions.
Which action best optimizes value delivery and flow while maintaining quality, alignment, and necessary guardrails in this digital environment?
Best answer: B
What this tests: Adapting and Thriving with SAFe
Explanation: In the digital age, uncertainty is high and change is rapid, so decisions based on assumptions decay quickly. Optimizing flow requires shortening the learning cycle with small batches and frequent feedback from real users and operational data. This supports adaptation while keeping alignment through ART objectives and quality through Built-In Quality practices.
The digital age amplifies volatility (new entrants, shifting customer expectations, and fast technology cycles), which increases the cost of waiting to learn. SAFe optimizes for rapid learning by reducing batch size and cycle time so assumptions can be tested early and plans can be adjusted without large rework.
Practically, this means:

- Releasing in small batches more frequently than once per PI
- Gathering feedback from real users and operational data after each release
- Maintaining alignment through ART objectives and quality through Built-In Quality practices
Big-batch, late-integration, and heavy change-control approaches typically create queues and delayed feedback, making adaptation slower and riskier.
Frequent, small releases create fast feedback loops so the ART can learn and adapt quickly without sacrificing Built-In Quality.
Topic: Adapting and Thriving with SAFe
An organization is trying to improve overall Business Agility and has launched several Agile Release Trains. Senior leaders ask what they should do to enable the other business agility competencies (like technical agility, Agile product delivery, enterprise solution delivery, and Lean portfolio management).
Which leadership action is NOT aligned with SAFe in this context?
Best answer: A
What this tests: Adapting and Thriving with SAFe
Explanation: Lean-Agile Leadership enables other competencies by modeling the Lean-Agile mindset, creating an environment for learning, and empowering people to make local decisions within guardrails. Actions that increase centralized control and constrain teams’ ability to adapt tend to slow flow and weaken adoption across the organization. The misaligned choice is the one that reinforces command-and-control rather than enabling autonomy and continuous improvement.
Lean-Agile Leadership is the competency that activates the others because leaders shape the system: culture, structure, funding, and decision-making. In SAFe, leaders enable Business Agility by modeling desired behaviors, developing people, and creating clarity and guardrails so teams can move fast while staying aligned.
Leaders typically enable other competencies by:

- Modeling the Lean-Agile mindset and behaviors they expect from teams
- Developing people and creating an environment for learning
- Setting clear guardrails so decisions can be decentralized safely
A leadership approach that mandates detailed compliance and adds approval layers limits empowerment and learning, making it harder for ARTs and the portfolio to improve flow.
Command-and-control standardization and extra approvals reduce autonomy and learning, which undermines leaders’ role in enabling the other competencies.
Topic: Establishing Team and Technical Agility
On an Agile Release Train, two Agile Teams are delivering similar customer-facing features.
Team X completes stories only when analysis, coding, testing, and automation are finished within the team by swarming as needed. Team Y completes stories when coding is done, then hands work to a separate test/automation group.
Which characteristic most strongly indicates the higher-performing cross-functional Agile Team?
Best answer: C
What this tests: Establishing Team and Technical Agility
Explanation: High-performing cross-functional Agile Teams contain (or quickly access within the team) the skills needed to deliver a slice of value to done. They swarm and share ownership across disciplines to reduce handoffs and delays. This enables fast feedback, built-in quality, and more predictable delivery.
A high-performing cross-functional Agile Team is organized to deliver customer value end-to-end, not just a functional phase of work. In SAFe, that means the team collectively owns building, testing, and integrating their work to a clear Definition of Done, using collaboration (often swarming) and T-shaped skills to avoid handoffs that create queues and delay feedback.
When a team can finish stories with built-in quality inside the team boundary, it improves flow by:

- Eliminating handoffs and the queues they create
- Shortening feedback loops so defects surface immediately
- Making delivery more predictable through shared ownership of "done"
By contrast, a “code-then-handoff” model signals functional silos, which is the opposite of cross-functional, high-performing team behavior.
A high-performing cross-functional team has the skills and shared ownership to finish work to done without relying on external functions.
Topic: Adapting and Thriving with SAFe
Midway through a PI, a new competitor release changes customer expectations and the ART must adapt quickly. The RTE wants objective evidence that the ART is learning fast enough to keep up with disruption in the digital age.
Which indicator best validates rapid learning and adaptation?
Best answer: B
What this tests: Adapting and Thriving with SAFe
Explanation: The digital age increases the rate of change, so organizations must shorten feedback loops and base decisions on real outcomes. Integrated System Demos combined with objective user telemetry show whether solutions work in the market and enable rapid backlog adjustments. This is stronger evidence of adaptation than activity counts or planning artifacts.
In the digital age, customer needs, competitors, and technology can shift quickly, so success depends on rapid learning cycles rather than long planning horizons. In SAFe, the most credible way to validate learning and adaptation is to use fast feedback loops that connect working, integrated solutions to measurable outcomes. Iteration System Demos provide inspectable evidence of progress, and telemetry (usage, conversion, defects, performance, lead time) validates whether a hypothesis is actually true in real user behavior. When that evidence directly drives backlog reprioritization and updated PI Objectives, it shows the ART can sense, learn, and respond to disruption quickly. In contrast, utilization, story points, and planning artifacts can look “green” while outcomes and learning lag.
Frequent integrated demos plus real usage data provide validated learning and enable quick pivoting based on evidence.
Topic: Leading the Change
During a problem-solving workshop, a senior leader opens by saying: “I may have contributed to this defect spike. This is a blameless session—speak freely. We’ll run a small experiment next iteration and inspect the results together.”
What is the most likely near-term impact of this leadership behavior?
Best answer: B
What this tests: Leading the Change
Explanation: By modeling blameless learning and inviting dissent, the leader increases psychological safety. That typically results in earlier surfacing of risks, defects, and systemic impediments, which the team can act on quickly. The near-term benefit is better problem discovery and faster quality improvement, not instant performance gains.
Psychological safety is created when leaders model fallibility, reduce fear of blame, and encourage experimentation and learning. In the scenario, the leader explicitly invites candid input and frames the session as learning-oriented, which makes it more likely that people will share “bad news” (hidden defects, brittle tests, unclear requirements, unrealistic commitments) early. That improved transparency supports better flow and quality quickly because the ART can identify root causes, select a small experiment, and validate it in the next iteration.
The key near-term consequence is better signal and faster corrective action; larger structural outcomes (like stabilized portfolio priorities) are indirect and take longer.
Psychological safety increases candor, so real problems are raised and addressed sooner, improving quality in the next iterations.
Topic: Product Development Flow
During a PI, an ART’s solution demo shows a new workflow packed with rarely used options. Usage analytics from the last release shows low adoption of similar “power-user” features, but teams kept adding more because “we’re already in the code.” As a SAFe Agilist, which action is NOT aligned with a customer-centric correction?
Best answer: B
What this tests: Product Development Flow
Explanation: Overbuilding is a signal to shift from output (more options) to outcomes (measurable user value). Customer-centric corrections in SAFe emphasize discovery, MVPs, and hypothesis-driven development to validate what users actually need. Continuing to add “nice-to-have” functionality because the team is already coding perpetuates waste and delays learning.
In SAFe, overbuilding is a form of waste that often appears as “gold plating” or adding capabilities without evidence they improve customer outcomes. A customer-centric correction is to shorten the learning loop: clarify the customer’s job-to-be-done, build the smallest viable slice, and measure real usage and outcomes to decide what to enhance next. Practices like customer discovery, MVPs, and hypothesis statements help teams pivot from building more to learning faster.
The misaligned choice is the one that treats completeness and sunk cost as decision drivers, which increases inventory and delays validated learning.
It reinforces overbuilding by optimizing for output and sunk cost instead of validating customer outcomes and adjusting scope.
Topic: Product Development Flow
An Agile Release Train runs a stable 2-week iteration cadence and integrates continuously. A Product Manager asks to “slow down to monthly iterations so we can release monthly,” but stakeholders also say they want the option to release a fix any time.
As the SAFe leader, what should you clarify first before recommending a change?
Best answer: B
What this tests: Product Development Flow
Explanation: Development cadence is the steady rhythm of planning and integrating work (for example, 2-week iterations), while release timing is when value is delivered to users and can be on demand. Before changing cadence, you first confirm what release timing the business actually needs and why. That clarifies whether the request is about value-driven release decisions versus altering the team’s development rhythm.
In SAFe, development cadence provides a predictable rhythm for planning, synchronization, integration, and learning (iterations, System Demos, PI events). Release timing is separate: with Release on Demand, you release when there is sufficient value and the solution is ready (often enabled by the Continuous Delivery Pipeline and appropriate controls).
In this scenario, the request to “slow down iterations to release monthly” mixes two different levers. The first clarification is the value-based release need: what event, customer outcome, or operational trigger defines when a release must happen. Once that’s clear, you can decide whether you simply adjust release policies or improve readiness, rather than changing the iteration cadence.
Release timing should be driven by value and readiness, not by changing the development cadence.
Topic: Product Development Flow
During an ART backlog prioritization workshop, a single Agile Team presents several features and provides the WSJF inputs. The team assigns very high user-business value and time criticality scores, and their features rise to the top of the ART backlog. Business Owners were not in the session, and other teams suspect WSJF is being “gamed.”
What should you verify or obtain first before changing the order of the backlog?
Best answer: D
What this tests: Product Development Flow
Explanation: WSJF prioritization depends on credible Cost of Delay inputs (user-business value, time criticality, and risk reduction/opportunity enablement). When the delivery team self-scores those inputs without Business Owner participation, the numbers can be biased and the ranking becomes unreliable. The first corrective step is to validate the Cost of Delay with the appropriate business stakeholders.
WSJF sequences work by comparing economic impact (Cost of Delay) against job size. A common misuse is letting a team that benefits from being “next” also set (or heavily influence) the Cost of Delay inputs without Business Owner/customer validation; this biases user-business value and time criticality and can make any item appear urgent.
Before reordering the ART backlog, re-establish proper WSJF hygiene by:

- Validating user-business value and time criticality with Business Owners (and customer evidence where available)
- Re-scoring the contested items relative to the rest of the backlog in a session where all teams are represented
- Confirming that job-size estimates were produced consistently across teams
Dependencies, acceptance criteria, and capacity matter for planning and feasibility, but they do not correct biased economic assumptions that skew WSJF ranking.
WSJF is easily gamed when the people who benefit from priority set Cost of Delay without business validation.
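As a reminder of the mechanics, WSJF divides Cost of Delay (the sum of user-business value, time criticality, and risk reduction/opportunity enablement) by job size. The scores below are hypothetical relative estimates:

```python
# Hypothetical WSJF ranking. Each component is a relative score
# (e.g., modified Fibonacci), agreed with Business Owners rather
# than self-assigned by one team.
features = [
    # (name, user_business_value, time_criticality, rr_oe, job_size)
    ("Feature A", 8, 13, 3, 5),
    ("Feature B", 13, 3, 8, 8),
    ("Feature C", 5, 8, 2, 2),
]

def wsjf(ubv, tc, rr_oe, size):
    cost_of_delay = ubv + tc + rr_oe  # CoD = UBV + TC + RR|OE
    return cost_of_delay / size

# Highest WSJF first: smallest jobs with the highest Cost of Delay win.
ranked = sorted(features, key=lambda f: wsjf(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(name, round(wsjf(*scores), 2))
```

Note how inflating user-business value or time criticality directly inflates CoD, which is exactly why those inputs need Business Owner validation before the ranking is trusted.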
Topic: Exploring Lean Portfolio Management (LPM)
A Portfolio Manager uses an AI assistant to draft a new portfolio epic: “Real-time personalized offers using customer behavior.” The AI recommends fast-tracking it into the next PI.
Constraints:

- The customer-behavior data involved may include restricted PII (a portfolio guardrail)
- The relevant strategic theme budget is already capped
- The Portfolio Kanban is at its epic WIP limit
What is the BEST next action?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: When AI-suggested work conflicts with portfolio guardrails or strategy, treat the output as a hypothesis, not a decision. First validate the assumptions and any data/PII implications with the right SMEs, then use the Portfolio Kanban to make the tradeoffs visible. Escalate to LPM for the budget/WIP decision in time for PI Planning.
In SAFe LPM, AI can accelerate analysis and options, but humans remain accountable for decisions and compliance. Here, the AI-proposed epic creates explicit conflicts: it may trigger restricted PII use (guardrail), it implies funding outside the capped theme budget (strategy/guardrails), and it would increase work beyond the epic WIP limit (flow). The best next action is to validate the AI output (assumptions, sources, and PII/legal/security impact) and then route the item through the Portfolio Kanban so prioritization and tradeoffs are made transparently. LPM (and the appropriate governance/SMEs) can then decide whether to stop/defer another epic to free WIP, adjust scope to fit guardrails, or reject the epic—before ARTs commit during PI Planning. The key takeaway is to adjust inputs and validate, then escalate for portfolio-level decisions rather than bypassing guardrails to move faster.
AI output must be validated against guardrails and strategy, then escalated to LPM to decide WIP/budget tradeoffs via the Portfolio Kanban.
Topic: Product Development Flow
During PI Planning, a Product Manager proposes an experiment to improve new-user onboarding on the ART. The current “problem statement” is: “Users are confused by onboarding,” and the “success metric” is: “Increase engagement.” Before teams size or commit to this work, what is the most appropriate next step?
Best answer: A
What this tests: Product Development Flow
Explanation: To run a meaningful experiment, the ART needs a problem statement that is specific about who has what problem and why it matters, and a success metric that is observable and decision-driving. “Increase engagement” is too broad and can become a vanity metric. Tightening the problem framing and defining a measurable target creates alignment before committing capacity in the PI.
In Design Thinking and hypothesis-driven development, an experiment is only useful when it tests a clearly framed problem and has an objective way to determine whether learning indicates “continue, pivot, or stop.” In the stem, both the problem statement and metric are vague, which makes estimation and PI commitment risky because teams cannot align on what “done” or “successful” means.
A good next step is to refine:

- The problem statement: which users have what specific problem, and why it matters
- The success metric: an observable, quantitative target with a time bound that supports a clear continue, pivot, or stop decision
Once those are defined, the ART can size the work and decide how to incorporate it into the PI plan.
A good experiment starts with a specific user problem framing and a quantitative, time-bounded success threshold that enables a clear learn/kill/pivot decision.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio epic was approved and received a guardrail-based budget to run an MVP. After two PIs, the MVP is in production, but the agreed success metrics show low adoption and no measurable business outcome. The Epic Owner is requesting additional funding to “finish the roadmap.”
What is the most SAFe-aligned next step?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: In SAFe, epics are funded and governed through Lean Portfolio Management using Lean Budgeting guardrails and evidence from MVP success criteria. When the epic is not delivering expected value, the next step is to inspect the hypothesis and actual outcomes and make an explicit pivot, persevere, or stop decision before changing funding. This preserves portfolio flow and focuses spend on value.
A portfolio epic is managed through the Portfolio Kanban with a clear hypothesis and MVP success criteria. When the MVP is live and the expected value is not materializing, SAFe treats this as a learning and economic decision, not a delivery-completion decision. The Epic Owner and LPM should review the actual outcomes against the success metrics and then choose the appropriate response:

- Pivot: change the approach while preserving the underlying business hypothesis
- Persevere: continue investment only if evidence suggests the outcome is still achievable
- Stop: end the epic and free budget and capacity for higher-value work
Only after that decision should additional funding or scope be considered, consistent with Lean Budgeting guardrails and portfolio flow.
LPM uses the epic’s hypothesis and MVP metrics to decide whether to pivot, persevere, or stop before allocating more funding.
Topic: Product Development Flow
In PI Planning, which SAFe term describes the 1–5 team-and-stakeholder rating used to surface low plan feasibility (often due to overcommitment) and trigger re-planning by adjusting scope, capacity, or sequencing?
Best answer: A
What this tests: Product Development Flow
Explanation: The confidence vote is the PI Planning check that asks teams and key stakeholders to rate how achievable the plan is. Low scores are a clear indicator of overcommitment or unresolved constraints. The appropriate response is to re-plan by negotiating scope, capacity, and sequencing until confidence improves.
The confidence vote is a built-in feedback mechanism in PI Planning to confirm that the PI plan is realistic. Teams (and typically Business Owners/stakeholders) vote from 1 to 5 on confidence in meeting the planned PI Objectives. When confidence is low, it’s a strong sign the train is overcommitted, has unresolved dependencies, or has insufficient capacity. The Lean-Agile response is to immediately re-plan: reduce or split scope, move lower-value work later, adjust sequencing to remove bottlenecks, and renegotiate objectives and dependencies until the plan is achievable. The goal is not to “push harder,” but to create a feasible plan with clear commitments.
A low confidence vote signals the plan isn’t achievable and prompts teams to re-plan by adjusting scope, capacity, and sequencing.
Topic: Exploring Lean Portfolio Management (LPM)
A company practicing Lean Portfolio Management holds a recurring meeting with portfolio stakeholders to review the Portfolio Kanban, visualize progress of epics, surface cross-value-stream dependencies and impediments, and adjust priorities to keep funding and execution aligned.
Which SAFe concept best matches this practice and its purpose?
Best answer: C
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: This describes the Portfolio Sync, an LPM event that enables ongoing alignment across strategy, funding, and execution. By reviewing the Portfolio Kanban and addressing dependencies and impediments, stakeholders actively manage portfolio flow. Adjusting priorities in this forum helps ensure work-in-progress remains focused on the highest-value outcomes.
Portfolio Sync is a recurring portfolio-level alignment event in Lean Portfolio Management. Its purpose is to keep strategy, funding, and execution continuously aligned while improving flow through the portfolio system. In practice, stakeholders use it to review the Portfolio Kanban (including epic progress and WIP), discuss key dependencies and impediments that span value streams, and make timely decisions such as reordering priorities or adjusting capacity allocation.
This directly supports portfolio flow by:

- Keeping epic progress and WIP visible on the Portfolio Kanban
- Surfacing and resolving cross-value-stream dependencies and impediments quickly
- Reordering priorities so funding and execution stay aligned to the highest-value outcomes
In contrast, cadence-based syncs at the ART or team level focus on execution within a train rather than across the full portfolio.
It is the LPM event used to maintain alignment and manage flow across the portfolio by reviewing work, impediments, and priorities.
Topic: Establishing Team and Technical Agility
An ART is improving flow in its Continuous Delivery Pipeline and agrees to use common flow metrics across teams. Which statement is INCORRECT?
Best answer: C
What this tests: Establishing Team and Technical Agility
Explanation: Throughput is a flow rate, expressed as how many work items are completed per unit of time. Lead time and cycle time are duration measures, expressed as elapsed time across defined start and end points. Confusing throughput with an average duration mixes rate and time measures and leads to incorrect flow conclusions.
SAFe uses basic flow measures to understand and improve delivery performance across the Continuous Delivery Pipeline. Lead time and cycle time are both elapsed-time measures, but they differ by where timing starts: lead time typically begins at the request (or commitment) and includes waiting, while cycle time begins when work starts and measures how long it takes to finish. Throughput is different: it is a rate that counts how many items are completed in a given period (for example, features per PI or stories per week). Treating throughput as "time per item" is a category error: time per item is a duration measure such as cycle time, while throughput is a completion rate.
Throughput is a rate (items/time), while average time per item is a time-based measure such as cycle time (or lead time, depending on start point).
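The distinction can be made concrete with a small calculation. The timestamps below are hypothetical work items, used only to show how each metric is derived:

```python
from datetime import date

# Hypothetical work items: when requested, started, and finished.
items = [
    {"requested": date(2024, 3, 1), "started": date(2024, 3, 4),  "done": date(2024, 3, 8)},
    {"requested": date(2024, 3, 1), "started": date(2024, 3, 6),  "done": date(2024, 3, 13)},
    {"requested": date(2024, 3, 5), "started": date(2024, 3, 11), "done": date(2024, 3, 15)},
]

# Lead time: request -> done (includes waiting in queues).
lead_times = [(i["done"] - i["requested"]).days for i in items]
# Cycle time: start -> done (active work plus in-progress waits).
cycle_times = [(i["done"] - i["started"]).days for i in items]

# Throughput is a RATE: items completed per unit of time, not a duration.
period_days = 14  # e.g., one 2-week iteration
throughput = len(items) / period_days  # items per day

print(lead_times, cycle_times, round(throughput, 2))
```

Note that averaging `cycle_times` gives "time per item"; dividing item count by the period gives throughput. Mixing the two is the error the incorrect statement makes.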
Topic: Leading the Change
A product group wants to “start SAFe” by scheduling PI Planning for a new ART next month. Teams have Scrum training and draft backlogs, but senior leaders are split on priorities and several managers question why the change is needed.
As the change lead, what is the most appropriate next step?
Best answer: C
What this tests: Leading the Change
Explanation: Before launching ART execution events, SAFe emphasizes establishing a clear reason for change and aligning leaders on priorities and desired outcomes. With leadership split and urgency unclear, the next step is to build shared understanding and commitment so the ART launch has stable sponsorship and direction.
Early SAFe implementation depends on leadership alignment and a compelling case for change. If leaders are not aligned on priorities or do not feel the urgency, starting PI Planning or ART-level ceremonies typically produces local alignment but weak organizational commitment, leading to churn and undermined adoption. The practical next step is to align leaders on why the change is needed, what business outcomes are expected, and how they will support the first ART launch (e.g., sponsorship, decision-making, and removing impediments). Once urgency and alignment are in place, the organization can train and empower change agents and proceed with preparing and launching the ART.
SAFe implementation starts by aligning leaders on the reason for change and building urgency before launching ART events like PI Planning.
Topic: Product Development Flow
During PI Planning, the ART focuses on feature commitments and skips the architecture breakout/discussion. Teams assume the current platform and APIs are “good enough” and finalize plans without validating cross-team architectural dependencies with the System Architect/Engineering.
What is the most likely near-term impact?
Best answer: B
What this tests: Product Development Flow
Explanation: PI Planning is where architecture considerations are surfaced early so teams can plan enablers, manage dependencies, and make realistic PI Objectives. When the ART skips this alignment, architectural gaps tend to appear as surprises once implementation starts. The result is near-term replanning, unplanned enabler work, and reduced predictability of delivery.
In SAFe, PI Planning is a key mechanism to surface and address architecture concerns before commitments are locked in. System Architect/Engineering collaborates with teams to identify architectural dependencies, NFR impacts, and needed enablers (often supported by the Architecture Runway). If those considerations are skipped, teams commonly discover missing interfaces, platform changes, or shared-component constraints during early iterations.
That drives:

- Emergent, unplanned enabler work early in the PI
- Mid-PI replanning as missing interfaces and platform changes are discovered
- Rework and renegotiated PI Objectives, reducing predictability
The near-term consequence is reduced flow and predictability, not a clean “faster delivery” outcome.
Missed architectural dependencies typically surface as emergent enablers and rework early in the PI, reducing plan reliability.
Topic: Product Development Flow
Midway through a PI, an executive tells the Product Manager: “This feature must be done by Friday.” No other context is given. Before the ART makes any commitment or replans, what should the Product Manager clarify first to distinguish ART cadence from a simple deadline?
Best answer: B
What this tests: Product Development Flow
Explanation: In SAFe, cadence means planning and executing in fixed timeboxes (iterations and PIs) and making commitments based on capacity and the current plan. When someone states a date without context, the first step is to anchor the request to the ART’s timeboxes and capacity so you can negotiate scope, sequencing, or trade-offs appropriately.
Cadence in SAFe is the predictable rhythm of timeboxed planning and execution (iterations, PI boundaries, system demos), which enables reliable forecasting and objective progress. A stated “deadline” is just a date until it’s translated into the ART’s existing timeboxes and capacity constraints.
The first information to obtain is where the request belongs in the cadence: which iteration and PI boundary the date falls within, how much team capacity remains, and what is already committed in the current plan.
Once the request is anchored to cadence, the ART can make an economic trade-off (e.g., adjust scope, swap priorities, split the work, or defer) instead of reacting to an arbitrary date.
Cadence-based commitments start by anchoring the request to PI/iteration timeboxes and available capacity, then negotiating scope or sequencing.
Topic: Leading the Change
An ART is adopting new ways of working, and engagement is dropping because teams feel the change is being “announced at them.” The RTE asks a senior leader to use AI to help communicate a clear vision and reinforce Lean-Agile leadership behaviors. The leader must also avoid sharing sensitive customer data with the AI tool.
Which approach is most SAFe-aligned?
Best answer: C
What this tests: Leading the Change
Explanation: SAFe expects leaders to communicate a compelling vision while supporting change through listening, coaching, and transparency. AI can help scale consistent messaging (talking points, FAQs) and prepare leaders to address concerns, but it should not replace authentic engagement. Keeping humans accountable and avoiding sensitive data exposure reinforces responsible, Lean-Agile leadership behaviors.
In SAFe, leading change is not a broadcast; leaders model Lean-Agile behaviors by engaging people, inviting feedback, and reinforcing shared purpose. Using AI is most effective when it augments these behaviors—helping leaders craft clear, consistent messages, anticipate questions, and support interactive communication—while the leader remains the accountable decision-maker.
A SAFe-aligned, responsible AI use pattern is: use AI with non-sensitive inputs to draft consistent talking points and FAQs, have the leader review and personalize the message, and deliver it through interactive sessions where people can ask questions and give feedback.
The key discriminator is whether AI is used to strengthen transparency and collaboration versus enforcing top-down control.
This uses AI to amplify clarity and consistency while the leader stays accountable, invites feedback, and models transparency and respect.
Topic: Exploring Lean Portfolio Management (LPM)
A company is reorganizing to improve funding and oversight. Leaders want a construct that can fund multiple development value streams with a Lean budget, apply lightweight governance, and align investments to strategic themes.
Which option best describes this construct in SAFe terms?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: In SAFe, the portfolio is the construct used to align strategy to execution by funding and governing development across one or more value streams. The key discriminator is portfolio-level Lean budgeting and investment governance, not team- or train-level coordination. That makes the value-stream-and-budget framing the best match.
A SAFe portfolio is the organizational construct that connects enterprise strategy to execution by managing investment and governance across one or more value streams. It provides Lean budgets (rather than project-by-project funding) and applies lightweight portfolio governance to ensure spending aligns to strategic themes and intended outcomes. Teams, ARTs, and Solution Trains focus on delivering value (stories, features, capabilities) and synchronizing execution, but they are not the primary construct for setting investment guardrails and governing spend across multiple value streams. The deciding factor in the scenario is the need to fund and govern multiple value streams under a Lean budget.
In SAFe, a portfolio is the level that aligns strategy and investment across one or more value streams using Lean budgets and portfolio governance.
Topic: Product Development Flow
A portfolio epic aims to “reduce account takeover by 30%.” As part of that epic, a capability called “modernized authentication” spans two ARTs over multiple PIs. In the current PI, one ART committed to deliver the feature “passwordless login.”
At the end of the PI, which evidence best validates that the ART’s feature is actually delivered (rather than just activity completed)?
Best answer: D
What this tests: Product Development Flow
Explanation: Features are ART-level deliverables intended to provide user value within a PI, so the strongest validation is integrated, working software. A System Demo provides objective, observable evidence that the feature works end-to-end and meets its acceptance criteria. This distinguishes feature-level outcomes from story-level completion and from epic/capability planning artifacts.
In SAFe, the work item types differ by scope and intent, and the best validation depends on the level. A portfolio epic is a large investment hypothesis validated by outcome metrics (often through MVPs). A capability is a larger solution behavior that may span multiple ARTs and PIs. A feature is an ART-level service or behavior that is typically deliverable within a PI. A story is a small slice of work completed within an iteration.
Because a feature represents integrated value across teams on the ART, the most credible evidence of delivery is to see it working in the integrated system (System Demo) against feature acceptance criteria. Closing stories or reporting velocity indicates activity/output, but it does not validate that the feature works end-to-end or is usable.
A feature is validated by integrated, working functionality demonstrated at the ART level, not by plan or output metrics.
Topic: Establishing Team and Technical Agility
Midway through a PI, an ART’s System Demo repeatedly fails because teams complete stories but the integrated solution breaks in end-to-end testing. Teams also need sporadic help from security and performance testing specialists, but engagement is inconsistent. As the RTE, what is the most appropriate next step to improve integration, testing, and release readiness across the ART?
Best answer: B
What this tests: Establishing Team and Technical Agility
Explanation: In SAFe, the System Team exists to make frequent integration, system-level testing, and demo/release readiness possible for the whole ART. Shared Services provide specialized skills (e.g., security, performance) across multiple teams so that integration and compliance/testing needs are addressed continuously rather than ad hoc. Coordinating both directly targets the ART’s failing System Demo and release readiness issues.
The core issue is not that teams aren’t finishing stories; it’s that the ART lacks reliable end-to-end integration and system-level validation. The System Team supports the ART by providing and evolving the integration and test environments, continuous integration practices, and system-level testing needed to keep the Solution Integrated and always demoable. Shared Services are the cross-cutting specialists (such as security and performance testing) who collaborate with multiple teams to ensure those concerns are addressed during development, not deferred.
A practical next step is to align the ART around a consistent integration/test approach: engage the System Team to stabilize the integration and test environments and strengthen continuous integration, and arrange predictable engagement from Shared Services so security and performance testing happen during development rather than ad hoc.
This improves system demo stability and release readiness without deferring quality to the end of the PI.
The System Team enables frequent integration and system-level testing, while Shared Services provide specialized help across teams to improve end-to-end readiness.
Topic: Exploring Lean Portfolio Management (LPM)
In the portfolio Kanban, a proposal to launch a new customer self-service claims portal is submitted with clear goals to reduce call-center volume and improve customer satisfaction. Because it requires significant API and platform work, leadership classifies it as an enabler epic and assigns it to an architecture group, asking for a technical plan rather than a benefit hypothesis and MVP.
What is the most likely near-term impact?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: A business epic should be managed and measured by intended business outcomes, using a benefit hypothesis and MVP to validate value. Treating it as an enabler epic shifts attention to technical delivery and makes it harder to align decisions to customer and business results in the near term.
Business epics are intended to deliver direct business outcomes (e.g., revenue, cost reduction, customer satisfaction) and should be framed with a benefit hypothesis, leading indicators, and an MVP for validation. Enabler epics primarily build capabilities—architecture, infrastructure, compliance, or exploration—that support other value delivery, and their success is often measured by how they remove constraints, reduce risk, or improve flow.
When a business epic is mislabeled as an enabler, the near-term consequence is that the work is managed as technical output rather than validated customer value. This reduces alignment to strategic outcomes, weakens prioritization signals, and increases the chance of building the “right” technology for the wrong (or unvalidated) problem.
Misclassifying a business epic as an enabler shifts focus to technical output, weakening the benefit hypothesis/MVP and reducing near-term alignment to measurable business outcomes.
Topic: SAFe Mindset, Values, and Principles
An Agile Release Train has missed several PI objectives in a row. WIP is growing, predictability is dropping, and teams report significant rework from late discovery of customer needs and technical constraints.
In response, Product Management has removed the Innovation and Planning iteration and expects teams to use 100% of capacity on feature delivery; experiments and spikes are frequently rejected as “not committed work.”
What is the most likely underlying cause in SAFe terms?
Best answer: B
What this tests: SAFe Mindset, Values, and Principles
Explanation: The symptoms point to slow feedback and late learning: discoveries happen after work is already in progress, creating rework and unreliable delivery. In SAFe, innovation is supported by exploration time, empowerment to run small experiments, and fast learning cycles. Removing that capacity and pushing 100% utilization creates a feature-factory dynamic that undermines outcomes and predictability.
In SAFe, innovation and better outcomes come from allocating time for exploration, empowering teams to test assumptions, and learning quickly with short feedback cycles. The stem’s key clue is the decision to remove the Innovation and Planning iteration and reject spikes/experiments in favor of 100% feature utilization. That behavior delays discovery of customer needs and technical constraints until late, which increases rework, grows WIP, and reduces predictability—ultimately leading to missed PI objectives.
A practical SAFe-minded correction is to restore capacity for learning (e.g., innovation time, enablers, spikes), enable teams to run small experiments, and use frequent integration and feedback (system demos) to learn earlier. The takeaway: optimizing for utilization reduces flow and learning, which harms delivery and innovation.
Eliminating exploration time and discouraging experiments prevents fast learning, causing late discovery, rework, and missed objectives.
Topic: Product Development Flow
An ART has completed a feature and successfully deployed it to production behind a feature toggle. A marketing leader asks you to send an email today announcing, “The new capability is now available to all customers.”
Before agreeing to the announcement, what is the most important question to clarify first?
Best answer: D
What this tests: Product Development Flow
Explanation: In Release on Demand, deploying to production does not automatically mean the capability is released to users. Marketing and release communication must align to the business release decision and the actual availability state (for example, which customers have the toggle enabled). Clarifying that release scope prevents announcing functionality that customers cannot yet access.
SAFe distinguishes technical deployment readiness (can we safely deploy?) from release communication (should we tell users it’s available?). With Release on Demand, teams often deploy capabilities to production earlier and control exposure with mechanisms like feature toggles, limited rollouts, or configuration.
Before sending customer-facing communication, first confirm the release decision and availability:
Technical readiness and marketing assets matter, but they are secondary to the fundamental question: is the capability truly available to the audience you intend to notify?
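The deploy/release distinction can be made concrete with a small toggle sketch. The flag store, flag name, and audience rule below are illustrative assumptions, not any specific vendor's API:

```python
# Minimal sketch of decoupling deployment from release with a feature
# toggle. The toggle store and audience model are illustrative assumptions.

TOGGLES = {
    # Deployed to production, but released only to a pilot segment.
    "new_capability": {"enabled": True, "audiences": {"pilot"}},
}

def is_released(feature: str, audience: str) -> bool:
    """True only if the flag is on AND this audience is in scope."""
    flag = TOGGLES.get(feature)
    return bool(flag and flag["enabled"] and audience in flag["audiences"])

# The code is deployed for everyone, but 'all_customers' sees nothing yet,
# so announcing "now available to all customers" would be premature.
print(is_released("new_capability", "pilot"))          # True
print(is_released("new_capability", "all_customers"))  # False
```

Flipping the business release "on" is then a configuration change (widening `audiences`), made when the business chooses, independent of the deployment that already happened.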
Release communication should reflect actual customer availability, which may differ from technical deployment when using toggles.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio leadership team is using OKRs to connect strategy to execution in Lean Portfolio Management. They want OKRs that express measurable outcomes and help align ARTs and solutions to the portfolio’s strategic themes.
Which OKR statement is NOT aligned with SAFe guidance for using OKRs in this context?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: In SAFe, OKRs are used to express desired outcomes and create alignment to strategic themes, favoring measures of value and behavior change over lists of deliverables. A key result should be a measurable outcome (customer, business, or operational), not an activity or scope commitment. The microservices example is primarily a delivery output, so it does not serve the intent of OKRs for strategic alignment.
In Lean Portfolio Management, OKRs help connect enterprise strategy (often expressed through strategic themes) to what ARTs and solution trains pursue by stating clear outcomes and how success will be measured. Well-formed OKRs separate the intent (Objective) from measurable evidence of progress (Key Results).
Key results should describe observable changes in outcomes such as customer experience, revenue, risk, quality, or operational performance (e.g., cycle time, reliability). They should avoid being a disguised plan or scope promise (features shipped, components built), because outputs do not necessarily demonstrate that the strategic outcome was achieved. A useful check is whether the key result could be met while delivering little or no customer/business value; if so, it is likely output-based rather than outcome-based.
This describes an output/deliverable rather than a measurable business outcome aligned to strategic themes.
Topic: Product Development Flow
Late in a PI, a critical defect is found during final validation of a release that is scheduled for tomorrow. Fixing it may require a short delay, and stakeholders are pressuring the ART to “release anyway.”
Before deciding whether to release, delay, or apply a workaround, what is the FIRST thing you should verify or obtain?
Best answer: C
What this tests: Product Development Flow
Explanation: In Release on Demand, the decision to ship with a known defect must be grounded in customer impact and whether the solution still meets agreed release acceptance criteria (including critical NFRs). This information enables an economic trade-off conversation (risk vs. cost of delay) and preserves Built-in Quality.
When a serious defect is discovered late, SAFe biases toward Built-in Quality and fast feedback, not shipping “at all costs.” The first step is to clarify whether releasing would violate explicit release acceptance criteria (including critical NFRs such as security, safety, reliability, or compliance) and what the customer/business impact would be if it escapes.
With that information, leaders can make an economic decision (e.g., delay briefly to fix, release a reduced scope, or apply a safe workaround/feature toggle) while keeping accountability for quality and risk. Without clear impact and acceptance criteria, the discussion becomes opinion-based and often leads to avoidable production incidents.
You need the defect’s business/customer impact and acceptance-criteria implications to make an economic, risk-based release decision.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio is using a portfolio canvas to align leaders around the portfolio vision for its value streams. Which statement is NOT a purpose of the portfolio canvas in SAFe?
Best answer: B
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: The portfolio canvas is a lightweight, visual way to communicate and align on the portfolio’s intent—who it serves, what value streams matter, and what outcomes are sought. It supports shared understanding and decision-making across portfolio stakeholders. It does not take over the role of flow-based execution controls used to manage and approve epics.
In SAFe Lean Portfolio Management, the portfolio canvas is used to make the portfolio vision visible and easy to discuss. It helps leaders and stakeholders communicate the portfolio’s intent—such as the customers served, the value streams that deliver value, and the outcomes and strategic direction the portfolio is pursuing. This shared view supports alignment, prioritization conversations, and consistent decision-making across ARTs and solution trains.
The portfolio canvas is not an execution governance mechanism; it complements (rather than replaces) flow systems like the portfolio Kanban, which manages epic discovery, analysis, and approval. The key takeaway is that the portfolio canvas is for communicating vision and alignment, while Kanban systems are for managing the work.
The portfolio canvas helps communicate and align, while the portfolio Kanban is the mechanism used to capture, analyze, and approve epics.
Topic: Leading the Change
In a SAFe transformation, which option best defines coaching as a change lever?
Best answer: C
What this tests: Leading the Change
Explanation: Coaching is the lever focused on changing day-to-day behaviors and skills through guided practice. It happens in the context of real work and relies on observation, feedback, and reinforcement. This differs from training (knowledge transfer) and structural change (changing the system people work within).
Coaching in a SAFe transformation is a practical, people-centered lever used to help individuals, teams, and leaders adopt Lean-Agile behaviors. It is typically delivered by experienced coaches and change agents who observe how work is being done, provide feedback, model practices, and help remove local impediments so the new ways of working “stick.”
By contrast, training primarily transfers knowledge (concepts, roles, and practices) and is often classroom-based, while structural change alters the environment in which people work (e.g., organizing around value streams, adjusting governance, funding, decision rights, or incentives). The key distinction is that coaching targets behavior change through guided application, not just information or org redesign.
Coaching develops capabilities by supporting people in real work, reinforcing desired Lean-Agile behaviors with timely feedback.
Topic: Establishing Team and Technical Agility
Which DevOps practice is defined as keeping work in small batches and merging changes frequently into a shared code line to minimize merge conflicts and keep the delivery pipeline flowing?
Best answer: C
What this tests: Establishing Team and Technical Agility
Explanation: Continuous Integration emphasizes small, frequent merges into a shared code line, typically backed by automated builds and tests. This reduces merge risk by shrinking the scope of each integration and exposes integration problems early, which improves flow through the pipeline.
Continuous Integration (CI) is a core DevOps engineering practice where developers integrate their code changes frequently—ideally many times per day—into a shared baseline. By working in small batches, each merge contains fewer changes, making conflicts and integration defects less likely and easier to diagnose. CI is commonly supported by automated builds and fast test feedback so integration problems are detected quickly, reducing rework and keeping work items moving smoothly through the Continuous Delivery Pipeline. The key distinction is that CI focuses on frequent integration and verification, not on automatically releasing to users.
CI integrates small changes frequently into a shared baseline to reduce merge risk and enable smoother flow through build and test.
Topic: Product Development Flow
An Agile Release Train has missed multiple PI objectives in a row. WIP keeps growing, predictability is poor, teams report frequent rework, and stakeholders complain about misalignment.
At the end of each PI, the ART holds an “Inspect and Adapt” that consists of a System Demo and a review of PI objectives, but they skip the quantitative measurement review and do not run the problem-solving workshop or create an improvement backlog.
What is the most likely underlying cause (in SAFe terms) of these issues persisting PI after PI?
Best answer: A
What this tests: Product Development Flow
Explanation: Inspect and Adapt supports learning at the end of a PI by combining evidence (metrics and results) with structured root-cause problem solving to create actionable improvements. In this scenario, the ART is only inspecting outcomes via demos and objective reviews, but not adapting based on data and root causes. That breaks the feedback loop, so the same systemic problems repeat each PI.
Inspect and Adapt is the ART’s end-of-PI learning event that turns results into actionable change. It’s more than a demo: it closes the feedback loop by looking at outcomes and flow/quality evidence, then running a structured problem-solving workshop to identify root causes and agree on improvement items.
When the ART skips the quantitative measurement review and the problem-solving workshop, they may observe missed objectives and misalignment but won’t diagnose why it happened or create owned improvement work for the next PI. The result is predictable: recurring WIP growth, rework, and poor predictability because systemic constraints are never removed.
A System Demo alone inspects the integrated solution, but it does not ensure learning and adaptation at the ART level.
Without quantitative review and structured problem-solving, systemic issues aren’t identified, owned, and addressed in the next PI.
Topic: Product Development Flow
A team on an Agile Release Train has completed a feature, passed automated tests, and can safely push it to production behind a feature toggle. Marketing asks to delay the public announcement and customer enablement until a conference next month.
Which SAFe concept best matches this situation?
Best answer: C
What this tests: Product Development Flow
Explanation: This is an example of Release on Demand: the feature can be technically deployed when it is safe and validated, while the business can choose a later moment for the market-facing release. Marketing activities such as announcements and enablement are part of release communication, not deployment readiness.
In SAFe’s Release on Demand, a solution can be deployed to production when it meets technical deployment readiness (e.g., validated, monitored, reversible, and often controlled with feature toggles). The business-facing release is separate: it includes marketing communications, customer training, and change management, and can be scheduled for the best market timing. In the scenario, the team can deploy safely now, while marketing intentionally delays the public launch until a conference. The key idea is to decouple deployment from release to improve flow while still supporting coordinated go-to-market needs.
SAFe separates technical deployment readiness from market-facing release communication so value can be deployed earlier and released when the business chooses.
Topic: SAFe Mindset, Values, and Principles
Mid-PI, the ART’s System Demo shows a sharp increase in defects and frequent test failures. Teams warn that continuing at the current pace will likely miss PI objectives. As a SAFe leader, which action is most SAFe-aligned when quality issues threaten PI objectives?
Best answer: D
What this tests: SAFe Mindset, Values, and Principles
Explanation: In SAFe, Built-in Quality is a core value and is treated as non-negotiable because poor quality creates rework, slows flow, and jeopardizes delivery. When defects threaten PI objectives, the best response is to stop adding scope and address the root causes through immediate collaboration and improved engineering practices. This protects the ability to reliably meet PI commitments and sustain predictable delivery.
The SAFe Core Value of Built-in Quality means quality is designed and verified throughout development, not “tested in” at the end. When defect trends and failing tests indicate the system is destabilizing, continuing to add features typically increases rework and reduces the ART’s ability to achieve PI Objectives.
A SAFe-aligned response is to: stop starting additional scope, swarm on the failing tests and defect trends, strengthen practices such as test automation and the Definition of Done, and resume feature work once the system is stable again.
This is the most direct way to restore flow and execution predictability; transparency and alignment help, but they do not substitute for correcting systemic quality issues.
Built-in Quality is non-negotiable, so teams stop and address systemic quality issues before adding more scope.
Topic: SAFe Mindset, Values, and Principles
An ART’s lead time is increasing. A manager responds by tracking each specialist’s utilization, assigning extra work whenever someone is “idle,” and starting more features so every role stays near 100% busy.
Which concept best matches this approach?
Best answer: C
What this tests: SAFe Mindset, Values, and Principles
Explanation: This is utilization-focused optimization: maximizing resource efficiency by keeping people busy and starting more work. Lean thinking instead optimizes the whole system for flow of value, typically by limiting WIP and reducing queues and handoffs. The utilization approach often increases delays even when everyone appears “fully utilized.”
Traditional utilization-focused optimization measures success by how busy each person or role is and tries to avoid any idle time by starting more work. In knowledge work, that local optimization increases WIP, queues, multitasking, and handoffs—often making end-to-end lead time and predictability worse.
Lean thinking takes a system view and optimizes for fast, smooth flow of value to the customer. It emphasizes limiting WIP, reducing batch sizes and wait states, and improving the bottleneck so work finishes sooner instead of merely keeping everyone busy. The key distinction is optimizing resource efficiency (busy people) versus optimizing flow efficiency (faster value delivery).
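The flow argument can be grounded in Little's Law (average lead time = average WIP ÷ average throughput): if throughput is unchanged, starting more work only lengthens lead time. A minimal sketch with illustrative numbers:

```python
# Little's Law: for a stable system,
#   average lead time = average WIP / average throughput.
# This shows why filling "idle" time with new work slows delivery.

def avg_lead_time(wip_items: float, throughput_per_week: float) -> float:
    """Average lead time in weeks for a stable system (Little's Law)."""
    return wip_items / throughput_per_week

# Same ART, same throughput (8 items/week); only the amount of
# started-but-unfinished work differs.
print(avg_lead_time(16, 8))  # 2.0 weeks with limited WIP
print(avg_lead_time(48, 8))  # 6.0 weeks when every "idle" gap is filled
```

Everyone looks busier in the second case, yet each item takes three times longer to reach the customer, which is exactly the lead-time growth the scenario describes.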
It optimizes for keeping individuals busy (local efficiency) rather than improving end-to-end flow.
Topic: SAFe Mindset, Values, and Principles
An ART’s teams are delivering, but mainly when leaders add extra reporting, escalations, and end-of-PI bonuses tied to “percent complete.” The LACE wants a SAFe-aligned change that shifts motivation toward what works best for knowledge work.
Which action best increases intrinsic motivation (rather than extrinsic, compliance-driven motivation)?
Best answer: D
What this tests: SAFe Mindset, Values, and Principles
Explanation: Intrinsic motivation in knowledge work is strengthened by autonomy, mastery, and purpose, not by pressure or rewards. Shifting focus to meaningful outcomes and giving teams room to decide how to deliver them encourages engagement and ownership. This aligns with SAFe’s emphasis on empowered teams and decentralized decision-making.
In SAFe, knowledge workers are most effective when they are intrinsically motivated—driven by purpose, learning/mastery, and autonomy—rather than by extrinsic controls like bonuses, threats, and heavy reporting. A practical way to shift motivation is to make the “why” explicit (customer and business outcomes) and then empower the people closest to the work to decide “how” to meet agreed goals, such as PI Objectives.
Extrinsic, compliance-driven approaches can create local optimization (e.g., chasing points or “percent complete”), reduce transparency, and discourage problem solving. The key takeaway is to design the system of work to enable empowered, outcome-focused teams rather than to enforce compliance.
Purpose and autonomy support intrinsic motivation in knowledge work, aligning to SAFe’s Lean-Agile mindset and decentralized decision-making.
Topic: Exploring Lean Portfolio Management (LPM)
An organization has adopted Agile Release Trains, but portfolio results are deteriorating: PI Objectives are frequently missed, there are many partially funded initiatives in progress, lead time for epics is unpredictable, teams report rework due to shifting priorities, and different ARTs interpret “the top initiative” differently.
Which underlying cause is most likely, in SAFe terms?
Best answer: D
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: The portfolio Kanban exists to visualize, manage, and improve the flow of epics from idea through analysis, approval, implementation, and completion. When epics bypass that system (or it lacks explicit WIP limits and decision policies), the portfolio starts too much work, priorities shift, and alignment breaks down. The result is predictable: growing WIP, longer lead times, and reduced predictability for PI outcomes.
Portfolio Kanban is the mechanism in Lean Portfolio Management for governing and improving the flow of epics. By making work visible across defined states and applying explicit policies (including WIP limits), it helps the portfolio limit how much work starts, sequence epics by economic priority, and keep leaders and ARTs aligned on what the top initiatives actually are.
In the scenario, the combination of many partially funded initiatives, shifting priorities, and inconsistent interpretation across ARTs points to uncontrolled epic intake and weak alignment—classic signs that epics are not being managed through an effective portfolio Kanban system.
Without portfolio Kanban visibility and WIP limits, too many epics start, priorities churn, and intent/alignment remain unclear—reducing epic flow and predictability.
Topic: Adapting and Thriving with SAFe
Midway through a PI, an ART is experiencing growing queues and frequent handoffs. A leader decides to stop reviewing flow metrics (WIP, blocked work age, and cycle-time trends) and instead “steer the PI” using only last quarter’s NPS and end-of-PI escaped defect counts.
What is the most likely near-term impact of this decision?
Best answer: A
What this tests: Adapting and Thriving with SAFe
Explanation: WIP, blocked work age, and cycle-time trends are leading indicators that provide early warning and are actionable within the PI. Replacing them with lagging indicators like NPS and escaped defects shifts visibility to outcomes that arrive too late to steer day-to-day flow. The near-term consequence is slower detection and response to bottlenecks.
Leading indicators help you predict and influence outcomes while there’s still time to adjust. In flow, examples include WIP, aging work items, queue size, blocked work, and cycle-time trends—signals you can act on immediately to reduce delays and improve throughput. Lagging indicators (for example, NPS, revenue, escaped defects, or end-of-PI results) confirm what already happened and are valuable for learning, but they are not effective for mid-PI steering.
By steering only with lagging indicators, the ART loses timely feedback about where work is getting stuck, so bottlenecks and excessive WIP are likely to persist longer before corrective action is taken.
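The difference is easy to see in a small sketch that computes two leading flow signals a leader can act on today. The item fields below are illustrative assumptions, not any specific tool's schema:

```python
# Sketch: leading flow indicators (current WIP, blocked-work age) computed
# from in-flight items. Field names and dates are illustrative assumptions.
from datetime import date

items = [
    {"id": "F-101", "state": "in_progress", "blocked_since": None},
    {"id": "F-102", "state": "in_progress", "blocked_since": date(2024, 5, 1)},
    {"id": "F-103", "state": "done",        "blocked_since": None},
]

today = date(2024, 5, 10)

wip = sum(1 for i in items if i["state"] == "in_progress")
blocked_ages = [
    (today - i["blocked_since"]).days
    for i in items
    if i["blocked_since"] is not None
]

# These signals are available mid-PI; this PI's NPS and escaped-defect
# counts would only arrive after the PI ends.
print(wip)                # 2 items in progress
print(max(blocked_ages))  # 9 days blocked: actionable now
```

A nine-day-old blocker is something the RTE can escalate this week; a quarterly NPS score cannot tell anyone where work is stuck right now.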
NPS and escaped defects are lagging indicators, so dropping leading flow signals delays detection and correction of bottlenecks.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio identifies a key value stream: “Quote-to-Bind” for commercial insurance. However, development is organized in functional departments (UX, API, database, test automation, security review). Most features wait in multiple queues and require hand-offs across 4–6 groups, causing long lead times and late defect discovery.
Which action best optimizes value delivery and flow while maintaining quality and alignment?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: The core mismatch is functional silos optimizing local efficiency while value flows end-to-end through the value stream. Creating an ART aligned to the value stream enables long-lived, cross-functional teams to deliver features with fewer hand-offs and shorter queues. Quality and alignment are maintained by built-in quality practices and agreed portfolio guardrails applied within the ART’s ways of working.
When organizational structure is functional, work repeatedly crosses boundaries, creating queues, delays, and feedback gaps that degrade flow and quality. In SAFe, improving value delivery starts by aligning delivery around the value stream (often via an ART) so that most work can be completed within a small number of stable, cross-functional teams.
This improves flow by reducing hand-offs and unmanaged queues, shortening feedback loops, and allowing defects to be found and fixed within the team rather than late in integration.
Centralizing “end-of-line” activities or adding gates typically increases queues and delays, while sharing specialists across many efforts increases context switching and hidden WIP.
Aligning teams to the value stream reduces hand-offs and unmanaged queues while preserving built-in quality through shared Definition of Done and governance guardrails.
Topic: SAFe Mindset, Values, and Principles
Midway through a PI, an ART has several cross-team dependencies slipping and a few defects discovered during integration. Business stakeholders are anxious and ask for an updated forecast. Which action is NOT aligned with SAFe transparency to build trust and enable better decision-making on the ART?
Best answer: A
What this tests: SAFe Mindset, Values, and Principles
Explanation: On an ART, transparency means making actual progress, risks, and constraints visible so leaders and teams can make timely trade-offs and adjustments. Filtering status to appear “green” hides reality, which damages trust and typically worsens outcomes. The best decisions come from shared, objective information across the train.
Transparency is a SAFe core value that enables trust and better decision-making by creating a shared understanding of what is really happening on the ART. When dependencies slip and integration defects appear, the train needs objective signals (progress, quality, risks, constraints) to decide whether to re-plan, de-scope, swarm, or adjust priorities.
SAFe-aligned transparency practices include sharing objective progress and quality measures across the train, keeping risks, slipped dependencies, and impediments visible, and updating forecasts based on real data rather than optimistic status.
The key is to expose issues early while there is still time to respond, rather than managing perceptions.
Hiding real progress and problems undermines transparency, erodes trust, and delays decisions when course correction is still possible.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio team wants to use an AI assistant to draft an epic hypothesis statement for an initiative described only as: “Use AI to improve customer onboarding.” Before finalizing the epic hypothesis statement and validation approach, what should you obtain/verify first?
Best answer: A
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: An epic hypothesis statement is only useful if it is testable. Getting clear measurable outcomes, leading indicators, and current baselines lets you prompt AI to draft a hypothesis that can be validated with evidence and adjusted quickly. Without those measures, the statement becomes vague and cannot guide investment decisions or learning.
In Lean Portfolio Management, an epic hypothesis statement expresses a testable prediction about business value (what will improve and for whom) and the expected outcomes. Using AI to draft it is effective only when you provide validation-ready inputs: the measurable outcomes that define success, the leading indicators that show early progress, and the current baseline so improvement can be quantified. Those measures anchor the hypothesis, shape the MVP/experiments, and enable evidence-based decisions (pivot, persevere, or stop) as learning emerges. Tool selection, capacity, and detailed designs can follow, but they should not precede clarifying how value and learning will be measured.
You must define how success will be measured (outcomes, leading indicators, and baselines) before AI can draft a testable epic hypothesis statement.
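The idea of "validation-ready inputs" can be made concrete with a minimal check. This sketch is hypothetical (the field names and the example initiative data are invented for illustration, not SAFe artifacts); it simply encodes the rule that outcomes, leading indicators, and baselines must all exist before an AI assistant is asked to draft a testable hypothesis.

```python
# Hypothetical sketch: gate an AI-drafting step on validation-ready inputs.
# Field names and values below are assumptions made up for this example.
hypothesis_inputs = {
    "outcome": "Reduce onboarding drop-off for new commercial customers",
    "leading_indicators": ["% of signups completing step 2 within 24h"],
    "baseline": {"onboarding_completion_rate": 0.62},
}

REQUIRED = ("outcome", "leading_indicators", "baseline")

def validation_ready(inputs: dict) -> bool:
    """True only when measurable outcomes, leading indicators,
    and a current baseline are all present and non-empty."""
    return all(inputs.get(key) for key in REQUIRED)

print(validation_ready(hypothesis_inputs))
print(validation_ready({"outcome": "Use AI to improve customer onboarding"}))
```

The vague initiative statement fails the check precisely because nothing in it says what will improve, by how much, or from what starting point.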
Topic: SAFe Mindset, Values, and Principles
An ART is missing several iteration goals. In response, a functional manager starts assigning individual tasks to team members daily, requires pre-approval for design decisions, and tracks “hours utilized” to ensure everyone is busy. Morale drops and work-in-process grows.
What should the SAFe leader do to best restore intrinsic motivation while improving flow and value delivery?
Best answer: D
What this tests: SAFe Mindset, Values, and Principles
Explanation: Centralized task assignment, frequent approvals, and utilization targets reduce autonomy and suppress intrinsic motivation. In SAFe, leaders optimize outcomes by setting clear objectives, quality expectations, and decision-making guardrails, then empowering teams to self-organize. This reduces handoffs and approval queues, improving flow while staying aligned to PI/iteration goals.
This scenario shows management behaviors that suppress intrinsic motivation: micromanaging work (tasking individuals), adding approvals, and optimizing for utilization instead of flow. In SAFe, leaders get better performance by creating alignment and enabling autonomy—teams closest to the work should make day-to-day decisions within clear guardrails (quality policies, architectural runway, WIP limits, and shared PI/iteration objectives).
Practical correction: replace daily task assignment and design pre-approvals with clear iteration and PI objectives, quality guardrails, and WIP limits, then let teams self-organize and make day-to-day decisions within those boundaries.
The key is to increase autonomy while maintaining alignment and quality, which improves both motivation and delivery speed.
Autonomy within clear objectives and guardrails increases intrinsic motivation and reduces queues caused by centralized tasking and approvals.
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio epic has been in the Portfolio Kanban Analysis state while Product Management and architects explore options. In the next portfolio sync, an executive asks for proof that the epic has moved beyond analysis and is now being implemented.
Which evidence best validates that the epic is in implementation (not just analysis)?
Best answer: C
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: Analysis produces decision-supporting artifacts (hypothesis, MVP, estimates, WSJF, and the Lean business case) to enable a go/no-go and sequencing decision. Implementation starts when the epic is approved and pulled into delivery, where it is executed by one or more ARTs. Evidence of implementation is therefore a delivery commitment and planned execution, such as PI Objectives tied to epic-derived features.
In SAFe, analyzing an epic is about reducing uncertainty so decision-makers can decide whether to proceed and when. Typical analysis outputs include the epic hypothesis statement, an MVP concept, sizing/estimates, WSJF, and a Lean business case—these support a portfolio-level go/no-go and sequencing decision.
Implementing an epic begins after that decision, when the epic is pulled into execution by the delivery value stream (often via one or more ARTs). At that point, epic work is decomposed into features (and then stories/enablers), planned in PI Planning, and tracked through objective evidence like PI Objectives, system demos, and flow/fitness metrics.
The strongest indicator of implementation is a PI-level delivery commitment tied to epic-derived work, not a planning artifact or meeting activity.
Committed PI Objectives on an ART show the epic has been approved, pulled, and is being executed in a PI.
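The explanation above lists WSJF among the analysis outputs used for sequencing epics. For readers reviewing economic prioritization, a minimal worked example follows: SAFe's WSJF divides Cost of Delay (user-business value + time criticality + risk reduction/opportunity enablement, all as relative estimates) by job size. The epic names and scores here are invented for illustration.

```python
# Sketch of WSJF (Weighted Shortest Job First) prioritization.
# Scores are relative estimates; names and values are made up.
def wsjf(business_value, time_criticality, risk_opportunity, job_size):
    # Cost of Delay = value + urgency + risk reduction / opportunity enablement
    cost_of_delay = business_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

epics = {
    "Quote-to-Bind revamp": wsjf(13, 8, 5, 8),   # CoD 26 / size 8 = 3.25
    "Reporting refresh":    wsjf(5, 3, 2, 5),    # CoD 10 / size 5 = 2.0
}

# Higher WSJF is pulled first: shorter jobs with higher cost of delay win
ranked = sorted(epics, key=epics.get, reverse=True)
print(ranked)
```

Note that WSJF informs the go/no-go and sequencing decision at the end of analysis; it is not itself evidence of implementation.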
Topic: Exploring Lean Portfolio Management (LPM)
A portfolio leader asks you to “create OKRs for the next PI” to improve alignment across multiple ARTs. You are not given the enterprise strategy details.
What should you obtain or clarify first before drafting the OKRs?
Best answer: C
What this tests: Exploring Lean Portfolio Management (LPM)
Explanation: OKRs start with the outcomes the enterprise is trying to achieve and how success will be measured. In SAFe, those desired outcomes should connect directly to strategic themes so that ART and team execution can align to strategy. Without the strategic themes and outcome measures, any OKRs risk becoming a list of outputs or activities.
OKRs are a lightweight way to express intent as outcomes (Objectives) and measurable results (Key Results). In SAFe portfolio context, they are most useful when they translate enterprise strategy into clear, measurable outcomes that reinforce the current strategic themes. Therefore, before writing OKRs for a PI, you first clarify which strategic themes are in play and what outcomes and measures matter (for example, customer behavior, time-to-market, quality, cost, or risk reduction). Once the “why” and the success measures are clear, teams and ARTs can derive aligned PI Objectives and choose the right work to achieve those results. The common failure mode is writing OKRs as delivery checklists rather than outcome measures tied to strategy.
OKRs are meant to express measurable outcomes that align execution to strategic themes, so you must know the intended strategic direction and outcomes first.
Topic: SAFe Mindset, Values, and Principles
An ART used an AI assistant to summarize pre-PI planning notes and auto-draft team PI Objectives, dependencies, and risks. After PI Planning, the train shows missed PI Objectives, growing WIP, poor predictability, and significant rework because teams built against the wrong assumptions. In the debrief, teams report they “trusted the AI summary,” did not trace items back to source notes, and didn’t disclose where AI was used.
What is the most likely underlying cause?
Best answer: A
What this tests: SAFe Mindset, Values, and Principles
Explanation: The key clue is that the train treated AI-created objectives and dependencies as authoritative without validating them against trusted sources or disclosing AI usage. That invites hallucinations and biased or incomplete summaries to become “the plan,” which drives misalignment, rework, and unpredictable outcomes. A practical mitigation is to apply built-in quality to AI-assisted planning: validate outputs, sanitize inputs, and make AI usage transparent.
The underlying issue is not the symptoms (missed objectives, high WIP, poor predictability), but the quality and alignment of the intent feeding execution. When AI is used to generate or summarize PI Objectives, dependencies, and risks, it can hallucinate, omit critical details, or introduce bias. If the ART does not validate AI outputs against source artifacts and does not disclose where AI contributed, incorrect assumptions propagate into plans and commitments, leading to misalignment and rework.
Practical mitigations consistent with SAFe ways of working include validating AI outputs against source artifacts before committing to them, sanitizing and tracing the inputs provided to the assistant, and disclosing where AI contributed to planning artifacts.
The takeaway is to treat AI assistance as a hypothesis generator, not a source of truth, and apply built-in quality to planning artifacts.
Relying on AI outputs without validation or transparency allowed incorrect assumptions to drive plans, creating misalignment and rework.
Use the SAFe Agilist Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the SAFe Agilist guide on PMExams.com for concept review, then return here for PM Mastery practice.