Try 10 focused AWS SAA-C03 questions on Design Resilient Architectures, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | AWS SAA-C03 |
| Topic area | Design Resilient Architectures |
| Blueprint weight | 26% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Design Resilient Architectures for AWS SAA-C03. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 26% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Design Resilient Architectures
Which of the following statements about using Amazon RDS read replicas for scalability and resilience are true? (Select THREE.)
Options:
A. Because replication to read replicas is asynchronous, applications that require strongly consistent reads should continue to read from the primary instance.
B. Read replicas are primarily used to offload read traffic from the primary DB instance, improving scalability for read-heavy workloads.
C. Adding multiple read replicas automatically increases write throughput on the primary because write operations can be redirected to the replicas.
D. Read replicas accept both read and write traffic, and any changes made on a replica are replicated back to the primary instance.
E. In Amazon RDS (non-Aurora), read replicas do not provide automatic failover; Multi-AZ deployments are used for automatic failover instead.
F. Using read replicas guarantees that all queries see the most recent committed data because replication uses synchronous updates by default.
Correct answers: A, B and E
Explanation: Amazon RDS read replicas are designed to improve scalability and performance for read-heavy workloads by offloading read traffic from the primary database instance. They use asynchronous replication, which means updates are sent from the primary to the replicas with some delay. This architecture provides eventual consistency on the replicas: they will converge to the primary’s state, but at any given moment they may be slightly behind.
Because of this eventual consistency, read replicas are ideal for use cases where slightly stale data is acceptable, such as reporting, analytics, or non-critical user-facing reads. For operations that require the latest committed data (for example, transactional checks immediately after a write), applications should continue to read from the primary.
In standard Amazon RDS (non-Aurora), read replicas are not an automatic high-availability mechanism. They do not automatically take over if the primary fails. Instead, Multi-AZ deployments are used to provide automatic failover within a Region. A read replica can be manually promoted to become a standalone primary in a disaster recovery scenario, but that requires explicit action and typically some downtime.
Read replicas are read-only under normal conditions and cannot be used to increase write capacity on the primary. All writes still target the primary instance, which then replicates changes out to the replicas.
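The routing rule described above can be sketched as a small consistency-aware endpoint selector. This is an illustrative sketch, not an AWS API: the endpoint hostnames and the round-robin policy are assumptions for demonstration.

```python
# Sketch: route writes and read-after-write reads to the primary;
# spread stale-tolerant reads across read replicas.
import itertools

PRIMARY = "mydb.primary.example.internal"
REPLICAS = ["mydb.replica-1.example.internal",
            "mydb.replica-2.example.internal"]

_replica_cycle = itertools.cycle(REPLICAS)

def choose_endpoint(operation: str, needs_fresh_read: bool = False) -> str:
    """Pick a database endpoint based on the operation's consistency needs."""
    if operation == "write":
        return PRIMARY              # replicas are read-only
    if needs_fresh_read:
        return PRIMARY              # async replication may lag behind
    return next(_replica_cycle)     # round-robin eventually consistent reads
```

A checkout flow would call `choose_endpoint("read", needs_fresh_read=True)`, while a catalog page could accept a replica.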
Topic: Design Resilient Architectures
Which THREE statements about using Amazon EventBridge to design loosely coupled, event-driven architectures are correct? (Select THREE.)
Options:
A. Each EventBridge rule can forward matching events to only one target, so sending events to multiple downstream services requires duplicating the events onto multiple buses.
B. EventBridge rules evaluate each incoming event against an event pattern and can filter on specific fields, including nested JSON attributes, before forwarding the event to targets.
C. EventBridge guarantees strict FIFO ordering and exactly-once delivery for all events on a bus, regardless of source or target.
D. Amazon EventBridge can receive events directly from supported SaaS partner applications onto a partner event bus in your account without requiring you to poll the SaaS system.
E. EventBridge provides loose coupling by allowing event producers and consumers to remain unaware of each other; adding new consumers typically does not require changes to event producers.
F. To send events from a custom on-premises application to EventBridge, you must first enqueue them in Amazon SQS and configure EventBridge to poll the queue.
Correct answers: B, D and E
Explanation: Amazon EventBridge is a serverless event bus service that simplifies building event-driven, loosely coupled architectures. Producers send events to an event bus without knowing which consumers will process them. Rules on the bus inspect each event’s content using event patterns and, when there is a match, route those events to one or more targets such as Lambda functions, Step Functions, Kinesis streams, or other AWS services.
EventBridge supports three main categories of event buses: the default event bus for AWS service events and custom events in an account, partner event buses for supported SaaS applications that push events into your account, and custom event buses you create for your own applications or organizational separation. This makes it easy to route and normalize events from SaaS apps, AWS services, and custom apps using a common, centralized mechanism.
While EventBridge provides high availability and at-least-once delivery, it does not offer strict FIFO ordering or exactly-once semantics. For workloads that need strong ordering guarantees or transactional semantics, you typically add complementary services or design patterns (such as idempotent handlers, deduplication keys, or state machines) on top of EventBridge.
Overall, EventBridge’s content-based filtering and multi-target rules are central to enabling fan-out and bus-level decoupling without tight integration between producers and consumers.
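The content-based filtering described above can be illustrated with a simplified matcher. Real EventBridge patterns support additional operators (prefix, numeric, anything-but); this sketch covers only the exact-value matching on nested fields that the question tests, and the rule and event shown are made up.

```python
# Simplified EventBridge-style pattern matching: a rule matches when every
# listed field in the event contains one of the rule's allowed values.

def get_nested(event: dict, dotted_key: str):
    """Walk a dotted key like 'detail.status' into a nested dict."""
    node = event
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def pattern_matches(pattern: dict, event: dict) -> bool:
    """Every pattern field must hold one of its allowed values."""
    return all(get_nested(event, key) in allowed
               for key, allowed in pattern.items())

rule_pattern = {"source": ["orders.service"],
                "detail.status": ["SHIPPED", "DELIVERED"]}

event = {"source": "orders.service",
         "detail": {"status": "SHIPPED", "orderId": "1234"}}
```

Producers emit events like `event` without knowing which rules exist; adding a new consumer means adding a new rule, not changing the producer.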
Topic: Design Resilient Architectures
A company defines its entire multi-tier application infrastructure using AWS CloudFormation templates. During a Regional failover, the same templates are used to provision an identical stack in a secondary Region, ensuring consistent networking, security groups, and database configuration. Which AWS Well-Architected pillar is most directly supported by this approach?
Options:
A. Security
B. Performance Efficiency
C. Reliability
D. Cost Optimization
Best answer: C
Explanation: The scenario describes using AWS CloudFormation to define the entire application stack and then redeploy it in a secondary Region during failover. This use of infrastructure as code ensures that the failover environment is consistent with the primary environment, reducing configuration drift and human error.
This pattern directly supports the Reliability pillar of the AWS Well-Architected Framework. Reliability emphasizes the ability of a workload to perform its intended function correctly and consistently, including under failure conditions. Key design principles of this pillar include "test recovery procedures," "automatically recover from failure," and "scale horizontally to increase aggregate workload availability." Automating infrastructure deployment across Regions with IaC is a core way to meet these goals.
While other pillars can benefit indirectly from IaC, the central benefit in this question is predictable, automated recovery and consistent infrastructure during Regional failover, which is the essence of the Reliability pillar.
Topic: Design Resilient Architectures
Which AWS service should a solutions architect use to provide automatic traffic failover between application endpoints in multiple AWS Regions while clients continue to use the same static anycast IP addresses?
Options:
A. Application Load Balancer (ALB)
B. Amazon CloudFront
C. Amazon Route 53 latency-based routing
D. AWS Global Accelerator
Best answer: D
Explanation: AWS Global Accelerator is a networking service that provides two static anycast IP addresses that serve as a fixed entry point to your application. These IPs are advertised from AWS edge locations around the world. Global Accelerator monitors the health of configured Regional endpoints (such as Network Load Balancers, Application Load Balancers, or EC2 instances) and automatically directs user traffic to the closest healthy endpoint.
Because the client always connects to the same static anycast IP addresses, failover between Regions happens without clients having to resolve new DNS records or change IPs. This makes Global Accelerator well-suited for highly available, fault-tolerant architectures that require fast, automatic failover across Regions while keeping a stable client-facing IP.
Route 53, ALB, and CloudFront all help with availability and performance but do not provide this specific combination of global anycast static IPs plus automatic cross-Region failover at the network layer.
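The selection behavior described above can be sketched as "route to the nearest healthy endpoint while the client-facing IPs never change." The endpoint data and latency figures below are illustrative assumptions, not Global Accelerator's actual algorithm.

```python
# Sketch: Global Accelerator-style routing. Clients always connect to the
# same static anycast IPs; the service picks a healthy Regional endpoint.

STATIC_ANYCAST_IPS = ("75.2.0.1", "99.83.0.1")   # fixed, client-facing (made up)

endpoints = [
    {"region": "us-east-1", "healthy": True, "latency_ms": 20},
    {"region": "eu-west-1", "healthy": True, "latency_ms": 85},
]

def route(endpoints: list) -> str:
    """Return the lowest-latency healthy Region. Failover requires no
    client-side DNS lookup or IP change."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["latency_ms"])["region"]
```

Marking `us-east-1` unhealthy shifts traffic to `eu-west-1` on the next request, while `STATIC_ANYCAST_IPS` stays constant for clients.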
Topic: Design Resilient Architectures
A company runs an e-commerce application on Amazon RDS for MySQL in a Multi-AZ deployment. The workload is read-heavy, so the company has added several RDS read replicas in the same Region to improve scalability. The application must always show strongly consistent data for checkout and inventory operations, and the company relies on RDS Multi-AZ for automatic failover. Which of the following approaches using the read replicas should the solutions architect AVOID? (Select TWO.)
Options:
A. Rely on the read replicas for automatic failover by having the application randomly select between the primary endpoint and replica endpoints for all database connections.
B. Run periodic reporting and analytics jobs against the read replicas, accepting that results may be a few seconds behind the primary.
C. Keep any operations that must immediately read their own recent writes, such as displaying an order confirmation, pointed at the primary instance instead of the read replicas.
D. Configure the application to send some write traffic directly to the read replicas to horizontally scale write throughput.
E. Route product catalog and search queries to the read replicas while keeping checkout and inventory queries on the primary instance.
Correct answers: A and D
Explanation: Amazon RDS read replicas are designed to offload read traffic from the primary instance. They use asynchronous replication, so they provide eventual consistency rather than guaranteed read-after-write consistency.
For workloads that can tolerate slightly stale data (for example, product catalog browsing or reporting), routing queries to read replicas improves scalability and reduces load on the primary. However, reads that must immediately reflect the most recent writes (such as checkout or inventory updates) should continue to use the primary.
High availability and automatic failover within a Region are provided by RDS Multi-AZ deployments through the single primary endpoint, not by randomly sending traffic to read replicas. Read replicas can be promoted manually for DR, but they are not a substitute for Multi-AZ automatic failover.
The unsafe patterns are sending write traffic to read replicas (which are read-only) and trying to use read replicas as an automatic failover mechanism by distributing all traffic across primary and replicas.
Topic: Design Resilient Architectures
An ecommerce company runs a multi-AZ web application in us-east-1 and plans to add us-west-2 as a warm standby Region. The company wants automated, repeatable infrastructure deployments with minimal configuration drift during failover events. Which solutions meet these requirements? (Select THREE.)
Options:
A. Store application infrastructure definitions in nested AWS CloudFormation templates in AWS CodeCommit and configure a CI/CD pipeline that deploys the same templates to us-east-1 and us-west-2 on every approved change.
B. Create the full stack manually in us-east-1, then use console wizards to copy Amazon Machine Images and database snapshots to us-west-2 and launch resources there during disaster-recovery drills following a documented runbook.
C. Place a single AWS CloudFormation template for the application in a version-controlled repository and use CloudFormation StackSets with service-managed permissions to automatically deploy and update stacks in both Regions.
D. Use custom AMIs and EC2 user data scripts to create all resources on demand in each Region and rely on operations staff to start instances and configure networking when a failover is required.
E. Define the entire application stack as AWS CDK constructs and use an AWS CodePipeline with cross-Region actions to synthesize and deploy CloudFormation stacks to both us-east-1 and us-west-2 from the same source repository.
Correct answers: A, C and E
Explanation: To ensure reliable, repeatable infrastructure during cross-Region failover, the key is to manage all infrastructure as code and deploy it through automated pipelines to every Region where the application runs. Services like AWS CloudFormation, AWS CDK, and CloudFormation StackSets provide declarative templates that can be version-controlled and promoted via CI/CD. When both Regions are always updated from the same templates, configuration drift is minimized and failover mainly becomes a routing or data failover decision, not a last-minute provisioning exercise.
Approaches that depend on manual console actions, ad hoc runbooks, or one-off resource copies may work occasionally but are not resilient. They increase the chance of missing dependencies, misconfiguring networking or IAM, and discovering issues only during a real failover event. Using IaC plus automation addresses both the reliability and operational excellence pillars of the AWS Well-Architected Framework.
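The "same template, every Region" idea can be sketched as a pipeline step that fans one version-controlled template out to all Regions. The stack and template names are placeholders; the flags mirror `aws cloudformation deploy`, but treat the exact command as an assumption to verify against your CLI version.

```python
# Sketch: generate identical CloudFormation deploy commands per Region so
# both the primary and warm-standby Regions are updated on every change.

REGIONS = ["us-east-1", "us-west-2"]

def deploy_commands(stack_name: str, template_file: str) -> list:
    """One identical deploy command per Region, from one template."""
    return [
        f"aws cloudformation deploy --region {region} "
        f"--stack-name {stack_name} --template-file {template_file} "
        f"--no-fail-on-empty-changeset"
        for region in REGIONS
    ]
```

Because every approved change runs the same commands against both Regions, the standby cannot silently drift from the primary.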
Topic: Design Resilient Architectures
A company runs several stateless microservices on Amazon ECS using Fargate tasks in two Availability Zones within a single VPC. Each service currently exposes its own public IP address, making deployment and failover difficult to manage. The company needs a single HTTPS endpoint that can route requests to different services based on URL paths. The solution must automatically stop sending traffic to unhealthy tasks, distribute traffic across both Availability Zones, and minimize operational overhead. Which architecture should a solutions architect recommend?
Options:
A. Create a Network Load Balancer with TCP listeners and register the Fargate tasks as targets in only one Availability Zone; use Amazon Route 53 weighted records to distribute traffic across services.
B. Deploy an Application Load Balancer in a single subnet and register only the tasks in that subnet as targets, relying on ECS Service Auto Scaling to replace failed tasks.
C. Expose each Fargate service through its own public Network Load Balancer and configure Amazon Route 53 failover routing policies between them.
D. Create an internet-facing Application Load Balancer with an HTTPS listener, configure path-based routing rules to multiple target groups that include Fargate tasks in both Availability Zones, and enable health checks on each target group.
Best answer: D
Explanation: An internet-facing Application Load Balancer with an HTTPS listener provides the single endpoint, and path-based routing rules forward each URL path to the appropriate target group. Registering Fargate tasks from both Availability Zones and enabling health checks on each target group distributes traffic across AZs and automatically stops routing to unhealthy tasks. The other options fall short: Network Load Balancers operate at Layer 4 and cannot route by URL path, and confining targets to a single subnet or Availability Zone sacrifices availability while Route 53-based workarounds add DNS-management overhead.
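The recommended design can be sketched as listener rules mapping path patterns to target groups, with only healthy targets receiving traffic. Paths, target-group names, and task IDs below are illustrative assumptions.

```python
# Sketch: ALB-style path-based routing with target-group health checks.

rules = [("/api/orders*", "orders-tg"),
         ("/api/users*",  "users-tg")]
DEFAULT_TARGET_GROUP = "web-tg"

targets = {
    "orders-tg": [{"id": "task-1", "healthy": True},
                  {"id": "task-2", "healthy": False}],
    "users-tg":  [{"id": "task-3", "healthy": True}],
    "web-tg":    [{"id": "task-4", "healthy": True}],
}

def route_request(path: str) -> list:
    """Match the first rule whose path prefix fits, then return only
    healthy targets (mimicking target-group health checks)."""
    group = DEFAULT_TARGET_GROUP
    for pattern, tg in rules:
        if path.startswith(pattern.rstrip("*")):
            group = tg
            break
    return [t["id"] for t in targets[group] if t["healthy"]]
```

Note that the unhealthy `task-2` never appears in routing results, even though it is still registered.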
Topic: Design Resilient Architectures
A solutions architect recommends adding Amazon CloudFront in front of an application and using Amazon ElastiCache to store frequently accessed data, with the goal of reducing request latency and improving the system’s ability to handle traffic spikes. Which AWS Well-Architected Framework pillar does this recommendation primarily address?
Options:
A. Performance Efficiency
B. Cost Optimization
C. Reliability
D. Security
Best answer: A
Explanation: The scenario describes adding Amazon CloudFront as a content delivery network in front of the application and using Amazon ElastiCache to store frequently accessed data. Both actions are classic caching strategies that reduce round trips to the origin and database, lower end-to-end latency, and enable the system to serve more requests without proportionally increasing back-end capacity.
These outcomes—improving responsiveness under variable load and using compute resources more efficiently—are key goals of the Performance Efficiency pillar in the AWS Well-Architected Framework. CloudFront caches content at edge locations close to users, and ElastiCache offloads repeated read traffic from databases or application servers. Together, they improve performance and scalability.
Other pillars such as Cost Optimization and Reliability can benefit indirectly from caching, but they are not the primary focus of the design change described in the question, which explicitly emphasizes latency reduction and handling traffic spikes.
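The ElastiCache half of this recommendation follows the cache-aside (lazy loading) pattern: check the cache first, read the origin on a miss, and populate the cache for subsequent reads. In this sketch the "database" is just a dict; keys and values are made up.

```python
# Sketch: cache-aside reads, in the spirit of ElastiCache in front of a DB.

cache: dict = {}
database = {"product:1": {"name": "widget", "price": 9.99}}
stats = {"hits": 0, "misses": 0}

def get_product(key: str):
    """Serve from cache when possible; otherwise read the origin and cache."""
    if key in cache:                 # cache hit: no origin round trip
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1             # cache miss: read origin, then populate
    value = database.get(key)
    if value is not None:
        cache[key] = value
    return value
```

The first read pays the origin round trip; every later read of the same key is served from memory, which is how caching absorbs traffic spikes without scaling the backend.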
Topic: Design Resilient Architectures
A company runs a multi-tier web application in one AWS Region using Amazon RDS for MySQL. They require cross-Region disaster recovery (DR) with RPO 1 hour, RTO 24 hours, and must minimize ongoing DR cost by avoiding always-on application servers in the DR Region. Which TWO DR approaches should the solutions architect AVOID recommending? (Select TWO.)
Options:
A. Maintain a minimal pilot light in the DR Region with a small RDS instance receiving cross-Region replication and no EC2 application servers running; on failover, scale up instances via Auto Scaling and update DNS.
B. Deploy an active-active architecture with fully scaled stacks in both Regions and use Amazon Route 53 latency-based routing to distribute user traffic across them in normal operation.
C. Run a warm standby in the DR Region with a smaller fleet of EC2 application servers behind an Application Load Balancer and a cross-Region RDS read replica that can be promoted during a disaster.
D. Configure RDS automated backups with frequent log shipping to a secondary Region, enable S3 Replication for static content, and store application AMIs and user data scripts to launch EC2 instances on demand during recovery.
E. Use AWS Backup to take hourly RDS snapshots with cross-Region copy, enable S3 Cross-Region Replication, and use AWS CloudFormation to recreate the application stack in the DR Region after an outage.
Correct answers: B and C
Explanation: The scenario requires cross-Region disaster recovery with an RPO of 1 hour and an RTO of 24 hours, while explicitly minimizing ongoing cost by avoiding always-on application servers in the DR Region. This points toward lower-cost DR strategies such as backup-and-restore or pilot light, rather than warm standby or active-active.
Backup-and-restore patterns keep only data copies and infrastructure-as-code templates in the DR Region. They incur low ongoing cost because no application compute runs until a disaster, but they can still meet a 1-hour RPO if backups or log shipping are frequent enough. A 24-hour RTO is relatively relaxed and typically achievable by recreating infrastructure and restoring data.
A pilot light pattern keeps a minimal core (often the database and networking) running in the DR Region, with application servers launched and scaled only during failover. This costs more than pure backup-and-restore but much less than warm standby or active-active, and it meets the requirement of no continuously running application servers.
Warm standby and multi-site active-active both keep EC2 application servers running in the DR Region at all times. That directly violates the requirement to avoid always-on application servers and minimize DR cost. Even though they improve RTO, they are overkill and non-compliant with the stated constraints, so they are the approaches that must be avoided.
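The elimination logic above can be made explicit as a constraint check over the four DR strategies. The RPO/RTO figures below are representative illustrations of each pattern's typical range, not official numbers.

```python
# Sketch: filter DR strategies against the question's constraints
# (RPO <= 1 h, RTO <= 24 h, no always-on application servers in DR).

constraints = {"rpo_hours": 1, "rto_hours": 24, "allow_always_on_app": False}

strategies = {
    "backup_and_restore": {"rpo_hours": 1.0, "rto_hours": 12.0, "always_on_app": False},
    "pilot_light":        {"rpo_hours": 0.5, "rto_hours": 4.0,  "always_on_app": False},
    "warm_standby":       {"rpo_hours": 0.1, "rto_hours": 1.0,  "always_on_app": True},
    "active_active":      {"rpo_hours": 0.0, "rto_hours": 0.1,  "always_on_app": True},
}

def compliant(s: dict) -> bool:
    """A strategy complies only if it meets RPO, RTO, and the cost rule."""
    return (s["rpo_hours"] <= constraints["rpo_hours"]
            and s["rto_hours"] <= constraints["rto_hours"]
            and (constraints["allow_always_on_app"] or not s["always_on_app"]))

allowed = sorted(name for name, s in strategies.items() if compliant(s))
```

Warm standby and active-active fail the check not on RPO or RTO, where they excel, but on the always-on-compute cost constraint.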
Topic: Design Resilient Architectures
Which TWO statements about applying caching strategies on AWS to improve scalability and reduce latency are true? (Select TWO.)
Options:
A. Designing Amazon API Gateway cache keys to include only the request parameters that change the response helps increase cache hit rate and reduce backend load.
B. Setting very long time-to-live (TTL) values for all cached data is always recommended because it guarantees the lowest latency, regardless of how often the underlying data changes.
C. Amazon ElastiCache for Redis is primarily intended as a durable data store and can generally replace Amazon RDS for relational workloads without additional considerations.
D. Placing Amazon CloudFront in front of an Amazon S3 static website reduces latency for global users and lowers direct request load on S3 by caching objects at edge locations.
E. To invalidate outdated objects in an Amazon CloudFront distribution, you must create a new distribution; cached objects cannot be removed before their TTL expires.
Correct answers: A and D
Explanation: Caching on AWS is used to improve performance and scalability by serving repeated requests from faster, closer, and less expensive layers instead of always hitting the origin service or database.
Amazon CloudFront caches content (such as objects in S3 or responses from an HTTP origin) at edge locations. This reduces latency for users worldwide and offloads traffic from the origin. ElastiCache provides an in-memory data store or cache in front of databases or services to reduce query latency and offload reads. API Gateway can cache method responses based on configurable cache keys, which can significantly reduce the number of calls to downstream integrations.
Key design considerations include when to introduce a cache (for example, heavy read traffic or repeated requests), how to define cache keys (only include dimensions that affect the response), and how to manage invalidation and TTLs so that performance gains do not come at the cost of unacceptable staleness.
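The cache-key guidance above can be sketched as a key builder that ignores parameters that do not affect the response. The parameter names and the set of response-affecting parameters are illustrative assumptions.

```python
# Sketch: API Gateway-style cache-key design. Including only parameters
# that change the response lets requests that differ in irrelevant
# parameters (e.g., a session ID) share one cache entry.

RESPONSE_AFFECTING_PARAMS = {"productId", "locale"}   # assumed for this example

def cache_key(method: str, path: str, query_params: dict) -> str:
    """Build a deterministic key from only the response-affecting params."""
    relevant = sorted((k, v) for k, v in query_params.items()
                      if k in RESPONSE_AFFECTING_PARAMS)
    return f"{method}:{path}?" + "&".join(f"{k}={v}" for k, v in relevant)
```

Two requests that differ only in `sessionId` produce the same key and therefore the same cache hit, which is exactly how a well-chosen key raises the hit rate.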
Use the AWS SAA-C03 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AWS SAA-C03 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.