Prepare for the AWS Certified Solutions Architect Associate (SAA-C03) exam with free sample questions, a full-length diagnostic, topic drills, timed practice, scenario questions covering secure, resilient, high-performing, and cost-optimized architectures, and detailed explanations in IT Mastery.
SAA-C03 is AWS’s Solutions Architect Associate certification for candidates who need to design secure, resilient, high-performing, and cost-aware AWS architectures. If you are searching for SAA-C03 sample questions, SAA-C03 practice exam questions, an AWS Solutions Architect Associate practice test, mock exam, or exam simulator, this is the main IT Mastery page to start on web and continue on iOS or Android with the same IT Mastery account.
Start a practice session for AWS Certified Solutions Architect - Associate (SAA-C03) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.
Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS or IT Mastery app on Google Play (Android) and use the same IT Mastery account across web and mobile.
Free diagnostic: Try the 65-question AWS SAA-C03 full-length practice exam before subscribing. Use it as an architecture baseline, then return to IT Mastery for timed mocks, domain drills, explanations, and the full Solutions Architect Associate question bank.
SAA-C03 questions usually reward the option that satisfies the stated requirement with the cleanest architectural fit, lowest operational burden, and strongest alignment to AWS defaults and service constraints.
| Domain | Weight |
|---|---|
| Design Secure Architectures | 30% |
| Design Resilient Architectures | 26% |
| Design High-Performing Architectures | 24% |
| Design Cost-Optimized Architectures | 20% |
Use these filters before choosing between two plausible architecture answers:
| Area | What strong readiness looks like |
|---|---|
| Secure architectures | You can choose identity, network, encryption, logging, and data-protection controls that match the stated risk. |
| Resilient architectures | You can reason through failure domains, decoupling, backups, replication, failover, and recovery objectives. |
| High-performing architectures | You can match compute, storage, database, caching, and integration services to throughput and latency needs. |
| Cost-optimized architectures | You can reduce waste without violating availability, durability, performance, or compliance requirements. |
| Service-selection trade-offs | You can explain why the best AWS-native managed option beats a more manual or overbuilt alternative. |
| Day | Practice focus |
|---|---|
| 7 | Take the free full-length diagnostic and tag misses by domain and AWS service family. |
| 6 | Drill IAM, VPC, encryption, private access, logging, and secure data-access scenarios. |
| 5 | Drill resilience: Multi-AZ, replication, backup, queueing, decoupling, and failover decisions. |
| 4 | Drill performance and cost trade-offs across compute, storage, databases, caching, and networking. |
| 3 | Complete a timed mixed set and review every architecture trade-off, not just the final answer. |
| 2 | Revisit weak service pairs such as SQS vs SNS, EFS vs FSx, RDS vs DynamoDB, and CloudFront vs Global Accelerator. |
| 1 | Do a light review of patterns and constraints; avoid trying to memorize new edge cases. |
If you can complete several unseen mixed attempts above roughly 75% and explain the architecture trade-off behind each correct answer, it is usually better to take the exam than keep drilling until answers feel memorized. The goal is to recognize AWS design patterns under pressure, not to memorize a large bank of stems.
Use these child pages when you want focused IT Mastery practice before returning to mixed sets and timed mocks.
Need concept review first? Read the AWS SAA-C03 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks, topic drills, and full IT Mastery practice.
These are original IT Mastery practice questions aligned to SAA-C03 architecture, resilience, security, performance, cost optimization, migration, and AWS service-selection decisions. They are not AWS exam questions and are not copied from any exam sponsor. Use them to check readiness here, then continue in IT Mastery with mixed sets, topic drills, and timed mocks.
Topic: Domain 4: Design Cost-Optimized Architectures
Which of the following statements about placing NAT gateways in public subnets, and the impact on redundancy and cost, are true? (Select TWO.)
Options:
Correct answers: D and E
Explanation: The statement about using a single NAT gateway for multiple Availability Zones causing cross-AZ data transfer is correct because any traffic from an instance to a NAT gateway in another Availability Zone must cross AZ boundaries and incur cross-AZ charges.
The statement about deploying one NAT gateway per Availability Zone and routing each private subnet to the NAT gateway in the same Availability Zone is also correct. This design keeps egress traffic within each Availability Zone (avoiding cross-AZ transfer for that path) and ensures that if one Availability Zone fails, instances in the other AZs still have their own NAT gateways, improving resiliency.
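A rough way to compare the two NAT designs is to price them side by side. All rates below are assumptions for illustration, not current AWS prices:

```python
# Illustrative cost comparison: one shared NAT gateway vs. one per AZ.
# All prices are assumed figures for the sketch, not published AWS rates.
NAT_HOURLY = 0.045       # $/hour per NAT gateway (assumed)
NAT_PER_GB = 0.045       # $/GB processed by a NAT gateway (assumed)
CROSS_AZ_PER_GB = 0.02   # $/GB cross-AZ transfer, both directions (assumed)
HOURS = 730              # hours in a month

def monthly_cost(azs: int, gb_per_az: float, nat_per_az: bool) -> float:
    gateways = azs if nat_per_az else 1
    total_gb = azs * gb_per_az
    # With a single shared NAT gateway, traffic from the other AZs must
    # cross AZ boundaries and incurs cross-AZ transfer charges.
    cross_az_gb = 0 if nat_per_az else (azs - 1) * gb_per_az
    return (gateways * NAT_HOURLY * HOURS
            + total_gb * NAT_PER_GB
            + cross_az_gb * CROSS_AZ_PER_GB)

single = monthly_cost(azs=3, gb_per_az=1000, nat_per_az=False)
per_az = monthly_cost(azs=3, gb_per_az=1000, nat_per_az=True)
print(f"single NAT: ${single:,.2f}, per-AZ NAT: ${per_az:,.2f}")
```

At these assumed rates the per-AZ design costs more in gateway hours but removes the cross-AZ charge and the single point of failure, which is exactly the redundancy-versus-cost trade-off the question probes.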
Topic: Domain 4: Design Cost-Optimized Architectures
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The following CloudWatch metrics summarize a typical weekday.
| Time (UTC) | Avg requests/min per instance | Avg CPU per instance | ASG desired capacity |
|---|---|---|---|
| 01:00 | 120 | 6% | 6 |
| 09:00 | 1,800 | 45% | 6 |
| 13:00 | 2,100 | 70% | 6 |
| 20:00 | 900 | 30% | 6 |

Based only on this data, which action should the solutions architect take to reduce compute costs while maintaining performance?
Options:
Best answer: B
Explanation: Configuring scheduled scaling on the Auto Scaling group aligns capacity with predictable demand. The key data is the 01:00 row, which shows an Avg CPU per instance of 6% while the ASG desired capacity remains 6. This clearly indicates that instances are mostly idle overnight. By lowering the minimum and desired capacity during these hours and restoring them before business hours, the company can stop paying for unnecessary instances while still satisfying the higher utilization periods at 09:00 and 13:00.
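The savings from scheduled scaling can be sketched with quick arithmetic. The overnight window and the per-instance price below are assumptions for illustration:

```python
# Sketch: savings from a scheduled action that drops desired capacity from
# 6 to 2 instances overnight (00:00-08:00 UTC). The hourly price is an
# assumed illustrative figure, not a quoted AWS rate.
PRICE_PER_HOUR = 0.10    # assumed on-demand price per instance-hour
DAYS = 30
overnight_hours = 8
instances_removed = 6 - 2

saved = instances_removed * overnight_hours * DAYS * PRICE_PER_HOUR
print(f"Estimated monthly savings: ${saved:.2f}")
```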
Topic: Domain 1: Design Secure Architectures
Your company owns an S3 bucket in its production account. An external vendor uses its own AWS account and needs read-only access to that single bucket for 3 months. Access must be limited to IAM principals in that vendor account only. Which THREE approaches should you AVOID? (Select THREE.)
Options:
A. In the vendor account, attach an identity-based IAM policy allowing s3:GetObject and s3:ListBucket on the bucket ARN, without changing the bucket policy.
B. Add a bucket policy with Principal: "*" allowing s3:GetObject and s3:ListBucket, and disable S3 Block Public Access for the bucket so the vendor can read objects.
C. Add a bucket policy granting s3:GetObject and s3:ListBucket to a specific IAM role ARN from the vendor account; have the vendor assume that role in its own account.
D. Attach an SCP to the vendor account allowing s3:GetObject and s3:ListBucket on the bucket ARN, without modifying the bucket policy or creating any roles.
E. Create an IAM role in your account trusted by the vendor account, with permissions for s3:GetObject and s3:ListBucket on only that bucket, and have the vendor assume this role.

Correct answers: A, B, and D
Explanation: The three approaches to avoid all misuse the policy types relative to the requirement:
The choice that only attaches an IAM identity-based policy in the vendor account relies solely on the vendor’s identity policy. Because the bucket is in a different account and no resource-based policy or trusted role exists in your account, S3’s default cross-account deny applies, so the vendor still cannot access the bucket.
The choice that uses a bucket policy with Principal: "*" and disables S3 Block Public Access exposes the bucket publicly to the internet. This completely violates the requirement that access be limited to IAM principals in the vendor account only; it’s a clear security anti-pattern.
The choice that attaches an SCP to the vendor account and tries to “allow” S3 access misunderstands SCP behavior. SCPs never grant permissions; they just constrain what identity and resource policies can do. Without matching identity-based or resource-based permissions, the vendor will not obtain access to the bucket, so the requirement is not met.
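A sketch of the resource-based (bucket) policy pattern the safe options rely on. The account ID, role name, and bucket name below are placeholders, not values from the scenario:

```python
import json

# Sketch of a bucket policy that grants read-only access to one IAM role in
# the vendor's account. The account ID, role name, and bucket name are
# placeholders for illustration.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VendorReadOnly",
            "Effect": "Allow",
            # Scope the grant to a specific role in the vendor account,
            # never Principal "*" (which would make the bucket public).
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/VendorReadRole"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-prod-bucket",
                "arn:aws:s3:::example-prod-bucket/*",
            ],
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

Because this is a resource-based policy in the bucket owner's account, it satisfies S3's cross-account requirement that both sides permit the access; the vendor's identity policy alone cannot.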
Topic: Domain 4: Design Cost-Optimized Architectures
A company must store 200 TB of application log files for regulatory compliance for 10 years. The logs are rarely accessed, and auditors can wait up to 12 hours to retrieve data. The data must be highly durable, and minimizing monthly storage cost is the priority. Which Amazon S3 storage class is the most appropriate choice?
Options:
Best answer: C
Explanation: The option that uses Amazon S3 Glacier Deep Archive is correct because it aligns with all key requirements:
This combination of ultra-low cost and acceptable hours-level retrieval latency is the discriminating factor that makes S3 Glacier Deep Archive the best choice compared to other S3 classes listed.
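A quick comparison under assumed per-GB prices shows why Deep Archive dominates for this access pattern. The figures below are illustrative assumptions, not current published rates:

```python
# Illustrative monthly storage cost for 200 TB across S3 storage classes.
# Per-GB prices are assumptions for comparison, not current AWS rates.
GB = 200 * 1024  # 200 TB expressed in GB
prices = {       # $/GB-month (assumed)
    "S3 Standard": 0.023,
    "S3 Standard-IA": 0.0125,
    "S3 Glacier Flexible Retrieval": 0.0036,
    "S3 Glacier Deep Archive": 0.00099,
}
for storage_class, per_gb in prices.items():
    print(f"{storage_class}: ${GB * per_gb:,.2f}/month")
```

At roughly an order of magnitude below even Standard-IA, the hours-long retrieval window is the only trade-off, and the 12-hour audit SLA absorbs it.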
Topic: Domain 2: Design Resilient Architectures
Which THREE statements about designing a scalable and secure three-tier web application architecture on AWS are correct? (Select THREE.)
Options:
Correct answers: A, C, and E
Explanation: The statement about placing web servers in public subnets behind an ALB with app and database tiers in private subnets is correct because it reflects the canonical three-tier pattern: only the ALB is internet-facing, while compute and data layers are isolated.
The statement describing separate security groups for each tier is correct because it enforces least-privilege, tier-to-tier access (for example, ALB to web, web to app, app to database) and implements defense-in-depth around the data layer.
The statement about designing tiers to be stateless where possible and using Auto Scaling groups and managed database scaling mechanisms is correct because it captures the key scalability benefit of multi-tier designs: each layer can scale independently (out or up) without redesigning the entire stack.
Topic: Domain 4: Design Cost-Optimized Architectures
A company has 1,000 IoT sensors, each sending 1GB of data per day to an application in one AWS Region. Data transfer into the Region costs $0.09/GB. Using AWS IoT Greengrass, only 25% of data will be sent. What is the reduction in data transfer charges over 30 days, in dollars? Round to the nearest dollar.
Options:
Best answer: B
Explanation: The choice with a $2,025 reduction correctly applies the 75% savings to the original monthly cost. The original cost is 30,000GB × $0.09 = $2,700. Since edge processing causes only 25% of data to be sent, 75% of the transfer (and cost) is removed: $2,700 × 0.75 = $2,025 reduction.
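The explanation's arithmetic, spelled out step by step:

```python
# Reproduce the savings calculation from the explanation.
sensors = 1_000
gb_per_sensor_per_day = 1
days = 30
price_per_gb = 0.09

total_gb = sensors * gb_per_sensor_per_day * days   # 30,000 GB per month
original_cost = total_gb * price_per_gb             # $2,700 without Greengrass
reduction = original_cost * 0.75                    # 75% of transfer eliminated
print(f"Monthly reduction: ${reduction:,.0f}")
```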
Topic: Domain 3: Design High-Performing Architectures
Which TWO statements are true about designing Amazon QuickSight dashboards to surface operational metrics effectively for different stakeholders? (Select TWO.)
Options:
Correct answers: B and D
Explanation: The statement about using row-level security to publish one dashboard that shows different subsets of data to different users is correct because QuickSight supports row-level security mappings that control which records each user or group can see. This directly aligns with the goal of providing stakeholder-specific operational views without duplicating datasets or dashboards.
The statement about using SPICE for frequently queried operational data is also correct. SPICE stores data in-memory and is optimized for fast, concurrent access. For heavily used operational dashboards, this significantly improves user experience and reduces load on the underlying operational databases or log stores, which is a key aspect of a high-performing visualization solution.
Topic: Domain 3: Design High-Performing Architectures
Which TWO of the following statements about Amazon Kinesis Data Streams ordering and scaling behavior are true? (Select TWO.)
Options:
Correct answers: C and D
Explanation: The statement that records with the same partition key are written to the same shard and can be processed in strict order is correct because the partition key controls shard assignment, and Kinesis guarantees ordering within a shard via sequence numbers. This is how you maintain ordered processing for related events.
The statement that provisioned-capacity streams require manual resharding to change throughput is also correct. In this mode, Kinesis does not automatically scale shard count when traffic increases; you must explicitly split or merge shards to increase or decrease capacity. This aligns with how throughput is provisioned and managed in Kinesis Data Streams.
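The partition-key-to-shard mapping can be sketched in a few lines. The MD5-based routing mirrors how Kinesis maps keys into its 128-bit hash key space; the shard count and the perfectly even range split are simplifying assumptions:

```python
import hashlib

# Sketch of Kinesis shard assignment: the MD5 hash of the partition key
# selects a point in a 128-bit key space, and each shard owns a contiguous
# range of that space. A 4-shard stream with evenly split ranges is an
# assumed example.
SHARDS = 4
SPACE = 2 ** 128

def shard_for(partition_key: str) -> int:
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    # With evenly split ranges, the owning shard is the hash's range index.
    return h * SHARDS // SPACE

# The same device ID always hashes to the same shard, which is what
# preserves per-device ordering within that shard.
print({k: shard_for(k) for k in ["device-1", "device-2", "device-42"]})
```

This is also why resharding matters: splitting or merging shards redraws these hash ranges, which is the manual step provisioned-capacity streams require when throughput changes.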
Topic: Domain 3: Design High-Performing Architectures
Which of the following statements about Amazon MSK and Amazon Kinesis Data Streams is INCORRECT?
Options:
Best answer: D
Explanation: The incorrect choice states that Amazon MSK is generally simpler to operate than Amazon Kinesis Data Streams because AWS fully abstracts cluster capacity and scaling for MSK.
In reality, Kinesis Data Streams is usually the simpler, lower-operations option. AWS completely manages the service, and especially with on-demand capacity mode, users do not have to plan detailed capacity beyond basic limits. By contrast, with Amazon MSK, you still decide broker instance types, storage, and topic partitioning, and you may need to manage scaling events.
Because this choice reverses AWS guidance about which service has lower operational overhead, it is the one incorrect statement.
Topic: Domain 1: Design Secure Architectures
A company has separate security, dev, and prod AWS accounts. Requirements:
Which IAM access design meets these requirements? (Select THREE.)
Options:
Correct answers: B, D, and E
Explanation: ✔ Using IAM Identity Center groups mapped to permission sets and roles: This design cleanly maps human users into groups, then into permission sets that create IAM roles like SecurityAudit and AppAdmin. Security engineers get read-only log roles in each account, app teams get per-account admin roles, and a separate BreakGlassAdmin role exists per account. Customer-managed policies in permission sets minimize duplication and keep responsibilities separated.
✔ Federating directly from a corporate IdP into IAM roles with shared policies: Here, IdP groups map to IAM roles such as SecurityLogsReadOnly and AccountAdmin. A shared customer-managed policy handles log-read access across accounts, reducing duplication. Each app team’s group maps only to admin roles in its own account. A dedicated BreakGlassAdmin role per account with MFA and strict procedures provides the required emergency path.
✔ Centralizing logs and using cross-account log roles plus per-account admin and break-glass roles: Security engineers assume a LogsReadOnly role in the security account, which in turn assumes a LogsCollect role in workload accounts based on a shared log-read policy. App teams use per-account ApplicationAdmin roles, and each account has an explicit BreakGlassAdmin role with AdministratorAccess and mandatory MFA. This preserves least privilege, clear duty separation, and reusable policies.
Topic: Domain 3: Design High-Performing Architectures
A company must ingest telemetry from 50,000 IoT devices, up to 200,000 events per second. They need strict ordering per device and sub-second processing for alerts. The solution should be highly scalable and use managed streaming services. Which architectures should the solutions architect AVOID? (Select THREE.)
Options:
Correct answers: D, E, and F
Explanation: The architectures that should be avoided each violate a core requirement:
Topic: Domain 2: Design Resilient Architectures
An online learning platform runs identical HTTPS web stacks in three AWS Regions, each behind a regional Application Load Balancer. The company wants users automatically routed to the lowest-latency healthy Region. Caching and static anycast IP addresses are not required. Which AWS services are appropriate choices to meet these requirements? (Select TWO.)
Options:
Correct answers: D and E
Explanation: ✔ Amazon Route 53 latency-based routing records pointing to the regional ALBs: Route 53 can create latency-based DNS records for each Region’s ALB. For each client DNS query, Route 53 returns the ALB in the Region that offers the best latency, and it can use health checks to exclude unhealthy endpoints. This directly matches the requirement to route users to the lowest-latency healthy Region without involving caching.
Topic: Domain 4: Design Cost-Optimized Architectures
Which THREE of the following statements about cost and connectivity trade-offs between AWS Site-to-Site VPN, AWS Direct Connect, and internet-based access are INCORRECT? (Select THREE.)
Options:
Correct answers: B, C, and D
Explanation: The incorrect choices all misstate key trade-offs:
Topic: Domain 1: Design Secure Architectures
Which of the following statements about designing IAM and key policies for AWS KMS customer managed keys are INCORRECT and should NOT be followed as best practices? (Select THREE.)
Options:
Granting kms:* permissions on all keys in the account is recommended to avoid accidental access denial.

Correct answers: A, D, and E
Explanation: The statements that treat broad or risky permissions as best practice are incorrect:
The statement recommending kms:* on all keys is also incorrect. That level of access grants full administrative and usage rights across every key, far beyond what is needed, and directly conflicts with least-privilege design.

These three statements misrepresent KMS best practices and would introduce significant security and availability risks if followed.
Topic: Domain 3: Design High-Performing Architectures
An online retailer collects clickstream events from web clients at 20,000 events/second, with spikes up to 100,000 events/second. Two independent consumer applications must process each event within 5 seconds and be able to replay historical data. Which solution is MOST appropriate?
Options:
Best answer: D
Explanation: Using Amazon Kinesis Data Streams in on-demand capacity mode with separate consumers for each application best matches all requirements:
No other option simultaneously delivers high-throughput ingestion, low-latency processing, independent multi-consumer access, and straightforward replay.
Topic: Domain 1: Design Secure Architectures
A company runs many workloads in dozens of AWS accounts in a single AWS Organizations organization. Each account has one or more VPCs. The security team must:
Which THREE approaches meet these requirements using AWS managed services? (Select THREE.)
Options:
Correct answers: C, D, and E
Explanation: The correct answers all combine AWS Network Firewall with AWS Firewall Manager in patterns that satisfy every requirement:
Topic: Domain 2: Design Resilient Architectures
Which TWO statements below about correctly implementing immutable infrastructure patterns on AWS are true? (Select TWO.)
Options:
Correct answers: A and C
Explanation: The statement about replacing servers with new instances built from versioned golden images captures the core immutable idea: no in-place changes; all updates come from a new image version.
The statement about rollback via redeploying the previous image version describes the typical rollback strategy in immutable systems: switch traffic back to an earlier, stable image by redeploying it (for example, using blue/green or rolling strategies), which is faster and more consistent than restoring entire servers from backup snapshots.
Topic: Domain 3: Design High-Performing Architectures
Which statement correctly describes the effect of adding an Amazon ElastiCache cluster between an application and its Amazon RDS database to reduce latency?
Options:
Best answer: C
Explanation: The choice describing that ElastiCache reduces read latency by serving data from memory while requiring the application to handle possible stale data is accurate. ElastiCache is an in-memory, network-attached cache that accelerates reads, but it does not enforce strong synchronization with RDS. Consistency behavior is determined by how the application populates and invalidates cache entries, not by ElastiCache itself.
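A minimal cache-aside sketch, with plain dictionaries standing in for ElastiCache and RDS, shows why staleness handling falls to the application:

```python
import time

# Cache-aside sketch with in-memory stand-ins for ElastiCache and RDS.
# The application, not the cache, decides when entries expire, so readers
# can see stale data until the TTL lapses or the entry is invalidated.
db = {"user:1": "Alice"}                    # stands in for the RDS table
cache: dict[str, tuple[str, float]] = {}    # key -> (value, expiry timestamp)
TTL_SECONDS = 60.0

def get_user(key: str) -> str:
    entry = cache.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]                     # cache hit: served from memory
    value = db[key]                         # cache miss: read through to the DB
    cache[key] = (value, time.monotonic() + TTL_SECONDS)
    return value

print(get_user("user:1"))   # miss, loads "Alice" from the "database"
db["user:1"] = "Alicia"     # the database changes underneath the cache
print(get_user("user:1"))   # still "Alice" until expiry or invalidation
```

Writes that also delete or update the cached entry shrink this staleness window, but that logic lives in application code, not in ElastiCache.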
Topic: Domain 2: Design Resilient Architectures
A company has an image-processing pipeline where Amazon S3 sends events to an Amazon Kinesis Data Stream that is processed by AWS Lambda. Traffic is usually low but occasionally spikes. Ordering is not required. The team wants to reduce cost and operational overhead without reducing reliability. Which modification is MOST appropriate?
Options:
Best answer: B
Explanation: Replacing Kinesis Data Streams with an SQS standard queue configured as a Lambda event source mapping is best because it preserves asynchronous decoupling and at-least-once delivery while eliminating shard provisioning and resharding. SQS charges per request instead of per shard, typically reducing cost for low-traffic workloads, and the managed polling and batching from Lambda simplify operations and handle traffic spikes automatically.
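The cost intuition can be sketched with assumed prices: a provisioned shard bills every hour whether or not traffic arrives, while SQS bills per request:

```python
# Illustrative low-traffic cost comparison between one always-on Kinesis
# shard and per-request SQS billing. Prices are assumptions for the sketch.
SHARD_HOURLY = 0.015       # $/shard-hour (assumed)
SQS_PER_MILLION = 0.40     # $/million API requests (assumed)
HOURS = 730

messages_per_month = 2_000_000
# Rough SQS accounting: each message costs about three API requests
# (send, receive, delete).
requests = messages_per_month * 3

kinesis_cost = 1 * SHARD_HOURLY * HOURS
sqs_cost = requests / 1_000_000 * SQS_PER_MILLION
print(f"Kinesis (1 shard): ${kinesis_cost:.2f}/mo, SQS: ${sqs_cost:.2f}/mo")
```

The fixed shard-hour term is what makes Kinesis comparatively expensive when the pipeline is mostly idle.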
Topic: Domain 4: Design Cost-Optimized Architectures
A company migrated a read-heavy web application to EC2 with gp3 EBS volumes. Last month, AWS Cost Explorer shows a large spike in EBS gp3 charges. CloudWatch shows volume I/O never exceeds 800 IOPS and 60MiB/s; application latency is within SLOs. What is the most cost-effective fix?
Options:
Best answer: B
Explanation: The option that reconfigures the gp3 volumes to use only the baseline 3,000 IOPS and 125MiB/s directly targets the root cause: overprovisioned performance. Since actual usage peaks at 800 IOPS and 60MiB/s, the baseline is more than sufficient. Reducing provisioned IOPS/throughput to the included baseline removes unnecessary premium charges without impacting performance or changing the architecture.
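The billing shape of gp3 can be sketched under assumed per-unit prices: the first 3,000 IOPS and 125 MiB/s are included, and only capacity above those baselines is billed separately:

```python
# Sketch of the gp3 performance premium under assumed per-unit prices.
# Only IOPS and throughput above the included baseline are billed.
PRICE_EXTRA_IOPS = 0.005   # $/provisioned IOPS-month above baseline (assumed)
PRICE_EXTRA_TPUT = 0.04    # $/(MiB/s)-month above baseline (assumed)
BASE_IOPS, BASE_TPUT = 3_000, 125

def monthly_premium(iops: int, tput_mibs: int) -> float:
    return (max(iops - BASE_IOPS, 0) * PRICE_EXTRA_IOPS
            + max(tput_mibs - BASE_TPUT, 0) * PRICE_EXTRA_TPUT)

# Hypothetical overprovisioned volume vs. the baseline, which already
# covers the observed 800 IOPS / 60 MiB/s workload:
print(monthly_premium(16_000, 1_000))   # pays for unused headroom
print(monthly_premium(3_000, 125))      # baseline only: no premium
```

Dropping back to the baseline zeroes out the premium while still leaving well over 3x headroom above the observed peak.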
Topic: Domain 2: Design Resilient Architectures
A company runs a public web application in two AWS Regions. Each Region has instances in two Availability Zones and its own Amazon RDS database. Amazon Route 53 uses latency-based routing to send users to both Regions. About 95% of users are in one Region’s geographic area. The business requires 99.9% availability, an RPO of 1 hour, and an RTO of 4 hours for a full-Region failure. The operations team wants to reduce cost and simplify operations while still meeting all requirements. Which architectural change should a solutions architect make?
Options:
Best answer: B
Explanation: The option that uses one primary multi-AZ Region with a smaller standby stack and cross-Region RDS replica in a secondary Region, combined with a Route 53 failover routing policy, is best.
Topic: Domain 2: Design Resilient Architectures
A company is moving several workloads to AWS. The architecture team wants managed data services that, by default, store data redundantly across multiple Availability Zones in a Region, without requiring the team to configure replication or manage underlying servers. Which services meet these requirements? (Select THREE.)
Options:
Correct answers: A, B, and D
Explanation: ✔ Amazon DynamoDB standard tables
DynamoDB is a fully managed NoSQL database service. Tables are regional resources, and AWS automatically replicates data across multiple Availability Zones within the Region. There is no need for the customer to set up replication, patch servers, or manage storage. This directly satisfies the requirements of automatic multi-AZ redundancy and minimal operational overhead.
✔ Amazon S3 Standard storage class
Amazon S3 is fully managed object storage. The S3 Standard storage class redundantly stores data across multiple AZs by default in a Region. Customers only manage buckets and objects; they do not manage servers, disks, or replication within the Region. This provides high durability and availability with no extra configuration, matching the scenario.
✔ Amazon Elastic File System (EFS) Standard
EFS Standard is a regional, fully managed file system that automatically stores data redundantly across multiple Availability Zones in the Region. Customers mount the file system; AWS manages the underlying infrastructure, scaling, and multi-AZ redundancy. No replication configuration or server management is required, so it meets all stated requirements.
Topic: Domain 4: Design Cost-Optimized Architectures
A company uses multiple AWS accounts under a single AWS Organizations management account. Leadership wants to attribute all EC2, Lambda, and container compute costs to teams and applications with minimal manual effort. Which actions SHOULD THE ARCHITECT AVOID? (Select TWO.)
Options:
Team cost allocation tag and send notifications when monthly compute spend exceeds thresholds.

Team and Application, require them through tagging policies, and activate them as cost allocation tags in the management account.

Correct answers: C and E
Explanation: The option that shares long-lived IAM access keys for the management account with each team should be avoided because it violates the principle of least privilege and creates major security and governance risks. Teams should instead assume scoped roles or use dashboards and reports provided centrally.
The option that stops tagging Lambda functions and container tasks should be avoided because it prevents fine-grained cost allocation by team and application. Serverless and container workloads are a significant part of compute spend and must be tagged consistently so they appear correctly in Cost Explorer, Cost Categories, and AWS Budgets filters.
Topic: Domain 1: Design Secure Architectures
A company uses a 1 Gbps AWS Direct Connect link to connect its on-premises data center to multiple VPCs. All hybrid traffic must be encrypted, production and dev VPCs must be isolated, and network costs should stay reasonable. Which TWO design choices should the architect AVOID? (Select TWO.)
Options:
Correct answers: A and C
Explanation: The option that relies on the private Direct Connect link without any additional IPsec or MACsec encryption ignores the explicit requirement that all hybrid traffic must be encrypted. Direct Connect is a dedicated connection but not inherently encrypted, so using it without an overlay encryption solution is a security violation.
The option that provisions two additional 10 Gbps Direct Connect links only for a low-throughput dev/test VPC introduces significant, unnecessary cost. For nonproduction, low-bandwidth use cases, a standard internet-based Site-to-Site VPN is typically sufficient and far more cost-effective. This design therefore violates the cost-awareness aspect of the requirements.
Use this map after the sample questions to connect individual items to the AWS architecture trade-off decisions they test.
```mermaid
flowchart LR
    S1["Business and technical requirement"] --> S2
    S2["Design secure network and identity boundary"] --> S3
    S3["Select compute storage database and integration services"] --> S4
    S4["Apply reliability scalability and DR"] --> S5
    S5["Optimize cost and operations"] --> S6
    S6["Validate tradeoffs"]
```
| Cue | What to remember |
|---|---|
| Requirements first | Read for latency, availability, durability, RTO, RPO, compliance, budget, and operations constraints. |
| Service fit | Choose managed services when they reduce operational burden and meet requirements. |
| Security | Apply least privilege, encryption, private networking, logging, and segmentation. |
| Reliability | Use multi-AZ, backups, auto scaling, health checks, queues, and tested failover. |
| Cost | Right-size, use appropriate storage tiers, autoscale, reserve predictable usage, and avoid overbuilding. |