Try 65 free AWS CLF-C02 practice questions with explanations across the exam domains, then continue with full IT Mastery practice.
This free full-length practice exam includes 65 original IT Mastery questions spanning all four CLF-C02 domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: the question count on this page follows the full-length practice count maintained in the IT Mastery exam catalog. Certification vendors publish total questions, scored questions, duration, and unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the AWS CLF-C02 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Domain | Weight |
|---|---|
| Cloud Concepts | 24% |
| Security and Compliance | 30% |
| Cloud Technology and Services | 34% |
| Billing, Pricing, and Support | 12% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Security and Compliance
A security team uses AWS Identity and Access Management (IAM) Access Analyzer to review new IAM policies, identify unused or overly broad permissions, and adjust them before deployment. Which cloud security principle does this practice most directly support?
Options:
A. The principle of least privilege
B. The pay-as-you-go pricing model
C. The principle of global reach
D. The pillar of performance efficiency
Best answer: A
Explanation: IAM Access Analyzer is an AWS security tool that analyzes IAM policies to find resources that are shared outside your account or that have broader access than intended. Using it to remove unnecessary permissions ensures that identities and resources have only the access they truly need. This aligns directly with the security best practice of enforcing least privilege rather than granting wide, open-ended access.
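For readers who want to see what this looks like in practice, here is a minimal boto3 sketch of pulling Access Analyzer findings. It assumes an analyzer already exists in the account; the region is a placeholder.

```python
import boto3

# Minimal sketch, assuming an analyzer has already been created.
client = boto3.client("accessanalyzer", region_name="us-east-1")

# Look up the ARN of the first analyzer in the account.
analyzer_arn = client.list_analyzers()["analyzers"][0]["arn"]

# List findings, e.g. resources shared outside the account or with
# broader access than intended, so permissions can be tightened.
response = client.list_findings(analyzerArn=analyzer_arn)
for finding in response["findings"]:
    print(finding.get("resource"), finding["status"])
```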
Topic: Cloud Technology and Services
Which TWO of the following statements about using automation and repeatable deployment processes in AWS are INCORRECT? (Select TWO.)
Options:
A. Using different manual setup steps in each environment reduces the risk of deployment errors.
B. Version-controlled templates for infrastructure allow teams to review, track, and roll back changes to environments.
C. Automating deployments across development, test, and production helps keep configurations consistent between environments.
D. Repeatable deployment processes reduce the likelihood of human error affecting production releases.
E. Automated deployment pipelines help ensure the same steps are followed every time a new application version is released.
F. Infrastructure as code templates make it harder to recreate environments consistently, because each deployment is unique.
Correct answers: A and F
Explanation: Repeatable, automated deployment processes are a core benefit of operating in the AWS Cloud. By using tools such as infrastructure as code and deployment pipelines, teams can apply the same steps and configurations across development, test, and production environments. This consistency reduces human error, simplifies troubleshooting, and improves overall reliability of releases.
The incorrect statements suggest that manual, varied processes and avoiding infrastructure as code improve reliability, which is the opposite of modern best practices. In reality, automation and standardized templates are what help organizations achieve consistent, predictable environments and repeatable outcomes.
Topic: Security and Compliance
Which TWO are credential management best practices that help avoid hardcoding long‑term AWS access keys in application code? (Select TWO.)
Options:
A. Embed AWS access keys directly in the source code and rely on a private code repository for protection
B. Store sensitive values such as database passwords or API keys in AWS Secrets Manager and have the application retrieve them at runtime
C. Attach an IAM role to compute resources (such as Amazon EC2 instances or AWS Lambda functions) so the application uses temporary credentials provided by AWS
D. Place the AWS account root user access keys in a configuration file that is deployed with the application
E. Share a single set of long‑term access keys for an IAM user among all developers on the team
Correct answers: B and C
Explanation: Credential management best practices on AWS focus on avoiding long‑lived, hardcoded access keys and using managed mechanisms that provide temporary or centrally managed secrets. IAM roles let AWS handle temporary credentials automatically for compute services, while AWS Secrets Manager securely stores and rotates sensitive values so they are not embedded in code or configuration. Hardcoding or sharing long‑term access keys, especially for the root user, creates serious security risks and violates AWS recommended practices.
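A minimal sketch of the recommended pattern: no access keys appear anywhere in the code, and the secret name is a placeholder for illustration.

```python
import boto3

# On EC2 or Lambda, boto3 automatically uses the temporary credentials
# supplied by the attached IAM role; nothing is hardcoded here.
secrets = boto3.client("secretsmanager")

# "prod/app/db-password" is a hypothetical secret name.
secret = secrets.get_secret_value(SecretId="prod/app/db-password")
db_password = secret["SecretString"]
```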
Topic: Cloud Technology and Services
A research organization must keep 500 TB of experiment data for at least 10 years to meet legal requirements. The data is rarely accessed, and the organization can wait up to 24 hours for retrieval when it is needed, but it wants very high durability and the lowest possible storage cost. Which of the following solutions will meet these requirements? (Select TWO.)
Options:
A. Store the data in Amazon S3 Glacier Flexible Retrieval.
B. Store the data in Amazon S3 Standard.
C. Store the data in Amazon S3 Glacier Deep Archive.
D. Store the data on Amazon EBS volumes attached to stopped Amazon EC2 instances.
E. Store the data in Amazon S3 Intelligent-Tiering.
F. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
Correct answers: A and C
Explanation: This scenario describes data that must be retained for many years, is almost never accessed, and can tolerate long retrieval times, while still requiring very high durability at the lowest possible cost. These are classic archive requirements, which map to Amazon S3 Glacier storage classes rather than standard or infrequent-access classes. Both S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive provide S3-level durability and are optimized for long-term, rarely accessed data with slower retrieval, which fits the 24-hour retrieval window.
✔ Store the data in Amazon S3 Glacier Flexible Retrieval: Archive-optimized with high durability and retrieval in minutes to hours, suitable when multi-hour restores are acceptable.
✔ Store the data in Amazon S3 Glacier Deep Archive: The lowest-cost S3 storage class for long-term archives; standard retrievals typically complete within 12 hours, well inside the 24-hour window.
✖ Store the data in Amazon S3 Standard: Designed for frequently accessed data and more expensive than archive classes for long-term retention.
✖ Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA): Cheaper than S3 Standard but still not as cost-optimized as the S3 Glacier classes for multi-year, rarely accessed data.
✖ Store the data in Amazon S3 Intelligent-Tiering: Adds monitoring charges and is intended for changing or unknown access patterns, not for data that is known to be almost never accessed.
✖ Store the data on Amazon EBS volumes attached to stopped Amazon EC2 instances: EBS is block storage for running workloads and is significantly more expensive and less scalable than S3 for archive use cases.
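To make the archive workflow concrete, here is a minimal boto3 sketch; the bucket and key names are placeholders, not part of the scenario:

```python
import boto3

s3 = boto3.client("s3")

# Write an object directly into the lowest-cost archive class at upload.
s3.put_object(
    Bucket="example-research-archive",
    Key="experiments/run-001.dat",
    Body=b"...",
    StorageClass="DEEP_ARCHIVE",
)

# Later, request a temporary restore; standard Deep Archive retrievals
# typically complete within 12 hours.
s3.restore_object(
    Bucket="example-research-archive",
    Key="experiments/run-001.dat",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```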
Topic: Security and Compliance
A company’s security team recently noticed a spike in suspicious SSH connection attempts to its Amazon EC2 instances. They already collect VPC Flow Logs and AWS CloudTrail logs in Amazon S3 but have no automated threat analysis. Which AWS service should they enable to continuously detect and prioritize potential security threats from this activity?
Options:
A. Amazon CloudWatch
B. AWS CloudTrail
C. Amazon GuardDuty
D. AWS Security Hub
Best answer: C
Explanation: The symptom is a spike in suspicious SSH attempts to EC2 instances, and the company already collects network and API logs but lacks automated security analysis. Amazon GuardDuty is designed for this situation: it is a managed threat detection service that continuously analyzes VPC Flow Logs, CloudTrail logs, and other data for signs of reconnaissance, brute-force attacks, and other malicious activity, then generates actionable security findings. This allows the security team to detect and prioritize potential threats without building or maintaining their own detection tooling.
Topic: Security and Compliance
A company recently enabled AWS CloudTrail and VPC Flow Logs in a new AWS account. The security team wants to automatically analyze this telemetry to detect suspicious activity with minimal setup and without building custom analytics tools. Which action best meets these requirements?
Options:
A. Enable Amazon GuardDuty for the account to continuously analyze AWS CloudTrail, VPC Flow Logs, and DNS logs for threats.
B. Create an Amazon Athena table over the CloudTrail logs and have analysts run ad hoc SQL queries for unusual events.
C. Use AWS Config rules to track configuration changes and send an email whenever a rule becomes noncompliant.
D. Deploy an intrusion detection system (IDS) on Amazon EC2 instances and manually configure it to ingest all log files.
Best answer: A
Explanation: Amazon GuardDuty is a managed threat detection service that continuously analyzes AWS CloudTrail management events, VPC Flow Logs, and DNS query logs to identify potential threats such as account compromise or reconnaissance. It requires only that you enable the service; AWS handles the analytics, threat intelligence, and scaling. This makes it well suited for organizations that already collect logs but want automated security insights without building and operating their own analytics pipelines.
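A minimal boto3 sketch of how little setup GuardDuty requires; enabling it is a single call, and the rest runs on the AWS side:

```python
import boto3

guardduty = boto3.client("guardduty")

# One call enables GuardDuty; AWS then analyzes CloudTrail events,
# VPC Flow Logs, and DNS logs without any pipeline to build.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Findings (e.g. SSH brute-force attempts) can be listed per detector.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
```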
Topic: Cloud Technology and Services
A company hosts a web server on an Amazon EC2 instance in a public subnet of a VPC. Users on the internet can start connecting, but their browsers hang and then time out. The instance’s security group allows inbound HTTP from 0.0.0.0/0 and all outbound traffic. The subnet’s network ACL allows inbound HTTP but denies all outbound traffic. What should a network engineer do to resolve the issue while keeping the security group unchanged?
Options:
A. Create a new security group that denies all outbound traffic and attach it to the EC2 instance.
B. Modify the subnet’s route table to add a VPC peering connection route to another VPC.
C. Update the subnet’s network ACL to allow outbound return traffic so the web server can send responses to clients.
D. Attach an IAM role with broader permissions, such as AmazonS3FullAccess, to the EC2 instance.
Best answer: C
Explanation: The web server’s security group already allows inbound HTTP from the internet and all outbound traffic. However, the subnet’s network ACL allows inbound HTTP but denies all outbound traffic, which blocks the server’s responses to clients and causes timeouts.
Security groups are stateful and attached to individual resources like EC2 instances, so if inbound traffic is allowed, the responses are automatically allowed. Network ACLs are stateless and operate at the subnet level, so you must explicitly allow both inbound and outbound traffic. Updating the subnet’s network ACL to allow outbound return traffic resolves the issue without changing the security group.
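A sketch of the fix in boto3 terms; the network ACL ID is a placeholder. Because NACLs are stateless, responses to clients leave on ephemeral ports, which is why the outbound rule targets that range:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow outbound return traffic from the subnet so web responses
# can reach clients on the internet.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder
    RuleNumber=100,
    Protocol="6",                # TCP
    RuleAction="allow",
    Egress=True,                 # this is an outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},  # ephemeral port range
)
```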
Topic: Cloud Technology and Services
Which of the following statements about Amazon Kendra is NOT correct?
Options:
A. Amazon Kendra can help users quickly find relevant answers by returning ranked passages and links instead of simple keyword matches.
B. Amazon Kendra can index content from multiple data sources, such as file shares, intranet sites, and knowledge bases, to provide unified search results.
C. Amazon Kendra is primarily designed as a relational database engine for storing structured transactional data.
D. Amazon Kendra is an intelligent search service that uses machine learning to search across unstructured content such as documents and FAQs.
Best answer: C
Explanation: Amazon Kendra is an AWS managed, machine learning–powered intelligent search service. It is designed to search across unstructured data such as documents, FAQs, wikis, and web pages and return highly relevant answers to users’ questions. It is not a database engine and is not meant for storing structured transactional data.
Kendra can connect to multiple enterprise data sources and index their content so users can search from one place. It uses natural language processing and relevance ranking to surface the most useful passages and links, helping users quickly find answers in large document collections.
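As a rough illustration of the search-versus-database distinction, a minimal boto3 query sketch; the index ID and question are placeholders:

```python
import boto3

kendra = boto3.client("kendra")

# A natural-language query returns ranked passages and document
# links rather than raw keyword matches or table rows.
response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",  # placeholder
    QueryText="How do I request vacation time?",
)
for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
```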
Topic: Cloud Technology and Services
Which AWS service is specifically designed to let you define your infrastructure as code so you can repeatedly create consistent development, test, and production environments, reducing manual errors?
Options:
A. Amazon RDS
B. AWS CloudFormation
C. Amazon CloudFront
D. Amazon S3
Best answer: B
Explanation: AWS CloudFormation provides infrastructure as code, allowing you to define your AWS resources in templates and then create or update stacks from those templates. This means you can repeatedly provision development, test, and production environments in a consistent, automated way, which reduces manual setup errors and improves reliability. Services like Amazon S3, Amazon CloudFront, and Amazon RDS each address specific needs (storage, content delivery, databases) but do not orchestrate complete environment deployment from code templates.
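A minimal sketch of the idea: one parameterized template produces consistent stacks for every environment. All names are illustrative, and real S3 bucket names would need to be globally unique:

```python
import boto3

# A tiny template: one parameterized S3 bucket.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EnvName:
    Type: String
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'example-app-${EnvName}-artifacts'
"""

cloudformation = boto3.client("cloudformation")

# The same template yields matching dev, test, and prod environments.
for env in ["dev", "test", "prod"]:
    cloudformation.create_stack(
        StackName=f"example-app-{env}",
        TemplateBody=TEMPLATE,
        Parameters=[{"ParameterKey": "EnvName", "ParameterValue": env}],
    )
```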
Topic: Cloud Concepts
Which statement best describes how AWS managed services, such as Amazon RDS, reduce a customer’s operational burden?
Options:
A. Managed services primarily reduce network latency by placing resources closer to end users, but do not change operational tasks.
B. Customers are responsible for all operating system and database patching, while AWS only provides virtual hardware.
C. Using managed services means security controls are no longer needed because AWS fully secures everything by default.
D. AWS performs routine infrastructure tasks such as patching and backups for the service, so customers can focus on their applications and data.
Best answer: D
Explanation: AWS managed services, such as Amazon RDS, reduce operational burden by taking over routine infrastructure tasks like provisioning, patching, backups, and basic monitoring of the service. This allows customers to spend more time on application logic, data design, and business features instead of maintaining the underlying infrastructure. The shared responsibility model still applies, but the customer’s operational scope is smaller compared to self-managed deployments on raw compute resources.
Topic: Cloud Technology and Services
Which of the following statements about Amazon EC2 instance store and Amazon EBS is NOT correct?
Options:
A. Amazon EBS volumes can be backed up using snapshots, which are stored in Amazon S3 and help protect data against instance failure.
B. Data stored on an instance store volume is lost when the associated EC2 instance is stopped or terminated.
C. Amazon EBS volumes provide persistent block storage that can continue to exist even after the EC2 instance they are attached to is stopped or terminated.
D. Instance store volumes are network-attached storage that can be easily detached from one EC2 instance and reattached to another in the same Availability Zone.
Best answer: D
Explanation: Amazon EC2 instance store provides ephemeral block-level storage that is physically attached to the host server and lasts only for the lifetime of the instance. If the instance stops, terminates, or the underlying host fails, data on instance store is lost. Amazon EBS, in contrast, is network-attached block storage that persists independently of the EC2 instance and supports features like snapshots for backup.
The incorrect statement is the one that describes instance store as network-attached and easily detachable/reattachable between instances. That behavior matches Amazon EBS, not instance store. Understanding this distinction helps you choose the right storage type for temporary data versus long-lived, durable data.
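A short boto3 sketch of what EBS supports that instance store cannot; the volume and instance IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"  # placeholder

# EBS volumes persist independently of the instance: they can be
# snapshotted, detached, and reattached within the same AZ.
ec2.create_snapshot(VolumeId=volume_id, Description="pre-migration backup")

ec2.detach_volume(VolumeId=volume_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0fedcba9876543210",  # placeholder
    Device="/dev/sdf",
)
```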
Topic: Cloud Concepts
A company runs a stable, 24/7 web application on Amazon EC2, and its usage has changed very little over the last year. The company wants to reduce monthly compute costs by choosing appropriate AWS pricing options. Which approach is NOT an appropriate cost-optimization strategy for this usage pattern?
Options:
A. Purchase Reserved Instances that match the application’s long-term instance types and Regions to reduce hourly rates.
B. Identify any idle or underused instances, shut them down when not needed, and reserve capacity only for the steady baseline usage.
C. Commit to a 3-year Compute Savings Plan that covers the application’s predictable baseline compute usage.
D. Keep all instances running as On-Demand with no long-term commitment, because predictable usage means discounts are unnecessary.
Best answer: D
Explanation: Cloud economics encourages matching pricing models to usage patterns. For long-running, predictable workloads, commitment-based options such as Savings Plans or Reserved Instances typically reduce compute costs compared to always using On-Demand pricing. Rightsizing and turning off idle resources further avoid paying for unused capacity.
The choice that keeps everything On-Demand despite clear, stable usage fails to take advantage of available discounts and is therefore not an appropriate cost-optimization strategy. The other options all apply standard cost-optimization practices: using commitment-based discounts for predictable usage and reducing waste by eliminating idle resources.
Topic: Cloud Technology and Services
A company runs its web servers for an order-processing application on Amazon EC2, but the application connects over a VPN to a database that remains in the company’s on-premises data center. When the data center loses power, users can no longer place orders and managers ask why a “cloud application” is affected by an on-premises outage. What should a Cloud Practitioner explain about this deployment model?
Options:
A. The company is using an on-premises-only deployment model because the database is hosted in the data center.
B. The company is using a multi-cloud deployment model because it uses AWS and its own data center.
C. The company is using a purely cloud deployment model because all user-facing components run in AWS.
D. The company is using a hybrid deployment model because the application relies on both AWS and on-premises resources.
Best answer: D
Explanation: In this scenario, part of the application (the web servers) runs in AWS, while a critical dependency (the database) remains in the on-premises data center connected via VPN. When the data center fails, the whole application is impacted because it still depends on that on-premises component. A setup that combines AWS resources with on-premises infrastructure is called a hybrid deployment model. Explaining that this is a hybrid deployment clarifies why local outages still affect the “cloud” application.
Topic: Cloud Technology and Services
Which TWO statements correctly describe whether selected AWS services are global or Regional in scope? (Select TWO.)
Options:
A. Amazon S3 automatically stores every object in all AWS Regions, making it a fully global storage service by default.
B. Amazon Route 53 is a Regional service that must be set up separately in each AWS Region.
C. AWS Identity and Access Management (IAM) is a global service whose identities are not limited to a single Region.
D. Amazon EC2 is a Regional service, and instances run in specific Availability Zones within a Region.
E. AWS Regions are groups of edge locations that exist only to support Amazon CloudFront content delivery.
Correct answers: C and D
Explanation: AWS services can be either global (not tied to a specific Region) or Regional (resources exist in a particular Region and often an Availability Zone). IAM is a global service, so identities like users and roles apply across all Regions in the account. Amazon EC2 is Regional, because instances are created in specific Availability Zones inside a chosen Region. Understanding this distinction helps you plan where resources live and how they are managed.
Topic: Security and Compliance
A company stores sensitive customer reports in Amazon S3 using server-side encryption with a customer-managed key. A new analytics application is failing with AccessDenied errors when trying to decrypt these objects. The security team wants a managed way to create, control, and adjust access to the encryption keys so the app can read the data. Which AWS service should they use?
Options:
A. Use AWS Secrets Manager to store the S3 bucket name and give the application read access to that secret.
B. Use AWS CloudTrail to record the failed decryption attempts so the application can automatically retry.
C. Use AWS Key Management Service (AWS KMS) to centrally create and manage the customer-managed keys and grant the application permission to use them.
D. Use AWS Identity and Access Management (IAM) to create an inline policy that stores and encrypts all keys directly in IAM.
Best answer: C
Explanation: The symptom (AccessDenied when decrypting S3 objects) combined with the use of a customer-managed key indicates that the application’s identity does not have appropriate access to the KMS key. AWS Key Management Service (AWS KMS) is the managed service specifically designed for creating, storing, and controlling cryptographic keys used to protect data on AWS. By using AWS KMS, the security team can manage the customer-managed key and adjust permissions so the application can decrypt the S3 objects while maintaining centralized, auditable key control.
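A sketch of the kind of key policy change involved; the account ID, role name, and key ID are placeholders. The first statement keeps account administrators in control of the key, and the second grants the analytics application decrypt access:

```python
import json

import boto3

kms = boto3.client("kms")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableRootPermissions",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowAnalyticsAppDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/analytics-app"},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
    PolicyName="default",
    Policy=json.dumps(policy),
)
```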
Topic: Cloud Technology and Services
A company is designing a new VPC and wants to use subnets to improve security by separating public-facing and internal resources. Which TWO subnet design choices should the network team AVOID because they weaken security or segmentation? (Select TWO.)
Options:
A. Place all web, application, and database servers in a single large public subnet to keep the design simple.
B. Create separate subnets for development and production workloads so they can have different security controls.
C. Place internet-facing web servers in a public subnet that routes to an internet gateway, while placing database servers in a private subnet with no direct internet route.
D. Host an internal HR application server in a public subnet that allows HTTPS access from any IPv4 address on the internet.
E. Use a network ACL to allow inbound internet traffic only to the public subnet and restrict it from reaching private subnets.
Correct answers: A and D
Explanation: Subnets are used inside an Amazon VPC to logically separate resources that have different network and security requirements. A common pattern is to place internet-facing resources, such as public web servers, in public subnets with a route to an internet gateway, and to place internal resources, such as databases or internal applications, in private subnets without direct internet access. This segmentation reduces the attack surface and lets you apply different security controls at the subnet and instance levels. Designs that put sensitive internal systems directly in public subnets or mix all tiers together in a single public subnet weaken these benefits and increase security risk.
Topic: Billing, Pricing, and Support
Which AWS compute pricing option is generally the most cost-effective for an application that will run at a steady, predictable level 24 hours a day for the next year?
Options:
A. Compute Savings Plans with a 1-year commitment
B. Dedicated Hosts
C. Spot Instances with no commitment
D. On-Demand Instances
Best answer: A
Explanation: For workloads that run continuously at a predictable level, AWS offers discounts when you commit to a certain amount of usage. Compute Savings Plans are designed for this pattern: you commit to a consistent amount of compute usage for a 1-year or 3-year term and, in return, receive lower prices compared to On-Demand. Options like Spot Instances or Dedicated Hosts target different needs (deep discounts with interruption risk, or compliance/licensing) and are not the generally recommended cost-effective default for a standard, always-on application.
Topic: Cloud Concepts
A company runs a public web application only in an AWS Region in North America. Users in Europe and Asia report slow page loads, and the company wants to reduce latency for these users without requiring them to connect through the corporate network. Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. Deploy application stacks in additional AWS Regions that are geographically closer to major user groups.
B. Provision an AWS Direct Connect link from the company headquarters to the existing Region so that all internet users reach the application faster.
C. Deploy the application across multiple Availability Zones in the current Region to place resources closer to international users.
D. Configure Amazon CloudFront to cache static content at edge locations near users worldwide.
E. Use larger Amazon EC2 instance types in the existing Region to improve response times for distant users.
Correct answers: A and D
Explanation: AWS provides a global infrastructure of Regions and edge locations that lets you place your applications and content closer to users. Deploying workloads in multiple Regions near major user groups and using Amazon CloudFront to cache content at edge locations are common ways to reduce latency for global audiences. Simply making instances larger, adding Availability Zones in the same Region, or using private network links does not solve the fundamental distance problem for worldwide internet users.
Topic: Cloud Technology and Services
A company is launching a new web application on AWS. The web servers must be reachable from the internet, but the database must not be directly accessible from the internet while still allowing the web servers to connect to it. Which network design in Amazon VPC best meets these requirements?
Options:
A. Place both the web servers and the database in the same public subnet that has an internet gateway attached to the VPC.
B. Place the web servers in a public subnet with an internet gateway attached to the VPC, and place the database in a private subnet with no direct route to the internet.
C. Place the web servers in a private subnet and the database in a public subnet, and attach a NAT gateway to the private subnet.
D. Place the web servers and database in private subnets only, and use VPC peering to allow users on the internet to reach the web servers.
Best answer: B
Explanation: In an Amazon VPC, a public subnet is one that has a route to an internet gateway, allowing resources within it (such as web servers) to receive inbound traffic from the internet. A private subnet does not have a direct route to the internet, so resources like databases are shielded from direct internet access but can still be reached from other resources inside the VPC. By placing web servers in a public subnet and databases in a private subnet, you meet the requirement for public access to the web tier and restricted access to the database tier.
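A minimal boto3 sketch of this two-tier layout; the CIDR ranges are illustrative. Note that only the public subnet's route table gets a route to the internet gateway:

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# Internet gateway plus a route table associated only with the public
# subnet; the private subnet keeps no direct route to the internet.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc_id)
rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
ec2.associate_route_table(RouteTableId=rt, SubnetId=public)
```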
Topic: Cloud Concepts
Which statement best explains how the AWS Cloud enables rapid experimentation and innovation for organizations?
Options:
A. It allows teams to quickly provision and deprovision resources on demand with low upfront costs, so they can test and discard prototypes easily.
B. It limits the ability to delete test environments, encouraging teams to reuse the same infrastructure for every experiment.
C. It requires customers to purchase servers in advance, ensuring they fully own all infrastructure used for experiments.
D. It enforces multi-year infrastructure contracts so teams are committed to using the same resources for all experiments.
Best answer: A
Explanation: The AWS Cloud supports innovation by letting organizations provision compute, storage, and other resources in minutes, pay only for what they use, and then shut down resources when experiments are complete. This removes the need for large capital purchases and lengthy procurement cycles before testing new ideas. As a result, teams can iterate quickly, trying many prototypes and discarding those that do not work with minimal financial risk.
Topic: Cloud Concepts
A company plans to move several critical applications from its on-premises data center to AWS. It performs a migration readiness assessment involving stakeholder interviews and workshops. Which of the following statements about the purpose of this assessment is INCORRECT?
Options:
A. It provides a structured view of the organization’s current cloud readiness and a prioritized roadmap of improvements needed for successful migration.
B. It is mainly used to select exact Amazon EC2 instance types and storage sizes for each workload to migrate.
C. It helps identify gaps in cloud skills, operating processes, and governance that could slow down or block the migration.
D. It helps align business and technical stakeholders on migration objectives, expected benefits, and the changes required to operate in the cloud.
Best answer: B
Explanation: A migration readiness assessment is a high-level activity that helps an organization understand how prepared it is to adopt the AWS Cloud. It focuses on people, processes, and governance: where skills are missing, which processes need to change, and what governance or operating model updates are required. The output is typically a clear picture of current readiness, agreed priorities, and a roadmap of improvements, not low-level infrastructure design. Detailed decisions such as exact EC2 instance types or storage sizes come later, during migration planning and implementation.
The statement that the assessment is mainly used to select exact EC2 instance types and storage sizes confuses readiness assessment with technical sizing work. This makes it the incorrect description of what migration readiness assessments are for.
Topic: Cloud Concepts
A retail company runs its e-commerce website on AWS. Traffic is low most of the year but spikes sharply during holiday sales. The company wants to minimize costs while handling these peaks. Which approach is the LEAST appropriate because it does NOT take advantage of AWS Cloud elasticity and pay-as-you-go pricing?
Options:
A. Use AWS Lambda for certain request-processing tasks so that compute capacity scales automatically with the number of incoming requests.
B. Configure an Auto Scaling group to automatically add and remove Amazon EC2 instances based on CPU utilization thresholds.
C. Launch enough Amazon EC2 instances to handle peak holiday traffic and keep all of them running at full capacity all year, even when traffic is low.
D. Deploy Amazon CloudFront in front of the website to cache content closer to users and reduce load on the origin servers.
Best answer: C
Explanation: A key benefit of the AWS Cloud is elasticity, which allows you to automatically scale resources up and down as demand changes. This works together with pay-as-you-go pricing, where you pay primarily for the capacity you actually use instead of maintaining large, idle infrastructure. Keeping peak-level capacity running all year locks in high, mostly unnecessary costs and behaves like traditional fixed-capacity hardware rather than a cloud-optimized solution.
Topic: Cloud Concepts
Which of the following statements about how automation in AWS can help reduce costs is NOT correct?
Options:
A. Automating regular backups can reduce operational effort and the financial impact of data loss by ensuring important data is consistently protected without manual work.
B. Using Auto Scaling to automatically add and remove Amazon EC2 instances based on demand can help avoid paying for unused capacity.
C. Scheduling non-production Amazon EC2 instances to stop outside business hours can reduce compute charges by turning off resources when they are not needed.
D. Once automation is enabled, AWS automatically optimizes all resources for the lowest possible cost, so customers no longer need to monitor usage or review their bills.
Best answer: D
Explanation: Automation in AWS, such as Auto Scaling, scheduled start/stop of resources, and automated backups, helps organizations reduce waste and improve efficiency. These tools let you match capacity more closely to actual demand, avoid paying for idle resources, and reduce manual operational work.
However, automation does not mean that AWS fully manages or optimizes all costs on your behalf. Customers still need to design appropriate policies, monitor usage, and regularly review billing information to ensure they are using resources efficiently and taking advantage of the right pricing models.
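As one concrete example of cost-saving automation, here is a sketch of a Lambda-style handler that stops running non-production instances, which could be triggered on a schedule (for example by an EventBridge rule). The Environment=dev tag is an illustrative convention:

```python
import boto3

def handler(event, context):
    """Stop running instances tagged Environment=dev (illustrative tag)."""
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```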
Topic: Billing, Pricing, and Support
A company’s operations team currently checks the public Service health view of the AWS Health Dashboard to see if any AWS services are having issues. They now want a tool that shows only AWS incidents that impact their own AWS resources in their account. Which option best meets this requirement?
Options:
A. Use the AWS Health Dashboard – Service health view to see overall AWS service status.
B. Use the AWS Health Dashboard – Your account health view to see account-specific health events.
C. Use the AWS Trusted Advisor console to review account-specific operational events.
D. Create Amazon CloudWatch dashboards to monitor service health for all AWS customers.
Best answer: B
Explanation: The key requirement is to move from general, public AWS service status to account-specific health events that show which AWS incidents actually affect the company’s own resources. The AWS Health Dashboard includes a Your account health view that lists ongoing and historical events that impact that specific AWS account, such as service disruptions or maintenance affecting particular Regions or services. This directly addresses the need to highlight only relevant incidents instead of all global outages.
Topic: Cloud Technology and Services
A manufacturing company plans to connect thousands of factory sensors to AWS. They want a fully managed service that can securely connect and manage IoT devices and route their telemetry messages to other AWS services. Which AWS service should they use?
Options:
A. Amazon Kinesis Data Streams
B. AWS Lambda
C. Amazon Simple Notification Service (Amazon SNS)
D. AWS IoT Core
Best answer: D
Explanation: The key requirement in the scenario is a fully managed service that securely connects and manages IoT devices while routing their telemetry to other AWS services. AWS IoT Core is built for exactly this purpose: it supports secure device connectivity, device identity management, and messaging using IoT protocols such as MQTT, and it integrates natively with services like Amazon S3, Amazon Kinesis, and AWS Lambda.
Other services mentioned either handle messaging or compute, but they do not provide end-to-end IoT device connectivity and management. At the Cloud Practitioner level, whenever the scenario emphasizes connecting and managing large numbers of physical devices or sensors to AWS, AWS IoT Core is the appropriate choice.
Topic: Security and Compliance
A security analyst is choosing between key types for encrypting data in a new application. The team wants to minimize key configuration work, even if that means having less direct control over key settings.
Based on the following table, which choice best meets this requirement?
| Key type | Example | Configuration effort | Customer control |
|---|---|---|---|
| AWS-managed KMS key | aws/s3 | Low (AWS configures and rotates) | Limited (fewer settings to change) |
| Customer-managed KMS key | project-data-key | Higher (customer defines policies, rotation) | More (fine-grained settings) |
Which statement is most appropriate for the team’s goal?
Options:
A. Choose an AWS-managed KMS key because it has low configuration effort and provides limited direct control, as shown in the table.
B. Choose a customer-managed KMS key because it reduces configuration effort by having AWS manage key policies.
C. Choose a customer-managed KMS key because it removes customer responsibility for key rotation and configuration.
D. Choose an AWS-managed KMS key because it gives the most fine-grained control over key policies and rotation.
Best answer: A
Explanation: The table in the exhibit compares AWS-managed KMS keys and customer-managed KMS keys in terms of configuration effort and customer control. AWS-managed KMS keys have low configuration effort because AWS takes care of configuration and rotation, but they offer limited customer control. Customer-managed KMS keys require higher configuration effort because the customer defines key policies and rotation, but they provide more fine-grained control.
Because the team’s goal is to minimize key configuration work and they are willing to accept less direct control, the option that explicitly chooses an AWS-managed KMS key for its low configuration effort and limited control best matches the requirement. This aligns with the high-level difference: AWS-managed keys trade control for simplicity, while customer-managed keys trade simplicity for control and additional responsibility.
Topic: Billing, Pricing, and Support
Which of the following statements about AWS cost allocation tags is NOT correct?
Options:
A. AWS cost allocation tags automatically enforce spending limits for each tag, preventing tagged resources from exceeding the budget.
B. Both user-defined tags and AWS-generated tags can be used as cost allocation tags after they are activated for cost allocation.
C. AWS cost allocation tags can be used to categorize costs by project, department, or environment for reporting and chargeback.
D. To see tagged costs in the AWS Cost and Usage Report or Cost Explorer, you must activate the relevant cost allocation tags in the Billing and Cost Management console.
Best answer: A
Explanation: AWS cost allocation tags are a billing and cost management feature that let organizations label resources with business-related attributes, such as project, cost center, or environment. These tags are then used in cost reports and tools like Cost Explorer to understand and allocate spend. While tags are powerful for visibility and chargeback/showback, they do not directly enforce budgets or stop over-spending; that function is provided by tools such as AWS Budgets and other governance controls.
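A sketch of how an activated tag shows up in cost reporting via the Cost Explorer API; the tag key and dates are illustrative, and the tag must already be activated in the Billing console:

```python
import boto3

ce = boto3.client("ce")

# Group one month of spend by the values of a "project" cost
# allocation tag (hypothetical tag key).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "project"}],
)
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```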
Topic: Cloud Concepts
A company reviews its AWS bill and discovers that many workloads use much less capacity than their current configurations provide. The finance team asks the cloud engineer to apply rightsizing to reduce monthly costs. Which of the following actions is NOT an example of rightsizing for cost optimization?
Options:
A. Replace all current EC2 instances with larger instance types so average CPU utilization stays below 5%, even for lightly used workloads.
B. Use Amazon CloudWatch metrics to identify EC2 instances with very low CPU utilization and move them to smaller instance types.
C. Move backup data that is rarely accessed from S3 Standard to lower-cost S3 Glacier Deep Archive storage classes.
D. Stop development and test EC2 instances outside business hours when they are not being used.
Best answer: A
Explanation: Rightsizing is the practice of matching resource size and configuration to actual usage so that you are not paying for more capacity or performance than you need. This often involves downsizing underutilized compute resources, choosing more appropriate storage classes, or turning off resources when they are idle. Increasing capacity far beyond what workloads require leads to overprovisioning and higher costs, which goes against cloud economics principles such as pay-as-you-go and cost optimization.
Topic: Security and Compliance
What is the primary purpose of AWS Artifact for customers?
Options:
A. To continuously scan customer AWS resources for misconfigurations and compliance drift
B. To aggregate security findings from multiple AWS services into a single dashboard
C. To provide best practice checks and real-time cost optimization recommendations
D. To provide on-demand access to AWS compliance reports and security-related agreements
Best answer: D
Explanation: AWS Artifact is a self-service, on-demand portal that gives customers access to AWS compliance reports and security and compliance-related agreements. Rather than scanning your environment, it documents how AWS itself complies with various standards, helping customers with their own audits and regulatory requirements. Customers can download reports like SOC or ISO certifications and review or accept standardized agreements directly in the console.
Topic: Cloud Concepts
A company wants to reduce its AWS bill by using features such as auto scaling to match capacity to demand, scheduled start and stop times for development instances, and automated lifecycle policies that move old backups to lower-cost storage and delete them when no longer needed. Which AWS Well-Architected pillar is most directly supported by these practices?
Options:
A. Operational Excellence
B. Performance Efficiency
C. Reliability
D. Cost Optimization
Best answer: D
Explanation: The practices described—auto scaling, scheduled start/stop for non-production instances, and automated lifecycle policies that move or delete old backups—are all about reducing unnecessary resource usage and aligning spending with actual demand. This is the essence of the AWS Well-Architected Cost Optimization pillar.
Auto scaling helps avoid overprovisioning by automatically adding capacity when needed and removing it when demand drops. Scheduling non-production resources to be off outside working hours cuts costs by eliminating idle usage. Lifecycle policies that move old backups to cheaper storage classes or delete data when it is no longer required reduce storage spend over time. Together, these are examples of using automation in AWS to implement cloud economics principles such as pay-as-you-go and eliminating waste.
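A minimal boto3 sketch of such a lifecycle policy; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Backups move to Deep Archive after 90 days and are deleted after
# roughly 10 years (3650 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```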
Topic: Cloud Technology and Services
Which of the following statements about Amazon S3 storage classes are NOT accurate? (Select TWO.)
Options:
A. S3 Intelligent-Tiering can automatically move objects between access tiers to help optimize storage costs as access patterns change.
B. S3 Standard is appropriate for frequently accessed data that needs low latency and high durability across multiple Availability Zones.
C. S3 Glacier Flexible Retrieval is designed for sub-second access to frequently used data with no retrieval charges.
D. S3 Glacier Deep Archive is intended for long-term data that is accessed rarely and offers the lowest storage cost but the slowest retrieval times.
E. S3 Standard-IA is designed for infrequently accessed data and has lower storage cost than S3 Standard but adds per-GB retrieval charges.
F. S3 One Zone-IA stores data redundantly across multiple Availability Zones, providing higher resilience than S3 Standard.
Correct answers: C and F
Explanation: Amazon S3 offers multiple storage classes that balance cost, durability, and access patterns. Classes such as S3 Standard and S3 Standard-IA are multi-AZ and differ mainly by access frequency and pricing model, while archival classes like S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive trade much slower access times for very low storage cost. S3 Intelligent-Tiering adds automation by moving objects across tiers based on how often they are used, helping to optimize cost without needing detailed manual management.
Understanding which classes are multi-AZ, which are single-AZ, and which are archival with slow retrieval is key to selecting the right option and avoiding unsafe assumptions about durability and performance.
Topic: Cloud Concepts
A company runs its public website on two fixed-size Amazon EC2 instances behind an Application Load Balancer. Traffic is very spiky: during marketing campaigns, users see slow responses, and during quiet periods the servers are mostly idle. The company wants the application to automatically adjust capacity based on demand and avoid paying for unused compute, without a major redesign. Which change best meets these goals?
Options:
A. Move the application to a multi-Region active-active architecture, with a fixed number of EC2 instances in each Region.
B. Place the EC2 instances in an Auto Scaling group that automatically adds or removes instances based on utilization metrics.
C. Add several more EC2 instances and register them permanently with the load balancer to handle spikes.
D. Upgrade the existing EC2 instances to a larger instance type to handle peak traffic at all times.
Best answer: B
Explanation: Elasticity in the AWS Cloud means automatically matching the amount of resources to the current level of demand. When workloads are spiky or unpredictable, elastic services and features can scale out during busy times to maintain performance, and scale in during quiet times to reduce cost.
In this scenario, the company wants both automatic adjustment of capacity and reduced spending on idle resources, without a major redesign of the application. Using an Auto Scaling group for the existing EC2-based web tier directly addresses these goals by adding instances when utilization is high and removing them when utilization falls. This delivers the benefit of elasticity that is a key advantage of the AWS Cloud.
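A sketch of one common way to express this: a target tracking policy on the Auto Scaling group. The group name and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average CPU near 50%, scaling out under load
# and back in when traffic drops, with no manual intervention.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # placeholder group name
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```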
Topic: Billing, Pricing, and Support
A startup is designing a new web application on AWS and wants to understand its expected monthly charges before launching. The team plans to use several services such as Amazon EC2, Amazon RDS, and Amazon S3 and needs a detailed cost estimate for different configuration options. Which AWS tool should the team use to accomplish this goal?
Options:
A. AWS Budgets
B. AWS Billing and Cost Management console (Bills page)
C. AWS Cost Explorer
D. AWS Pricing Calculator
Best answer: D
Explanation: AWS Pricing Calculator is the primary AWS tool for estimating the cost of AWS services before they are deployed. It allows you to select services like Amazon EC2, Amazon RDS, and Amazon S3, choose specific configurations, and see an estimated monthly cost. This makes it well suited for planning and comparing different architectures or sizing options before committing to a design. Other cost tools focus more on tracking, analyzing, or controlling costs after resources already exist.
Topic: Cloud Concepts
Which of the following are ways that migrating IT workloads to the AWS Cloud can help an organization improve its environmental, social, and governance (ESG) performance? (Select TWO.)
Options:
A. Aligning costs with actual usage, which can encourage responsible resource use and help avoid running idle servers.
B. Ensuring that IT operations no longer need any internal governance or security controls because AWS provides them by default.
C. Reducing overall energy consumption by using AWS data centers that are typically more energy efficient than many on-premises facilities.
D. Guaranteeing that all workloads instantly produce zero carbon emissions, regardless of how they are designed or used.
E. Completely eliminating the organization’s responsibility for data privacy compliance because AWS manages all regulations.
Correct answers: A and C
Explanation: Migrating to the AWS Cloud can support ESG goals by improving energy efficiency and encouraging more responsible use of computing resources. AWS data centers are typically more energy efficient and more highly utilized than traditional on-premises data centers, which can reduce overall energy consumption and environmental impact. In addition, pay-as-you-go pricing and the ability to scale resources up and down make it easier to avoid idle servers and overprovisioning, supporting responsible resource use. However, migration does not remove customer responsibilities for privacy, governance, or security, nor does it automatically make all workloads zero carbon.
Topic: Billing, Pricing, and Support
When is the Amazon EC2 On-Demand Instances pricing model the most appropriate choice?
Options:
A. When running a new application with unpredictable usage patterns and a need to avoid long-term commitments
B. When running a batch job that can be interrupted at any time to minimize cost
C. When operating a database that will run at a constant load 24/7 for the next three years
D. When a workload requires dedicated physical servers for regulatory compliance and hardware isolation
Best answer: A
Explanation: Amazon EC2 On-Demand Instances let you pay for compute capacity by the hour or second with no long-term commitments, making them ideal for short-term, spiky, or unpredictable workloads. This flexibility allows you to start and stop instances as needed without planning long-term usage. Workloads that are interruptible, require dedicated hardware, or have very predictable long-term usage often benefit more from Spot, Dedicated, or Reserved options instead of pure On-Demand pricing.
Topic: Security and Compliance
Which of the following is NOT an example of a security responsibility that AWS manages under the AWS shared responsibility model?
Options:
A. Maintaining and patching the virtualization layer (hypervisor) that runs customer instances
B. Controlling physical access to AWS data center facilities
C. Configuring security groups and network ACLs for each customer’s Amazon VPC
D. Protecting the global network infrastructure that connects AWS data centers
Best answer: C
Explanation: Under the AWS shared responsibility model, AWS is responsible for security of the cloud, which includes the physical facilities, hardware, hypervisor, and global networking infrastructure that run AWS services. Customers are responsible for security in the cloud, such as configuring identity and access controls, network settings, and application-level protections within their AWS accounts. Because of this, AWS does not configure account-specific controls like security groups or network ACLs for customers; those are customer responsibilities.
Topic: Cloud Technology and Services
Which AWS service is specifically designed to provide push-based publish/subscribe (pub/sub) notifications to multiple subscribers, such as email, SMS, or HTTP endpoints, when an event occurs?
Options:
A. Amazon EventBridge
B. Amazon Simple Queue Service (Amazon SQS)
C. Amazon Simple Notification Service (Amazon SNS)
D. AWS Step Functions
Best answer: C
Explanation: Amazon Simple Notification Service (Amazon SNS) is the AWS service designed for push-based publish/subscribe messaging. With SNS, publishers send messages to a topic, and SNS immediately pushes those messages to all subscribed endpoints, such as email addresses, SMS phone numbers, or HTTP/S endpoints. This makes SNS ideal for alerts and fan-out notifications when events occur.
The other services listed handle messaging and integration in different ways (queues, event routing, or workflow orchestration) but are not intended for direct push notifications to multiple subscribers over common channels like email or SMS.
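A minimal boto3 sketch of the pub/sub fan-out pattern; the topic name and endpoints are placeholders:

```python
import boto3

sns = boto3.client("sns")

# One topic, two subscribers; a single publish is pushed to both.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="https",
              Endpoint="https://example.com/hook")
sns.publish(TopicArn=topic_arn, Subject="Order placed",
            Message="Order 123 received")
```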
Topic: Security and Compliance
Which of the following statements about AWS Identity and Access Management (IAM) users and federated identities is NOT correct?
Options:
A. With identity federation, users can sign in with their existing corporate credentials and receive temporary AWS permissions without needing dedicated IAM users.
B. IAM users are identities that are created and managed directly in AWS IAM and exist only within an AWS account.
C. Federated identities require an external identity provider, such as a corporate directory or a third-party IdP, to authenticate users before they access AWS.
D. Federated identities are implemented by creating permanent IAM users for each external user and synchronizing their passwords into AWS.
Best answer: D
Explanation: AWS IAM users are native identities that are created and fully managed within an AWS account using IAM. In contrast, federated identities rely on an external identity provider, such as an organization’s directory or a third-party IdP, to authenticate users and then grant them temporary access to AWS resources via roles. Federation avoids the need to create and manage a separate permanent IAM user for each external user.
Topic: Security and Compliance
Which of the following statements about AWS access management approaches is NOT correct?
Options:
A. The recommended way for applications running on Amazon EC2 to access AWS services is to store long-term access keys in configuration files on the instance.
B. IAM roles can be used to delegate temporary permissions to resources in the same AWS account or in another AWS account.
C. IAM users are best for long-term credentials tied to a single person or application that needs direct access to an AWS account.
D. AWS IAM Identity Center (AWS Single Sign-On) lets you centrally manage workforce access to multiple AWS accounts using existing identities from a corporate directory.
Best answer: A
Explanation: The unsafe statement is the one that recommends storing long-term access keys in configuration files on an EC2 instance. AWS best practices strongly discourage embedding permanent credentials in code or on servers, because they are harder to rotate, easier to leak, and violate the principle of least privilege.
Instead, applications on Amazon EC2 should use an IAM role attached to the instance so that AWS automatically provides short-term, rotated credentials. IAM users are intended for long-lived identities such as individual people or specific applications, and AWS IAM Identity Center is used to centrally manage workforce access across multiple AWS accounts, often integrating with an existing corporate identity provider. IAM roles are also used to delegate temporary permissions within an account or across accounts without sharing long-term secrets.
Understanding when to use IAM users, IAM roles, and IAM Identity Center helps you design access patterns that are more secure, easier to manage, and aligned with least privilege principles.
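The role-based pattern is visible in how little the application code has to do. In this sketch, no credentials appear anywhere; the bucket name is a placeholder:

```python
import boto3

# When this runs on an EC2 instance with an IAM role attached, boto3's
# default credential chain automatically fetches short-lived credentials
# from the instance metadata service. No keys live in code or config.
s3 = boto3.client("s3")
s3.list_objects_v2(Bucket="example-app-bucket")  # placeholder bucket
```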
Topic: Cloud Concepts
An online retailer runs a web application that usually has low traffic but experiences sudden, unpredictable spikes during flash sales. The company wants to meet demand during spikes while avoiding paying for unused capacity during quiet periods, with minimal manual effort. Which approach best uses the elasticity of the AWS Cloud to meet this goal?
Options:
A. Purchase long-term Reserved Instances for enough Amazon EC2 capacity to handle peak load and run this capacity continuously.
B. Run the application on Amazon EC2 instances in an Auto Scaling group that automatically adds or removes instances based on traffic.
C. Run a fixed fleet of Amazon EC2 instances sized for the highest possible traffic level at all times.
D. Manually launch additional Amazon EC2 instances before each flash sale and stop them afterward.
Best answer: B
Explanation: Elasticity in the AWS Cloud means the ability to automatically increase and decrease resources to match current demand. For an application with mostly low traffic and occasional unpredictable spikes, the most cost-effective approach is to scale out when needed and scale back in afterward without manual effort.
Using an Auto Scaling group for Amazon EC2 lets AWS add instances during flash sales and remove them when traffic returns to normal. This way, the retailer pays for extra capacity only during busy periods and avoids overprovisioning at other times.
Topic: Security and Compliance
A company runs a public-facing web application on AWS and wants to improve its security using managed AWS security services. Which of the following actions is NOT an appropriate security best practice?
Options:
A. Enable AWS WAF on the Application Load Balancer to filter and block common web exploits.
B. Aggregate security findings from services such as GuardDuty and AWS WAF into AWS Security Hub for centralized visibility.
C. Turn on Amazon GuardDuty in the AWS account to continuously monitor for malicious activity and unauthorized behavior.
D. Attach the AdministratorAccess managed policy to all developer IAM users so they can address any security issues without restrictions.
Best answer: D
Explanation: The scenario asks for the one action that is not a security best practice. AWS recommends using managed security services such as AWS WAF, Amazon GuardDuty, and AWS Security Hub to protect applications and monitor for threats. However, these tools must be combined with strong identity and access management practices, particularly the principle of least privilege.
Granting broad, unrestricted administrative permissions to many users creates unnecessary risk. Instead, IAM users and roles should be given only the permissions they need to perform specific tasks, which limits the impact of accidental changes or compromised credentials.
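As a sketch of the contrast, a least-privilege policy might grant read access to a single bucket rather than the unrestricted access of AdministratorAccess; the bucket and policy names here are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Least privilege: read-only access to one named bucket, instead of the
# "Action": "*", "Resource": "*" that AdministratorAccess effectively grants.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-app-logs",
            "arn:aws:s3:::example-app-logs/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="app-logs-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```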
Topic: Security and Compliance
Which TWO of the following statements about the AWS shared responsibility model are INCORRECT and could lead to security gaps if believed? (Select TWO.)
Options:
A. Customers are responsible for creating and managing AWS Identity and Access Management (IAM) policies that control who can access their AWS resources.
B. AWS is responsible for securing the physical facilities, hardware, and networking that run AWS Regions and Availability Zones.
C. For Amazon EC2 instances, customers are responsible for managing the guest operating system, including OS-level patches, firewall rules, and application configuration.
D. After enabling default encryption for an Amazon RDS database, AWS is responsible for managing all database user accounts and permissions inside the database engine.
E. For managed services such as Amazon S3, AWS is fully responsible for configuring bucket access permissions on behalf of customers.
Correct answers: D and E
Explanation: The AWS shared responsibility model splits duties between AWS and the customer. AWS secures the underlying cloud infrastructure, such as data centers, hardware, and core networking. Customers are responsible for security “in” the cloud, which includes configuring access controls, managing identities, and securing the operating systems and applications they run on AWS services. Misunderstanding this split can leave resources misconfigured and vulnerable.
Topic: Cloud Technology and Services
Which AWS service is specifically designed as an in-memory data store to cache frequently accessed data and improve application performance?
Options:
A. Amazon RDS
B. Amazon DynamoDB
C. Amazon S3
D. Amazon ElastiCache
Best answer: D
Explanation: Amazon ElastiCache is an in-memory data store and caching service that keeps frequently accessed data in memory so applications can retrieve it with microsecond latency. This reduces the number of requests that hit the underlying databases or storage, improving performance and scalability. It is commonly used to cache query results, session data, and other hot data.
Other services like Amazon RDS, Amazon DynamoDB, and Amazon S3 are primary data stores, not dedicated caching layers, and typically have higher latency than an in-memory cache.
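The typical usage pattern is cache-aside, shown in this sketch; it assumes a Redis-compatible ElastiCache endpoint and the redis-py client, and all names are hypothetical:

```python
import json
import redis  # assumes the redis-py client library is installed

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

def get_product(product_id, db_lookup):
    """Cache-aside: serve hot data from memory, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: in-memory read
    record = db_lookup(product_id)             # cache miss: query the database
    cache.setex(key, 300, json.dumps(record))  # keep it hot for 5 minutes
    return record
```

Only misses reach the database, which is exactly how the cache reduces load on the primary data store.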
Topic: Cloud Concepts
Which TWO statements best describe how migration readiness assessments help organizations plan their AWS Cloud adoption? (Select TWO.)
Options:
A. They eliminate the need for further planning workshops or stakeholder alignment once completed.
B. They identify gaps in people, processes, and governance that should be addressed before large-scale migration.
C. They help organizations prioritize migration activities by showing which areas are more and less prepared for the cloud.
D. They provide a detailed, service-by-service architecture design for every future workload.
E. They are only useful after most workloads have already been moved to the AWS Cloud.
Correct answers: B and C
Explanation: A migration readiness assessment is a structured way to evaluate how prepared an organization is to adopt the AWS Cloud across areas such as people, processes, technology, and governance. It surfaces gaps in skills, operating models, and decision-making that can slow or block migration. The results are then used to build a prioritized action plan and migration roadmap so the organization can address weaknesses before moving many workloads.
Topic: Cloud Technology and Services
Which TWO statements about automated deployment processes in the AWS Cloud are true? (Select TWO.)
Options:
A. Automated deployments cannot be reused across multiple AWS accounts, so each environment must always be configured separately by hand.
B. Storing deployment templates in version control systems makes it easier to track, review, and roll back infrastructure changes.
C. Automated deployment tools such as AWS CloudFormation help ensure that environments are created in a consistent way every time.
D. Manual configuration through the AWS Management Console is usually better than automation for large, frequently updated environments because it reduces setup time.
E. Automation is only useful for application code; infrastructure components such as networking and storage must always be configured manually.
Correct answers: B and C
Explanation: Automated deployment processes in AWS, often implemented with Infrastructure as Code tools like AWS CloudFormation or AWS Cloud Development Kit (AWS CDK), create resources from templates instead of clicking through the console. Because the same template is used each time, environments are more consistent and easier to reproduce, improving maintainability and reducing configuration mistakes.
Keeping these templates in a version control system such as Git adds governance benefits: every change is recorded, can be reviewed, and can be rolled back if needed. Manual, one-time setup through the AWS Management Console is still useful for experimentation or very small environments, but it does not scale well when you need frequent, consistent updates across multiple environments or accounts.
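A minimal sketch of the idea, with hypothetical stack names: the same version-controlled template produces identical environments, and any change to it is reviewable in Git history.

```python
import boto3

# A deliberately tiny template; in practice this text lives in version
# control so every change can be reviewed and rolled back.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# The same template creates consistent dev, test, and prod stacks.
for env in ("dev", "test", "prod"):
    cfn.create_stack(StackName=f"web-app-{env}", TemplateBody=TEMPLATE)
```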
Topic: Security and Compliance
A company has just created a new AWS account. Several team members are discussing how to manage and use the AWS account root user. Which of the following actions related to the root user should the team AVOID? (Select TWO.)
Options:
A. Enable multi-factor authentication (MFA) on the root user and store the MFA device securely offline.
B. Sign in as the root user only when performing rare account-level tasks that explicitly require it, such as changing the account name or closing the account.
C. Create long-term access keys for the root user and embed them in application configuration files for programmatic access to AWS resources.
D. Share the root user password and MFA device among several senior administrators so that anyone can sign in with full account access when needed.
E. Use IAM users or roles with administrative permissions for daily management tasks instead of signing in as the root user.
Correct answers: C and D
Explanation: The AWS account root user has unrestricted access to all resources and billing settings in the account, so it must be protected carefully and used only when absolutely necessary. Best practices include enabling MFA on the root user, never creating long-term root access keys, not sharing root credentials, and using IAM users or roles for all routine tasks. Misusing the root user, such as by embedding its keys in code or sharing its password, dramatically increases the impact of any compromise and violates least-privilege principles.
Topic: Billing, Pricing, and Support
A finance analyst is reviewing a simplified monthly billing summary for the company’s Amazon S3 storage usage in a single AWS Region.
Exhibit: Simplified storage and cost summary
| Total storage (GB) | Total monthly cost |
|---|---|
| 100 | USD 2.00 |
| 500 | USD 8.00 |
| 1,000 | USD 12.00 |
Based only on the information in this exhibit, which statement is the most accurate conclusion about how this AWS service is priced?
Options:
A. The service becomes more expensive per GB as more data is stored, so the per-GB cost increases with higher usage.
B. The service charges a flat monthly fee that does not depend on how much data is stored.
C. The service uses tiered pricing, where the per-GB cost decreases as total storage usage increases.
D. The service charges a fixed price per GB, so the per-GB cost stays the same at all usage levels.
Best answer: C
Explanation: The exhibit shows three data points where total stored data and total monthly cost both increase. However, the increases are not proportional. At 100 GB, the cost is USD 2; if pricing were strictly linear, 1,000 GB would cost USD 20. Instead, 1,000 GB costs USD 12, which means the effective price per GB goes down as usage goes up.
This pattern is characteristic of tiered pricing, where the first portion of usage is billed at one rate and higher usage tiers are billed at lower rates. Many AWS services, including Amazon S3 storage and data transfer in some cases, use this model so that higher-volume customers pay a lower per-unit price as their consumption increases.
Some of the incorrect interpretations either assume a flat fee that never changes with usage or a fixed per-GB rate that scales linearly. The numbers in the table directly contradict those assumptions and instead support a tiered, decreasing per-unit cost model.
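One tier schedule consistent with the exhibit (the rates are reverse-engineered for illustration, not published AWS prices) shows how the effective per-GB cost falls as usage grows:

```python
# First 100 GB at USD 0.020/GB, next 400 GB at USD 0.015/GB,
# everything above 500 GB at USD 0.008/GB (illustrative rates).
TIERS = [(100, 0.020), (400, 0.015), (float("inf"), 0.008)]

def monthly_cost(gb):
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

for gb in (100, 500, 1000):
    print(f"{gb:>5} GB -> USD {monthly_cost(gb):.2f}"
          f"  (effective USD {monthly_cost(gb) / gb:.4f}/GB)")
```

Running this reproduces the exhibit exactly (USD 2.00, 8.00, and 12.00) while the effective per-GB rate drops from 0.0200 to 0.0160 to 0.0120.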
Topic: Security and Compliance
A company’s compliance team needs the latest AWS SOC audit reports and wants the ability to review and accept standard data protection agreements with AWS on demand. They consult an internal wiki page that summarizes several AWS tools:
| AWS tool | Primary use |
|---|---|
| AWS Artifact | On-demand access to AWS compliance reports and certain security agreements |
| AWS CloudTrail | Record account API activity for security analysis and auditing |
| AWS Security Hub | Central view of security findings from multiple AWS services |
| AWS IAM | Manage users, groups, roles, and permissions for AWS resources |
Based on this information, which AWS service should the team use to download compliance reports and review standard agreements with AWS?
Options:
A. AWS Identity and Access Management (IAM)
B. AWS Security Hub
C. AWS Artifact
D. AWS CloudTrail
Best answer: C
Explanation: The exhibit’s table explicitly states that AWS Artifact provides on-demand access to AWS compliance reports and certain security agreements. This matches the team’s need to download audit reports such as SOC reports and to review and accept standard data protection agreements with AWS. Other listed services are focused on logging, security findings, or access control within the customer’s AWS environment, not on AWS’s own compliance documentation.
Topic: Security and Compliance
A company is expanding its use of AWS and wants to build long-term, in-house security expertise across its security, operations, and development teams. Leaders want structured guidance on AWS security best practices plus hands-on labs led by AWS experts. Which option is the most appropriate way to meet this need?
Options:
A. Open AWS Support cases whenever the team has security questions so AWS engineers can advise them.
B. Purchase third-party firewall appliances from AWS Marketplace and deploy them in every VPC to improve security.
C. Rely on AWS Trusted Advisor security checks to automatically identify and fix all security issues without additional training.
D. Send staff to AWS re:Inforce and AWS security-focused workshops to learn best practices and complete hands-on labs with AWS experts.
Best answer: D
Explanation: In this scenario, the company wants to develop broad, long-term security skills across multiple teams, with both best-practice guidance and hands-on experience directly from AWS experts. AWS re:Inforce and AWS security-focused workshops are specifically designed for this purpose, offering sessions, labs, and networking focused on AWS security. Tools such as AWS Support, Trusted Advisor, or third-party firewalls can improve security posture, but they do not inherently build the internal knowledge and skills the question emphasizes.
Topic: Cloud Concepts
Which TWO statements describe how AWS Well-Architected reviews help organizations continuously improve their workloads over time? (Select TWO.)
Options:
A. They help teams create and track improvement plans so that recommended changes can be implemented gradually, not just during a single review.
B. They act as a one-time certification that a workload is fully compliant with all AWS best practices and never needs to be reviewed again.
C. They permanently transfer operational responsibility for the workload from the customer to AWS once the review is complete.
D. They provide a repeatable process to review workloads against the AWS Well-Architected Framework and identify areas for ongoing improvement.
E. Their primary purpose is to generate a detailed monthly AWS bill so finance teams can reconcile cloud spending.
Correct answers: A and D
Explanation: AWS Well-Architected reviews use the AWS Well-Architected Framework pillars to regularly assess how a workload is designed and operated. They are meant to be repeated over the life of a workload so teams can identify risks, create improvement plans, and track progress over time. This makes the process a continuous improvement tool rather than a one-time audit or certification. The reviews do not change who operates the workload or directly produce billing artifacts; they guide better design and operation decisions, including around cost.
Topic: Security and Compliance
A healthcare startup is preparing for an upcoming compliance audit and begins using AWS Artifact. The security lead creates several plans for how the team will use the service.
Which TWO actions are INCORRECT uses of AWS Artifact in this scenario? (Select TWO.)
Options:
A. Download AWS compliance reports (such as SOC and PCI reports) to share with their external auditor.
B. Use AWS Artifact as a central dashboard for real-time alerts and notifications about security incidents in their AWS accounts.
C. Check which AWS Regions and services have specific certifications before deploying regulated workloads.
D. Upload their own internal security policies and audit evidence into AWS Artifact so AWS can review and approve their compliance posture.
E. Accept compliance-related legal agreements with AWS, such as a Business Associate Addendum (BAA), directly through the service.
Correct answers: B and D
Explanation: AWS Artifact is a self-service portal that provides on-demand access to AWS compliance reports (such as SOC, PCI, and ISO reports) and certain compliance-related agreements (such as BAAs). It is designed to help customers understand AWS’s own compliance posture and obtain documentation for audits, not to manage real-time security operations or store customer evidence.
Using AWS Artifact for tasks like downloading AWS audit reports, reviewing certifications, and accepting legal agreements aligns directly with its purpose. By contrast, treating it as a real-time security dashboard or a repository for customer documents misunderstands the service and fails to meet security and governance needs.
Topic: Billing, Pricing, and Support
Which statement best describes how AWS Savings Plans differ from purchasing specific Amazon EC2 Reserved Instances (RIs)?
Options:
A. Savings Plans only apply to Amazon S3 and Amazon RDS usage, while RIs only apply to Amazon EC2 usage.
B. Savings Plans apply a discount to any eligible compute usage that matches the plan (such as different EC2 instance families or AWS Fargate), while specific RIs mainly discount the exact instance attributes you reserved.
C. Savings Plans lock you to a single EC2 instance type in one Availability Zone, while RIs automatically apply to any compute service in the account.
D. Savings Plans remove the need for any spending commitment, while RIs always require a 3-year commitment with all costs paid upfront.
Best answer: B
Explanation: AWS Savings Plans provide discounts in exchange for committing to a consistent amount of compute usage, measured as a dollar-per-hour spend, and can automatically apply across eligible compute usage such as multiple EC2 instance families or AWS Fargate. Specific Reserved Instances generally apply their discount to a particular EC2 configuration (such as instance family, size, operating system, and Region), making them less flexible than Savings Plans.
Topic: Cloud Technology and Services
A company wants to reduce read latency for frequently accessed data from its existing relational database by storing the most commonly used data in an in-memory layer. Which AWS service is most appropriate for this caching use case?
Options:
A. Amazon DynamoDB
B. Amazon ElastiCache
C. Amazon RDS
D. Amazon S3
Best answer: B
Explanation: The scenario describes a need for an in-memory caching layer in front of an existing relational database to reduce read latency. Amazon ElastiCache is designed exactly for this purpose: it is a fully managed, in-memory data store and cache service that can hold frequently accessed data for very fast reads.
Other database and storage services like Amazon RDS, Amazon DynamoDB, and Amazon S3 serve different primary roles (relational database, NoSQL database, and object storage) rather than acting as a dedicated cache layer to offload reads from another database.
Topic: Cloud Concepts
A company runs a single Amazon EC2 instance 24/7 for a stable, predictable production workload. The instance currently uses On-Demand pricing. The company wants to reduce compute costs without changing the application architecture or accepting interruption risk. Which option is the most appropriate approach?
Options:
A. Upgrade the instance to a larger instance type so fewer instances are required.
B. Purchase a 1-year Compute Savings Plan that matches the instance’s usage.
C. Switch the instance to Spot pricing to minimize hourly cost.
D. Migrate the workload to a Dedicated Host to share hardware with multiple instances.
Best answer: B
Explanation: For a stable, always-on workload, committing to a certain level of compute usage is a classic AWS cost-optimization design principle. A 1-year Compute Savings Plan provides a discount compared to On-Demand pricing in exchange for a usage commitment, without requiring changes to the application or risking interruptions. This directly addresses the goal of reducing cost while keeping the architecture and reliability the same.
Spot pricing is designed for flexible, interruptible workloads such as batch jobs, not for critical 24/7 production services. Dedicated Hosts are intended for compliance or specific licensing scenarios and are usually more expensive than standard instances. Upsizing the instance type generally increases cost and does not leverage AWS pricing models or rightsizing for optimization.
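A back-of-the-envelope comparison makes the trade-off concrete; both hourly rates below are hypothetical, not published AWS prices:

```python
# One always-on instance for a year: 8,760 hours of usage.
HOURS_PER_YEAR = 8760
on_demand_rate = 0.096     # USD/hour, On-Demand (hypothetical)
savings_plan_rate = 0.067  # USD/hour under a 1-year commitment (hypothetical)

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
committed_cost = savings_plan_rate * HOURS_PER_YEAR
print(f"On-Demand:    USD {on_demand_cost:,.0f} per year")
print(f"Savings Plan: USD {committed_cost:,.0f} per year")
print(f"Saved:        USD {on_demand_cost - committed_cost:,.0f} "
      f"({1 - committed_cost / on_demand_cost:.0%})")
```

Because the workload runs every hour of the year, the commitment is never wasted, which is what makes it safe to purchase.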
Topic: Cloud Technology and Services
Which TWO statements correctly describe how using multiple AWS Availability Zones can improve application availability and fault tolerance? (Select TWO.)
Options:
A. Using multiple Availability Zones lets you design active-passive or active-active architectures that improve fault tolerance.
B. Using multiple Availability Zones guarantees 100% uptime for any application without additional configuration.
C. Using multiple Availability Zones automatically moves all existing resources into a different Region during an outage.
D. Deploying application servers in more than one Availability Zone helps the application keep running even if one data center fails.
E. Using multiple Availability Zones means you no longer need to create backups of your data.
Correct answers: A and D
Explanation: Availability Zones are physically separate data centers within an AWS Region, connected with low-latency networking. By deploying redundant resources across multiple Availability Zones, you avoid a single data center becoming a single point of failure. If one AZ experiences an issue, resources in the other AZs can continue to serve requests, improving both availability and fault tolerance.
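As an illustrative sketch (the subnet and launch template names are hypothetical), spreading an Auto Scaling group across subnets in different Availability Zones is one common way to remove the single-data-center failure point:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Each subnet sits in a different Availability Zone, so the group keeps
# capacity spread across AZs and one data-center failure does not take
# down the whole fleet.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    MaxSize=6,
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
)
```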
Topic: Cloud Technology and Services
A financial analytics startup is moving several workloads to Amazon EC2. One workload runs large CPU-bound risk simulations, and another keeps very large datasets in memory for fast interactive analysis. They want to choose EC2 instance categories that match each workload’s main performance need instead of using a one-size-fits-all option. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Run the CPU-bound risk simulations on compute-optimized Amazon EC2 instances.
B. Run the in-memory analytics workload on memory-optimized Amazon EC2 instances.
C. Run both workloads on general purpose Amazon EC2 instances to keep the architecture simple.
D. Use memory-optimized Amazon EC2 instances for the CPU-bound simulations and general purpose instances for the in-memory analytics.
E. Use storage-optimized Amazon EC2 instances for both workloads because they work with large datasets on disk.
Correct answers: A and B
Explanation: This question tests how to match EC2 instance categories to workload characteristics at a high level. Compute-optimized instances suit CPU-intensive jobs, while memory-optimized instances are best when you need large amounts of RAM and fast access to data in memory. General purpose and storage-optimized instances focus on different balances and are not ideal for these specific workloads.
Topic: Security and Compliance
A company is training staff on the AWS shared responsibility model. They want an example of a security task where AWS and the customer each handle different layers of the same activity (a shared control). Which example best illustrates this?
Options:
A. The customer encrypts application data before storing it locally on on-premises file servers.
B. The customer defines security groups and network ACL rules for their VPC.
C. AWS designs, builds, and operates the global network, including all routing and switching.
D. AWS patches the physical servers and networking devices, while the customer patches the guest operating systems and applications on Amazon EC2 instances.
Best answer: D
Explanation: In the AWS shared responsibility model, some controls are fully AWS’s responsibility (such as physical security of data centers), some are fully the customer’s responsibility (such as securing applications and data), and some are shared. Shared controls are areas like patch management, where AWS secures the underlying infrastructure while the customer secures what they run on top.
Patch management is a classic example: AWS patches and maintains the physical hosts, networking devices, and hypervisors, while customers patch their guest operating systems, middleware, and applications. This split illustrates how both parties contribute to the same overall security function, but at different layers.
Topic: Cloud Concepts
A company runs a development environment on several Amazon EC2 instances that are only used during business hours. The team currently leaves the instances running 24/7 and wants to reduce costs without changing instance sizes or performance when developers are working. They plan to use an AWS tool to automatically stop the instances at night and restart them in the morning. Which cloud economics concept best explains this cost-related benefit?
Options:
A. Automation
B. Variable costs instead of upfront capital costs
C. Rightsizing
D. Economies of scale
Best answer: A
Explanation: The scenario describes EC2 instances that are not needed 24/7, and the company wants to automatically stop them when they are idle to save money. This is a classic example of using automation in the cloud to reduce wasted usage and therefore lower costs, without changing instance size or performance during working hours. AWS offers several ways to automate these schedules, including AWS Systems Manager Automation or Amazon EventBridge with AWS Lambda, illustrating the cloud economics benefit of automation.
Other concepts like rightsizing, variable costs, and economies of scale are important in cloud economics, but they do not directly match the described action of scheduling stop/start based on time of day.
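For example, a small AWS Lambda function triggered by an Amazon EventBridge schedule could stop tagged instances each evening; the tag key and value here are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical tag that marks dev instances following the schedule.
FILTERS = [{"Name": "tag:Schedule", "Values": ["business-hours"]}]

def handler(event, context):
    """Invoked by a nightly EventBridge rule; a matching morning rule
    would call start_instances on the same tagged fleet instead."""
    reservations = ec2.describe_instances(Filters=FILTERS)["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return {"stopped": ids}
```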
Topic: Cloud Technology and Services
A company runs a simple web application in a single subnet. The web server and its database are both in this subnet, and the subnet has a route to an internet gateway. The company wants to improve security by keeping the website publicly accessible while preventing direct internet access to the database. Which network design change best meets these goals?
Options:
A. Create a new public subnet for the database and move the database into it, while keeping the web server in the existing subnet.
B. Create separate security groups for the web server and database, but keep both in the existing subnet that has a route to the internet gateway.
C. Create private subnets for the database (and any internal application servers) with no direct route to the internet gateway, keep the web server in a public subnet, and use security groups to allow only the web server to reach the database.
D. Create a second VPC for the database and connect the VPCs with VPC peering, leaving both subnets with routes to their own internet gateways so they can be managed remotely.
Best answer: C
Explanation: Subnet segmentation in Amazon VPC allows you to separate public-facing resources from internal resources at the network layer. A common pattern is to place web servers or load balancers in public subnets that have routes to an internet gateway, and databases or application servers in private subnets that have no direct internet route.
By combining this subnet design with tightly scoped security groups, you ensure that only specific traffic—for example, from the web tier to the database tier—is allowed. This reduces the attack surface while still allowing users to access the public website over the internet.
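One way to express the "only the web tier may reach the database" rule (the group IDs and port are hypothetical) is an ingress rule that references the web tier's security group rather than an IP range:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group IDs for the web and database tiers.
WEB_SG = "sg-0123456789abcdef0"
DB_SG = "sg-0fedcba9876543210"

# Allow MySQL traffic to the database tier only when it originates from
# members of the web tier's security group, not from the internet.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```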
Topic: Cloud Technology and Services
A company has users around the world who stream training videos stored in an Amazon S3 bucket. They want to cache these video files at AWS edge locations to reduce latency for viewers.
Based on the following table of AWS network and edge-related services, which service best meets this requirement?
| Service | Primary capability |
|---|---|
| Amazon CloudFront | Caches and delivers web content from edge locations (CDN) |
| AWS Global Accelerator | Improves network path to application endpoints (no content cache) |
| AWS Direct Connect | Provides dedicated private network connection to AWS |
| Amazon Route 53 | Provides DNS-based domain name resolution and traffic routing |
Options:
A. AWS Direct Connect
B. Amazon Route 53
C. AWS Global Accelerator
D. Amazon CloudFront
Best answer: D
Explanation: The requirement is to cache video files at AWS edge locations so that global users experience lower latency when streaming content. In the table, the row for Amazon CloudFront lists its primary capability as “Caches and delivers web content from edge locations (CDN)”, which directly aligns with this need. A content delivery network (CDN) like CloudFront is specifically designed to store copies of content closer to users at edge locations.
In contrast, the row for AWS Global Accelerator explicitly notes that it “Improves network path to application endpoints (no content cache)”, which means it optimizes routing but does not store content at the edge. AWS Direct Connect and Amazon Route 53 also do not provide caching; they address connectivity and DNS resolution, respectively. Therefore, Amazon CloudFront is the best choice based on the information in the exhibit.
Topic: Cloud Technology and Services
Which TWO of the following statements about Amazon S3 Glacier storage classes are NOT correct? (Select TWO.)
Options:
A. Different Amazon S3 Glacier storage classes let you choose between lower storage cost or faster retrieval, depending on your archival needs.
B. Objects stored in Amazon S3 Glacier storage classes cannot be retrieved once they have been archived.
C. Amazon S3 Lifecycle policies can automatically move objects from S3 Standard into an Amazon S3 Glacier storage class based on age.
D. Amazon S3 Glacier storage classes are intended for primary, frequently updated databases that require constant low-latency access.
E. Amazon S3 Glacier storage classes are primarily designed for long-term archival of infrequently accessed data at a lower storage cost.
Correct answers: B and D
Explanation: Amazon S3 Glacier storage classes are designed for long-term archival of data that is accessed infrequently, offering very low storage cost in exchange for longer and potentially more expensive retrieval. Within the S3 Glacier family, you can choose between storage classes that favor the lowest cost or faster retrieval times. Archived objects remain retrievable; you can restore them when needed, often coordinated through lifecycle policies that move data from S3 Standard to Glacier classes over time. S3 Glacier is not meant to host primary, frequently updated application data that requires constant low-latency access.
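A lifecycle rule of the kind described might look like this sketch; the bucket name, prefix, and day counts are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under logs/ to S3 Glacier Flexible Retrieval after 90
# days, then expire them after roughly seven years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```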
Topic: Billing, Pricing, and Support
A company runs a customer-facing web application on Amazon EC2. One instance runs 24/7 and is expected to be needed for at least the next year. Additional EC2 instances are started only during unpredictable traffic spikes a few days each month, and interruptions to any instances are unacceptable. The company wants to minimize compute costs while keeping flexibility for the spiky usage. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Purchase a 1-year Compute Savings Plan sized to cover one always-on EC2 instance and run any additional spike instances as On-Demand.
B. Run all EC2 instances, including the always-on instance, using only On-Demand pricing.
C. Purchase a 1-year Compute Savings Plan sized for the maximum expected number of EC2 instances during traffic spikes.
D. Purchase a 1-year Standard Reserved Instance for one EC2 instance and run any additional spike instances as On-Demand.
E. Use Spot Instances for the always-on EC2 instance to reduce the hourly price.
Correct answers: A and D
Explanation: For a workload with a known, steady baseline and unpredictable additional capacity, it is cost-effective to commit to the baseline usage to get a lower rate while keeping burst capacity on flexible On-Demand pricing. Either a 1-year Compute Savings Plan or a 1-year Standard Reserved Instance can cover the single always-on instance at a discount. Sizing a commitment for peak traffic would mean paying for capacity that sits idle most of the month, running everything On-Demand forgoes the baseline discount, and Spot Instances can be reclaimed by AWS, which the no-interruption requirement rules out.
Topic: Security and Compliance
Which TWO of the following statements about AWS security and compliance services are INCORRECT? (Select TWO.)
Options:
A. AWS Shield Advanced is primarily used to manage user identities and permissions across multiple AWS accounts.
B. AWS CloudTrail records API activity in your AWS account and can be used for auditing access.
C. AWS Artifact automatically scans your AWS resources for security misconfigurations and sends security alerts.
D. AWS Security Hub can aggregate findings from multiple AWS security tools to give you a centralized view of security alerts.
E. AWS Key Management Service (AWS KMS) lets you create and manage encryption keys that can protect data stored in other AWS services.
F. Amazon GuardDuty helps detect suspicious activity in your AWS accounts and workloads by analyzing logs and network data.
Correct answers: A and C
Explanation: The question focuses on recognizing the purpose of key AWS security and compliance services and identifying statements that misdescribe what those services do. At the Cloud Practitioner level, it is important to know that AWS provides separate services for logging, key management, threat detection, compliance documentation, and DDoS protection.
AWS Artifact is a documentation and compliance-report portal, not a scanning or alerting tool. AWS Shield Advanced is a managed DDoS protection service, not an identity and access management service. In contrast, CloudTrail, AWS KMS, Amazon GuardDuty, and AWS Security Hub each match the described use cases accurately.
Topic: Cloud Technology and Services
Which AWS service provides object storage that is commonly used for backups, static website content, and data lakes?
Options:
A. Amazon S3
B. Amazon RDS
C. Amazon EFS
D. Amazon EBS
Best answer: A
Explanation: Amazon S3 is the AWS object storage service designed to store and retrieve virtually any amount of data from anywhere on the web. It is commonly used for use cases such as backups, serving static website files, and building data lakes because it is highly durable, scalable, and integrates with many other AWS analytics services. Other storage services like Amazon EBS, Amazon RDS, and Amazon EFS are optimized for different storage types (block, relational database, and file), not for general-purpose object storage.
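As a minimal illustration of object storage (the bucket and key names are hypothetical), you store and retrieve whole objects addressed by key, rather than editing blocks or files in place:

```python
import boto3

s3 = boto3.client("s3")

# Store a nightly backup as an object, addressed by bucket + key.
s3.upload_file("backup.tar.gz", "example-backups", "2024/05/backup.tar.gz")

# Retrieve the whole object later using the same key.
obj = s3.get_object(Bucket="example-backups", Key="2024/05/backup.tar.gz")
data = obj["Body"].read()
```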
Topic: Security and Compliance
A financial services company is preparing for a security audit. The auditors require official, AWS-published guidance on how to configure encryption and identity and access management (IAM) according to AWS best practices, and the security team also wants documentation they can reference in future audits. Which option is the MOST appropriate primary resource for these needs?
Options:
A. Use the AWS Security Documentation and the AWS Well-Architected Framework Security Pillar to review AWS-published best practices and configuration guidance.
B. Open a technical support case with AWS Support and ask them to produce and approve the company’s full security architecture design for the audit.
C. Search the AWS Knowledge Center for troubleshooting articles related to encryption and IAM and follow the example steps provided.
D. Post detailed questions on AWS re:Post to gather advice and examples from the community and AWS experts.
Best answer: A
Explanation: For audit and compliance purposes, companies need authoritative, AWS-published documentation that describes how to secure their workloads according to AWS best practices. The AWS Security Documentation, along with the Security Pillar of the AWS Well-Architected Framework, provides official guidance on topics such as encryption, IAM, logging, and monitoring. These resources are explicitly published and maintained by AWS, making them suitable references during security audits. Community Q&A and troubleshooting articles are helpful supplements but are not a substitute for formal AWS best-practice documentation.
In this scenario, the key requirements are official AWS guidance, coverage of best practices for security configuration, and documentation that can be shown to auditors. Only the official security documentation and Well-Architected security guidance fully meet all these needs at once.
Use the AWS CLF-C02 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AWS CLF-C02 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.