AWS SOA-C03: Networking and Content Delivery

Try 10 focused AWS SOA-C03 questions on Networking and Content Delivery, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try AWS SOA-C03 on Web | View full AWS SOA-C03 practice page

Topic snapshot

  • Exam route: AWS SOA-C03
  • Topic area: Networking and Content Delivery
  • Blueprint weight: 18%
  • Page purpose: focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Networking and Content Delivery for AWS SOA-C03. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 18% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Networking and Content Delivery

A company serves a public web application through an Amazon CloudFront distribution with an Application Load Balancer (ALB) origin. The operations team must add protection against common web exploits and abusive traffic, and they want malicious requests blocked as close to the edge as possible to reduce origin load and unnecessary data transfer. They also need an audit trail of blocked requests.

Which actions should the engineer AVOID when using AWS WAF with CloudFront? (Select THREE.)

Options:

  • A. Add a top-priority rule that allows all requests so other rules only log

  • B. Send all WAF logs to CloudWatch Logs with no retention policy at high request volume

  • C. Create a WAFv2 web ACL with REGIONAL scope and associate it to CloudFront

  • D. Use AWS Managed Rules and a rate-based rule to block common exploits and request floods

  • E. Associate a WAFv2 web ACL (CLOUDFRONT scope) with the CloudFront distribution

  • F. Deliver WAF logs to Amazon S3 through Kinesis Data Firehose and apply an S3 lifecycle policy

Correct answers: A, B and C

Explanation: To protect an edge-delivered application with CloudFront and AWS WAF, the web ACL must be in CLOUDFRONT scope and attached to the distribution so filtering occurs at the edge. Rule ordering matters because terminating actions stop evaluation. Logging should be enabled for auditability, but it needs cost controls (destination choice and retention/lifecycle).

The core idea is to attach AWS WAF to the CloudFront distribution (using a WAFv2 web ACL with CLOUDFRONT scope) so requests can be inspected and blocked at edge locations before they reach the origin. Within a web ACL, rules are evaluated by priority; a terminating Allow or Block stops further evaluation, so an allow-all rule placed first will bypass your protections.

For auditability, enable WAF logging, but treat logs as high-volume data. Prefer durable, cost-managed storage (for example, Kinesis Data Firehose to S3 with lifecycle policies) and only retain what you need for operations and investigations. The key takeaway is: correct scope + correct association + correct rule ordering, with logging that is intentionally cost-managed.

  • Correct association: attaching a CLOUDFRONT-scope web ACL to the distribution enforces protections at the edge.
  • Effective edge protections: managed rule groups and rate-based rules are standard, high-level controls for exploits and abusive traffic.
  • Auditable, cost-aware logging: delivering logs to S3 via Firehose with lifecycle policies keeps an audit trail without uncontrolled log-retention cost.
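The rule-ordering point can be made concrete with a small sketch. This is an illustrative Python model of web ACL evaluation semantics (priority order, terminating Allow/Block actions), not the AWS WAF API; the rule definitions and the request payload are made up for the example.

```python
# Toy model of AWS WAF web ACL evaluation: rules run in priority order,
# and a terminating Allow or Block stops further evaluation.

def evaluate(rules, request):
    """Return (action, rule_name) for a request, mimicking web ACL semantics."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["matches"](request):
            if rule["action"] in ("ALLOW", "BLOCK"):  # terminating actions
                return rule["action"], rule["name"]
            # a COUNT action would be non-terminating: evaluation continues
    return "ALLOW", "default-action"

block_sqli = {"name": "block-sqli", "priority": 10, "action": "BLOCK",
              "matches": lambda req: "' OR 1=1" in req.get("query", "")}
allow_all = {"name": "allow-all", "priority": 0, "action": "ALLOW",
             "matches": lambda req: True}

malicious = {"query": "id=1' OR 1=1"}
# With only the Block rule, the exploit is blocked...
print(evaluate([block_sqli], malicious))             # ('BLOCK', 'block-sqli')
# ...but a top-priority allow-all rule terminates evaluation first.
print(evaluate([allow_all, block_sqli], malicious))  # ('ALLOW', 'allow-all')
```

This is exactly why option A is something to avoid: the allow-all rule at priority 0 means the Block rule is never reached.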

Question 2

Topic: Networking and Content Delivery

You are implementing hub-and-spoke routing across multiple VPCs by using an AWS Transit Gateway (TGW).

Select THREE statements that are true.

Options:

  • A. Spoke VPC route tables must include routes that target the TGW

  • B. A VPC attachment is associated with exactly one TGW route table

  • C. A single TGW attachment can be associated with multiple TGW route tables

  • D. All spoke VPCs must be in the same Availability Zone as the TGW

  • E. Creating a VPC attachment automatically adds routes to the spoke VPC

  • F. A TGW can be shared across AWS accounts by using AWS RAM

Correct answers: A, B and F

Explanation: Transit Gateway hub-and-spoke routing depends on attaching VPCs to the TGW, controlling paths with TGW route table associations/propagation, and adding the required routes in each VPC route table that point to the TGW. For multi-account environments, a TGW can be shared so other accounts can create their own VPC attachments.

At a high level, TGW acts as the hub and each VPC connects through a TGW VPC attachment. Routing between spokes is controlled by TGW route tables: you associate an attachment to a TGW route table to determine which table it uses for lookups, and you can enable propagation so routes from an attachment are learned by a TGW route table. TGW does not change your VPC route tables for you, so each spoke VPC still needs explicit routes that send traffic for other networks to the TGW.

Key operational steps are:

  • Create VPC attachments to the TGW (including cross-account via AWS RAM when needed)
  • Associate attachments to TGW route table(s) and configure route propagation
  • Add/verify VPC route table entries with TGW as the target

The common failure mode is assuming attachments alone create end-to-end routes without updating VPC and TGW route tables.

  • NO (auto routes): TGW attachments do not automatically insert routes into VPC route tables; you must add them.
  • NO (AZ requirement): TGW is a Regional construct; spokes do not need to be in any specific AZ relative to the TGW.
  • NO (multiple associations): an attachment can be associated with only one TGW route table at a time (propagation is configured separately).
  • OK (shared hub): sharing a TGW with AWS RAM is a standard way to connect multiple accounts into a hub-and-spoke.
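The two TGW facts that trip people up can be sketched as a toy data model. This is not boto3; the identifiers are placeholders, and the model only encodes the answer's two constraints: one TGW route table association per attachment, and no automatic edits to the spoke VPC's route table.

```python
# Toy model of two Transit Gateway facts: an attachment is associated
# with exactly one TGW route table, and creating an attachment never
# edits the spoke VPC's own route table.

class TransitGateway:
    def __init__(self):
        self.associations = {}  # attachment_id -> TGW route table id

    def associate(self, attachment_id, tgw_rt_id):
        if attachment_id in self.associations:
            raise ValueError("attachment already associated with a TGW route table")
        self.associations[attachment_id] = tgw_rt_id

tgw = TransitGateway()
vpc_route_table = []  # spoke VPC routes are managed separately by you

tgw.associate("tgw-attach-spoke-a", "tgw-rtb-hub")
# A second association for the same attachment is rejected.
try:
    tgw.associate("tgw-attach-spoke-a", "tgw-rtb-other")
except ValueError as err:
    print(err)

# The spoke VPC route table is still empty until you add the route yourself.
vpc_route_table.append({"dest": "10.0.0.0/8", "target": "tgw-attach-spoke-a"})
print(vpc_route_table)
```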

Question 3

Topic: Networking and Content Delivery

A CloudFront distribution is serving a static website. Users report that some pages show old content after a deployment, and there are intermittent 5xx errors. You are asked which statements about using CloudFront logs, CloudWatch metrics, and cache invalidations are INCORRECT. (Select TWO.)

Options:

  • A. Standard access logs are insufficient; only real-time logs can troubleshoot delivery issues.

  • B. The CloudWatch CacheHitRate metric can help confirm increased origin fetches.

  • C. A cache invalidation removes objects from CloudFront edge caches, not from the origin.

  • D. x-edge-detailed-result-type in logs can help distinguish viewer vs origin-related failures.

  • E. CloudFront standard logs include x-edge-result-type to help identify cache behavior.

  • F. You must always create an invalidation after content changes, even with new versioned filenames.

Correct answers: A and F

Explanation: CloudFront standard access logs and CloudWatch metrics are often sufficient to diagnose cache misses, stale objects, and error patterns. Cache invalidations clear objects only from edge caches and are mainly needed when the URL path stays the same. When you deploy new versioned object keys, CloudFront treats them as new content and does not require invalidating old paths.

Troubleshooting CloudFront content delivery usually starts by correlating CloudWatch metrics with log evidence. Metrics such as CacheHitRate, 5xxErrorRate, and Requests show whether the distribution is serving from cache and whether errors are increasing. Standard CloudFront access logs then help you pinpoint what happened per request using fields like sc-status, x-edge-result-type, and x-edge-detailed-result-type (for example, whether an error was generated while contacting the origin or returned to the viewer).

Cache invalidations remove cached objects from CloudFront edge locations only; they do not delete or “purge” content from the origin. Invalidations are most useful when you update an object but keep the same URL path. If deployments use versioned filenames (new object keys), CloudFront will fetch the new objects on first request without needing to invalidate the previous paths.

Real-time logs can reduce troubleshooting latency, but they are not a prerequisite for diagnosing these issues.

  • Real-time logs required: incorrect, because standard logs still provide request-by-request status and result-type fields useful for troubleshooting.
  • Always invalidate with versioned URLs: incorrect, because changing the object key avoids a stale cache for the new URL.
  • CacheHitRate usage: accurate for validating increased origin fetches when cache effectiveness drops.
  • Invalidations and log fields: accurate; invalidations clear edge caches only, and x-edge-* fields help classify cache and error behavior.
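The log-triage approach described above can be sketched as a small classifier. The records below are fabricated for illustration, and the classification buckets are this page's own shorthand, not official CloudFront terminology; only the field names (sc-status, x-edge-result-type) come from the standard log format.

```python
# Sketch: bucket CloudFront standard-log records using sc-status plus
# x-edge-result-type, as the explanation describes.
from collections import Counter

def classify(record):
    status = int(record["sc-status"])
    result = record["x-edge-result-type"]
    if result == "Hit":
        return "served-from-cache"
    if status >= 500:
        return "origin-or-edge-error"
    return "origin-fetch"  # e.g. Miss: CloudFront had to contact the origin

records = [  # fabricated sample records
    {"sc-status": "200", "x-edge-result-type": "Hit"},
    {"sc-status": "200", "x-edge-result-type": "Miss"},
    {"sc-status": "502", "x-edge-result-type": "Error"},
]
print(Counter(classify(r) for r in records))  # each category counted once
```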

Question 4

Topic: Networking and Content Delivery

A company uses Route 53 with a public hosted zone for example.com. The record api.example.com is an alias to an internet-facing ALB. The same hostname is also used by Amazon EC2 instances in a VPC to reach the API.

Operations notices higher NAT gateway data processing charges and increased latency because VPC instances resolve api.example.com to the public ALB and egress through the NAT gateway. The API also has an internal ALB that serves the same API inside the VPC.

Which change MOST effectively optimizes cost and performance without changing the hostname used by clients?

Options:

  • A. Add a Route 53 Resolver inbound endpoint for the VPC

  • B. Use api.internal.example.com in a new private hosted zone

  • C. Create private hosted zone example.com and alias api to internal ALB

  • D. Point the public api record to the internal ALB

Best answer: C

Explanation: Use split-horizon DNS by adding a private hosted zone for the same domain and creating an internal api record that points to the internal ALB. VPC resolvers will prefer the private hosted zone, keeping traffic on private networking and avoiding NAT gateway egress charges and extra latency. External clients will continue to use the public hosted zone record.

The core optimization is split-horizon DNS with Route 53 hosted zones. When a VPC is associated with a private hosted zone, DNS queries from that VPC for names in the zone are answered from the private hosted zone instead of the public hosted zone.

Implement this by:

  • Creating a private hosted zone for example.com.
  • Associating it with the VPC (and any other VPCs that need internal resolution).
  • Creating api.example.com as an alias to the internal ALB.

This keeps internal API calls on private IP paths (reducing latency and NAT gateway data processing) while preserving the existing public DNS name for internet clients via the public hosted zone.

  • New internal hostname increases operational effort because clients must be updated to use a different FQDN.
  • Public record to internal ALB breaks internet access because an internal ALB is not reachable from the public internet.
  • Resolver inbound endpoint is for on-premises-to-VPC DNS resolution and does not change how EC2 instances in the VPC resolve api.example.com.
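The split-horizon behavior can be captured in a few lines. This is a minimal model under stated assumptions: the zone contents and VPC ID are placeholders, and the function only encodes the resolution preference, not real Route 53 mechanics.

```python
# Minimal model of split-horizon DNS: queries from a VPC associated with
# the private hosted zone get the private answer; everyone else gets the
# public hosted zone answer for the same name.

public_zone  = {"api.example.com": "public-alb-dns"}
private_zone = {"api.example.com": "internal-alb-dns"}
vpcs_associated_with_private_zone = {"vpc-123"}  # placeholder VPC ID

def resolve(name, source_vpc=None):
    if source_vpc in vpcs_associated_with_private_zone and name in private_zone:
        return private_zone[name]
    return public_zone[name]

print(resolve("api.example.com", source_vpc="vpc-123"))  # internal-alb-dns
print(resolve("api.example.com"))                        # public-alb-dns
```

Same hostname, two answers: internal clients stay on private networking while internet clients keep resolving to the public ALB.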

Question 5

Topic: Networking and Content Delivery

Select TWO statements that are true about configuring an AWS Site-to-Site VPN for hybrid connectivity to a VPC.

Options:

  • A. Each VPN connection provides two IPsec tunnels for redundancy.

  • B. A Site-to-Site VPN requires AWS Direct Connect to function.

  • C. You must create an S3 VPC endpoint before using the VPN.

  • D. You must attach a security group directly to the VGW.

  • E. VPN traffic is encrypted only inside AWS, not over the internet.

  • F. A VPN connection needs a Customer Gateway and a VGW/TGW.

Correct answers: A and F

Explanation: AWS Site-to-Site VPN is built by defining the on-premises endpoint (Customer Gateway) and connecting it to an AWS VPN endpoint (VGW on a VPC or a Transit Gateway). The VPN connection is created with two redundant IPsec tunnels, and routing is then established using either BGP (dynamic) or static routes, plus appropriate VPC route table entries.

At a high level, AWS Site-to-Site VPN provides encrypted IPsec connectivity between your on-premises network and AWS. You first represent your on-premises device as a Customer Gateway (typically using a public static IP), and you terminate the VPN on AWS using either a Virtual Private Gateway attached to a VPC or a Transit Gateway. Creating the VPN connection provisions two tunnels for redundancy; you then configure your on-premises device with the tunnel parameters and ensure routes are propagated/added so traffic can flow to the VPC CIDRs. Encryption applies end-to-end over the public internet between the on-premises device and AWS tunnel endpoints. Direct Connect and VPC endpoints are optional, separate connectivity features.

  • OK: a Customer Gateway plus a VGW/TGW is the required pairing for a VPN connection.
  • OK: two IPsec tunnels are provided per VPN connection for resiliency.
  • NO: Direct Connect is a separate service and is not required for Site-to-Site VPN.
  • NO: security groups attach to ENIs/instances (not a VGW), IPsec traffic is encrypted over the internet, and VPC endpoints are unrelated prerequisites.
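The two correct statements can be restated as a tiny model. This is not an AWS API call; the gateway dictionaries are placeholders, and the function only encodes the facts from this answer: a connection pairs a Customer Gateway with a VGW or TGW, and provisioning yields two tunnels.

```python
# Toy model: a Site-to-Site VPN connection pairs a Customer Gateway with
# a VGW or TGW and provisions two IPsec tunnels for redundancy.

def create_vpn_connection(customer_gateway, aws_gateway):
    if not customer_gateway or aws_gateway.get("type") not in ("vgw", "tgw"):
        raise ValueError("need a Customer Gateway and a VGW or TGW")
    return {
        "customer_gateway": customer_gateway,
        "aws_gateway": aws_gateway,
        "tunnels": ["tunnel-1", "tunnel-2"],  # two tunnels per connection
    }

conn = create_vpn_connection({"public_ip": "203.0.113.10"}, {"type": "vgw"})
print(len(conn["tunnels"]))  # 2
```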

Question 6

Topic: Networking and Content Delivery

A company serves a static website from an Amazon S3 bucket through an Amazon CloudFront distribution. The security team requires CloudFront origin access control (OAC) and requires that users cannot access objects directly from S3 URLs.

Which TWO actions should an engineer AVOID when implementing this requirement? (Select TWO.)

Options:

  • A. Create an OAC and update the CloudFront origin settings to use that OAC

  • B. Enable S3 Block Public Access for the bucket and remove any public ACLs

  • C. Configure the CloudFront origin as the S3 static website endpoint and attach an OAC

  • D. Keep the S3 bucket publicly readable and rely on CloudFront signed URLs for access control

  • E. Add an S3 bucket policy that allows s3:GetObject only to the CloudFront service principal with a condition on the distribution ARN

  • F. Use the S3 REST API endpoint as the CloudFront origin instead of the S3 website endpoint

Correct answers: C and D

Explanation: To restrict direct access to an S3 origin with CloudFront OAC, the S3 bucket must not be public and must explicitly trust only the CloudFront distribution (via a bucket policy). Using an S3 website endpoint or leaving the bucket public defeats the control because users can bypass CloudFront and fetch objects directly from S3.

The core control with CloudFront OAC is: CloudFront signs origin requests with SigV4, and the S3 bucket policy is written to allow object access only when the request comes from the specific CloudFront distribution. That requires using the S3 REST endpoint as the origin and keeping the bucket non-public.

To meet the requirement:

  • Configure the distribution to use an OAC for the S3 origin.
  • Use an S3 bucket policy that grants s3:GetObject to cloudfront.amazonaws.com and scopes access with AWS:SourceArn to the distribution.
  • Keep S3 Block Public Access enabled and remove public ACLs/policies.

Approaches that require or preserve public S3 access (such as website endpoints or public-read buckets) allow bypassing CloudFront, so they violate the “no direct S3 URL access” requirement.

  • S3 website endpoint is incompatible with OAC and commonly requires public access, enabling direct origin access.
  • Public bucket with signed URLs still allows users to skip CloudFront and fetch objects from S3 directly.
  • Scoped bucket policy for CloudFront is the intended mechanism to restrict S3 access to a specific distribution.
  • Block Public Access + REST origin supports a private bucket model where CloudFront is the only path to objects.
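The scoped bucket policy can be sketched as data. The shape below follows the pattern the explanation describes (CloudFront service principal plus an AWS:SourceArn condition); the bucket name, account ID, and distribution ID are placeholders, not real resources.

```python
# Sketch of an OAC-style S3 bucket policy: allow s3:GetObject only to the
# CloudFront service principal, scoped to one distribution by SourceArn.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
        "Condition": {"StringEquals": {
            # placeholder account and distribution IDs
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }},
    }],
}
print(json.dumps(policy, indent=2))
```

Because no other principal is allowed and the bucket stays non-public, a direct S3 URL request fails while CloudFront's signed origin requests succeed.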

Question 7

Topic: Networking and Content Delivery

A company has a public Route 53 hosted zone for example.com that points api.example.com to a public ALB. The company also created a private Route 53 hosted zone for example.com with api.example.com pointing to an internal NLB and associated the private hosted zone with a VPC.

Instances in that VPC resolve api.example.com to the internal NLB, while internet clients resolve it to the public ALB. Which Route 53 behavior most directly explains this outcome?

Options:

  • A. CloudTrail records DNS query responses and forces split-horizon resolution

  • B. Private hosted zone records associated with a VPC take precedence for queries from that VPC

  • C. Public hosted zone records always override private hosted zone records

  • D. VPC Flow Logs show DNS query names and cause Route 53 to return private answers

Best answer: B

Explanation: This is expected split-horizon DNS behavior with Route 53. When a private hosted zone is associated with a VPC and has the same domain name as a public hosted zone, Route 53 Resolver returns the private hosted zone record for queries that originate from that VPC. Clients outside the VPC use public DNS and receive the public hosted zone answer.

Route 53 supports split-horizon DNS by letting you create both a public and a private hosted zone for the same domain name. For DNS queries that originate within a VPC, the VPC’s Route 53 Resolver uses the private hosted zone that is associated with that VPC (if one exists) and returns those record sets. Queries coming from the internet do not use your VPC resolver, so they resolve the domain using the public hosted zone instead.

Key takeaway: if internal hosts “unexpectedly” resolve a name to an internal target, check for an associated private hosted zone with an overlapping name and record set.

  • Public overrides private is incorrect because VPC-associated private zones are preferred for VPC queries.
  • VPC Flow Logs don’t capture DNS query names (they are IP flow metadata), so they can’t explain the returned DNS answer.
  • CloudTrail and DNS answers is incorrect because CloudTrail logs API calls, not recursive DNS query/response traffic.

Question 8

Topic: Networking and Content Delivery

A CloudFront distribution is protected by an AWS WAF web ACL. To support an investigation, a CloudOps engineer enables AWS WAF logging to S3 through Kinesis Data Firehose and sets a logging filter to record only BLOCK actions.

During a 30-minute incident window, CloudWatch metrics for the web ACL show BlockedRequests = 24,000. The average AWS WAF log record size for this application is 1,200 bytes.

Using \(1\ \text{MiB} = 1{,}048{,}576\ \text{bytes}\), approximately how much log data (in MiB) will be delivered to S3 for that 30-minute window? Round to the nearest MiB.

Options:

  • A. 29 MiB

  • B. 549 MiB

  • C. 275 MiB

  • D. 27 MiB

Best answer: D

Explanation: Because logging is filtered to BLOCK actions only, the number of log records is driven by the BlockedRequests CloudWatch metric. Multiply blocked requests by the average log record size to get total bytes, then convert bytes to MiB using the provided conversion and round.

The core operational step is using AWS WAF CloudWatch metrics to quantify what will appear in AWS WAF logs when a logging filter is applied. With a filter that logs only BLOCK, the expected log volume is based on BlockedRequests, not total requests.

Calculation (30-minute window):

  • Log bytes = blocked requests × bytes per record = \(24{,}000 \times 1{,}200 = 28{,}800{,}000\) bytes
  • Convert to MiB: \(28{,}800{,}000 / 1{,}048{,}576 \approx 27.47\) MiB
  • Round to the nearest MiB: 27 MiB

Key takeaway: apply the logging filter scope first, then do the unit conversion.
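The arithmetic above as a quick check, using the MiB conversion the question provides:

```python
# BLOCK-only logging filter: log volume is driven by BlockedRequests.
blocked_requests = 24_000
bytes_per_record = 1_200
MIB = 1_048_576  # bytes per MiB, as given in the question

total_bytes = blocked_requests * bytes_per_record  # 28,800,000 bytes
mib = total_bytes / MIB                            # ~27.47 MiB
print(round(mib))  # 27
```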

  • Wrong unit base: uses 1,000,000 bytes per MB instead of the provided MiB conversion.
  • Extra zero error: misplaces a decimal during multiplication or conversion.
  • Logs all requests: incorrectly assumes allowed requests are also logged despite the BLOCK-only filter.

Question 9

Topic: Networking and Content Delivery

A company runs a public HTTPS API behind an ALB in us-east-1 and another ALB in eu-west-1. Route 53 latency-based DNS with health checks is used for routing, but during a Regional outage some users continue to reach the unhealthy Region because of DNS caching. Several customers also require a small set of static IPs to allowlist. The API already exposes GET /health.

Which change is the best optimization to improve performance and availability without breaking these constraints?

Options:

  • A. Place Amazon CloudFront in front of the API and configure an origin group for failover

  • B. Use Route 53 weighted routing with multiple health checks per ALB

  • C. Create an AWS Global Accelerator with both ALBs as endpoints and configure HTTPS health checks to /health

  • D. Reduce the Route 53 record TTL to 10 seconds to speed up Regional failover

Best answer: C

Explanation: AWS Global Accelerator is designed to front multi-Region endpoints with two static anycast IP addresses and continuous health checking. By adding both ALBs as endpoints and pointing health checks at the existing /health path, traffic is automatically steered away from an unhealthy Region without relying on DNS TTL behavior. The main tradeoff is the additional cost for Global Accelerator.

The core issue is DNS-based failover: even with Route 53 health checks, clients and resolvers can continue using cached answers, so some users keep hitting the failed Region. AWS Global Accelerator solves this by advertising static anycast IPs from the AWS global network and routing each client to the closest healthy endpoint.

At a high level:

  • Create an accelerator and listener (port 443).
  • Create two endpoint groups (one per Region).
  • Add each Regional ALB as an endpoint.
  • Configure endpoint group health checks (HTTPS, port 443, path /health).

This improves availability (faster failover than DNS), performance (network edge entry and optimized routing), and operations (static IP allowlisting). The tradeoff is paying for Global Accelerator and its data processing/transfer charges.

  • Lower DNS TTL can reduce, but not eliminate, resolver/client caching and does not provide static IPs.
  • CloudFront origin failover can help for HTTP(S), but it doesn’t provide the same static anycast IP behavior and is not a direct replacement for GA’s endpoint-group health-based steering.
  • Weighted routing is still DNS-based, so it retains caching-driven failover delays and ALB IP allowlisting challenges.
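The steering behavior that distinguishes Global Accelerator from DNS failover can be sketched as a simplified model. The endpoint names and latency figures are invented, and real GA traffic dials and weights are omitted; the point is only that routing decisions follow live health state rather than cached DNS answers.

```python
# Simplified model of health-based steering: send each client to a healthy
# endpoint, preferring the lower-latency Region; when that Region's /health
# check fails, traffic shifts immediately, with no DNS TTL to wait out.

endpoints = {  # invented example data
    "us-east-1": {"alb": "alb-use1", "healthy": True, "latency_ms": 20},
    "eu-west-1": {"alb": "alb-euw1", "healthy": True, "latency_ms": 90},
}

def route(endpoints):
    healthy = [e for e in endpoints.values() if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["latency_ms"])["alb"]

print(route(endpoints))                    # alb-use1 (closest healthy Region)
endpoints["us-east-1"]["healthy"] = False  # Regional outage detected
print(route(endpoints))                    # alb-euw1 (immediate failover)
```

The static anycast IPs sit in front of this steering logic, which is what makes customer allowlisting practical.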

Question 10

Topic: Networking and Content Delivery

When troubleshooting Amazon CloudFront content delivery problems (stale objects and elevated error responses), which TWO statements are true about using CloudFront logs, CloudWatch metrics, and cache invalidations? (Select TWO.)

Options:

  • A. A spike in CloudWatch 4xxErrorRate with no increase in 5xxErrorRate often points to viewer-side problems rather than origin failures.

  • B. Standard access logs can use sc-status with x-edge-result-type to help determine whether CloudFront served from cache or had to fetch from the origin.

  • C. The CloudWatch CacheHitRate metric cannot be used to assess cache effectiveness for a distribution.

  • D. An invalidation for /index.html deletes the object from the origin so CloudFront can fetch a fresh copy.

  • E. After you submit an invalidation, CloudFront continues to serve the old cached object until its TTL expires.

  • F. CloudFront standard access logs are delivered to CloudWatch Logs by default and are immediately queryable with Logs Insights.

Correct answers: A and B

Explanation: CloudFront standard access logs provide per-request evidence (status codes and cache result types) that helps isolate cache hits, misses, and origin fetch errors. CloudWatch metrics like 4xxErrorRate and 5xxErrorRate help quickly distinguish client-side error surges from origin/service failures. Cache invalidations remove objects from edge caches so the next request retrieves a fresh version from the origin.

To troubleshoot CloudFront, combine request-level data from standard access logs with distribution-level trends from CloudWatch metrics. In access logs, the HTTP status (sc-status) plus edge result fields (such as x-edge-result-type) help you see whether CloudFront responded from cache (hit) or had to contact the origin (miss) and whether the attempt failed. In CloudWatch, error-rate metrics help narrow the problem source: elevated 4xxErrorRate commonly indicates viewer/request issues, while increases in 5xxErrorRate more often correlate with origin or CloudFront-to-origin failures. When content is stale, a cache invalidation removes specified objects from CloudFront edge caches so subsequent requests fetch the latest object from the origin (it does not modify the origin).

  • OK (logs): Using sc-status with edge result fields helps separate cache hits/misses and origin fetch errors.
  • OK (metrics): 4xxErrorRate spikes without 5xxErrorRate increases usually indicate viewer-side/request problems.
  • NO (invalidation effect): Invalidations purge edge caches only; they never delete data from the origin.
  • NO (invalidation timing): Invalidations are meant to remove cached copies so the next request fetches a fresh object, not to wait for TTL expiry.

Continue with full practice

Use the AWS SOA-C03 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try AWS SOA-C03 on Web | View AWS SOA-C03 Practice Test

Free review resource

Read the AWS SOA-C03 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026