CompTIA Cloud+ CV0-004: Cloud Architecture

Try 10 focused CompTIA Cloud+ CV0-004 questions on Cloud Architecture, with explanations, then continue with IT Mastery.


Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try CompTIA Cloud+ CV0-004 on the web: View the full CompTIA Cloud+ CV0-004 practice page.

Topic snapshot

Field | Detail
Exam route | CompTIA Cloud+ CV0-004
Topic area | Cloud Architecture
Blueprint weight | 23%
Page purpose | Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Cloud Architecture for CompTIA Cloud+ CV0-004. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

Pass | What to do | What to record
First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer.
Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor.
Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: Cloud Architecture carries 23% of the practice outline. A focused topic score can overstate readiness once you start recognizing question patterns rather than applying the underlying skill, so treat this page as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Cloud Architecture

A media company is moving an existing stateless containerized service to the cloud. The service creates thumbnails when video files are uploaded, takes 2–5 minutes per file, and does not analyze content or learn from historical data. The company wants elastic processing and durable file storage with minimal changes to the application. Which implementation best meets the requirement?

Options:

  • A. A managed machine learning training pipeline for uploaded videos

  • B. An IoT message broker with device shadows for each video

  • C. A GPU cluster for continuous model inference on uploads

  • D. Object storage events, a queue, and autoscaled container workers

Best answer: D

Explanation: The task needs ordinary cloud storage and elastic compute, not an evolving technology such as AI, machine learning, or IoT. Object storage, a queue, and autoscaled container workers match the upload-driven workflow while preserving the existing stateless container design.

Evolving technologies are valuable when the workload actually requires their capabilities. In this scenario, the service performs deterministic thumbnail creation, already runs in a container, and only needs durable storage plus elastic processing. Object storage can hold uploaded and generated files, storage events can enqueue work, and autoscaled container workers can process jobs as volume changes. This design is cloud-native without introducing unnecessary AI/ML or IoT components.

AI or machine learning services would fit content classification, prediction, or model training requirements. IoT services would fit device telemetry and command/control use cases. The key takeaway is to choose the simplest cloud capability that satisfies the requirement rather than forcing an emerging technology into a standard compute-and-storage task.

  • ML pipeline mismatch fails because the task does not require training, prediction, or content understanding.
  • IoT broker mismatch fails because uploaded media files are not device telemetry or managed IoT endpoints.
  • GPU inference overbuild fails because thumbnail generation does not require model inference or specialized AI acceleration.
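To make the correct design concrete, here is a minimal sketch of the worker half of option D: a stateless container that polls a queue fed by object storage events. It assumes AWS-style services (S3 event notifications delivered to an SQS queue) purely for illustration; the queue URL, bucket layout, and the make_thumbnail helper are hypothetical placeholders for the existing application logic.

```python
# Minimal sketch of a queue-driven container worker (assumes AWS S3 -> SQS
# event notifications; queue URL and helper function are hypothetical).
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnail-jobs"  # hypothetical

def make_thumbnail(local_path: str) -> str:
    """Placeholder for the existing stateless thumbnail logic."""
    raise NotImplementedError

def run_worker() -> None:
    while True:
        # Long-poll the queue; autoscaling adds or removes identical workers.
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            event = json.loads(msg["Body"])
            record = event["Records"][0]            # S3 event notification shape
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            s3.download_file(bucket, key, "/tmp/input")
            thumb = make_thumbnail("/tmp/input")
            s3.upload_file(thumb, bucket, f"thumbnails/{key}.jpg")
            # Delete only after success so a failed worker's message is redelivered.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```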

Question 2

Topic: Cloud Architecture

A media company needs to process user-uploaded images by creating thumbnails immediately after each file is written to object storage. Upload volume is unpredictable, the processing task is stateless and completes in under 30 seconds, and the team wants to avoid managing operating systems, container hosts, or idle capacity. Which compute option should the cloud administrator implement?

Options:

  • A. VMs in an autoscaling group

  • B. A managed container cluster

  • C. Serverless functions triggered by storage events

  • D. A dedicated VM with scheduled jobs

Best answer: C

Explanation: Serverless compute is the best fit for short, stateless, event-driven workloads with unpredictable demand. It scales automatically, can scale to zero when idle, and removes most host and operating system management from the administrator.

The core decision is matching compute control and operational overhead to the workload. This thumbnail task is event-driven, brief, and stateless, with spiky demand, so a function triggered by object storage events provides the needed scale without provisioning persistent compute capacity. VMs provide the most OS-level control but require patching, capacity planning, and instance management. Containers are useful when packaging consistency and longer-running services are important, but they still introduce cluster or platform management compared with functions. The key takeaway is to choose serverless when the workload can be decomposed into short, stateless executions and the priority is operational simplicity with elastic scale.

  • VM autoscaling adds OS management and capacity decisions that the team wants to avoid.
  • Container cluster supports portable services but is more operationally involved than needed for a short event task.
  • Scheduled VM creates idle capacity and does not respond immediately to unpredictable upload events.
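For comparison with Question 1's container workers, a serverless version of the same task collapses to a single event handler. The sketch below assumes an AWS Lambda-style invocation with an S3 event payload and the Pillow imaging library; the bucket layout and thumbnail size are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a storage-triggered serverless function (AWS Lambda
# handler signature assumed for concreteness; details are illustrative).
import boto3
from PIL import Image  # Pillow, packaged with the function

s3 = boto3.client("s3")

def handler(event, context):
    # One invocation per uploaded object; the platform scales instances
    # with event volume and charges nothing while idle.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    src = f"/tmp/{key.rsplit('/', 1)[-1]}"
    s3.download_file(bucket, key, src)

    img = Image.open(src)
    img.thumbnail((128, 128))                # in-place resize, preserves aspect ratio
    thumb_path = "/tmp/thumb.jpg"
    img.convert("RGB").save(thumb_path, "JPEG")

    s3.upload_file(thumb_path, bucket, f"thumbnails/{key}.jpg")
    return {"thumbnail": f"thumbnails/{key}.jpg"}
```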

Question 3

Topic: Cloud Architecture

A cloud engineer is reviewing an incident for a customer portal. The relational database tier runs on IaaS VMs, and a minor engine patch caused one replica to fail to rejoin the cluster. Backups also required manual verification after the maintenance window. The next release must keep relational database features while reducing OS patching, backup scheduling, and HA maintenance. What is the best next fix?

Options:

  • A. Increase the VM size for each database node

  • B. Move the database into containers on the same VMs

  • C. Add more self-managed read replicas

  • D. Migrate to a provider-managed relational database service

Best answer: D

Explanation: The recurring problem is operational overhead from running a relational database on IaaS VMs. A provider-managed relational database service is the best fit because the provider handles much of the patching, backup automation, and availability management.

Provider-managed database deployments are appropriate when requirements emphasize reducing administrative effort while keeping database functionality. In this scenario, the team is spending time on VM-level and database-cluster operations: patch coordination, replica recovery, backup scheduling, and HA maintenance. A managed relational database service shifts much of that undifferentiated operational work to the cloud provider while still supporting relational data models and SQL-style application needs.

Scaling the VMs or adding replicas may improve capacity or read performance, but it does not remove the team’s responsibility for operating the database platform. Containerizing the database can add portability, but it usually increases operational complexity unless paired with a managed database platform.

  • Bigger VMs may help performance, but they do not reduce patching, backup, or HA administration.
  • Database containers do not automatically solve operational ownership and can complicate persistent data management.
  • More replicas can improve availability or read scaling, but they still require self-managed cluster operations.

Question 4

Topic: Cloud Architecture

A company wants to reduce compute costs without changing application architecture. The web tier has a committed baseline of eight VM instances running 24/7 for the next 12 months. A nightly image-processing job is stateless, checkpoints progress, and can tolerate interruption and retry. There are no software licensing or hardware isolation requirements. Which TWO billing choices best fit these workload requirements? Select TWO.

Options:

  • A. Use pay-as-you-go for the web tier baseline

  • B. Use reserved resources for the web tier baseline

  • C. Use dedicated hosts for the web tier baseline

  • D. Use spot instances for the image-processing job

  • E. Use reserved resources for the image-processing job

  • F. Use spot instances for the web tier baseline

Correct answers: B and D

Explanation: Reserved resources are best for steady, predictable capacity that will run for a committed term. Spot instances are best for fault-tolerant, interruptible workloads such as retryable batch jobs. The stem explicitly removes the main reason to choose dedicated hosts: licensing or physical isolation.

Cloud compute billing models should match workload predictability and interruption tolerance. A 24/7 baseline for the next 12 months is a strong fit for reserved resources because the organization can commit to capacity in exchange for a lower effective rate. The nightly image-processing job is stateless, checkpoints progress, and can retry, so it can use spot capacity even if instances are reclaimed. Dedicated hosts are mainly justified by licensing, compliance, or physical host isolation needs, which the scenario says do not apply. Pay-as-you-go remains useful for uncertain or bursty demand, but it is usually not the lowest-cost choice for a known baseline.

  • Dedicated host mismatch fails because the scenario states there is no licensing or hardware isolation requirement.
  • Pay-as-you-go baseline provides flexibility but misses the cost benefit of committing to predictable 24/7 capacity.
  • Reserved batch capacity overcommits to a workload that can use cheaper interruptible capacity.
  • Spot web baseline risks interruption for a required always-on baseline tier.
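The cost logic behind B and D can be checked with simple arithmetic. The hourly rates below are hypothetical placeholders (real prices vary by provider, region, and instance type), but they show why committing the steady baseline and running the interruptible job on spot capacity minimizes spend.

```python
# Back-of-the-envelope cost comparison using hypothetical hourly rates.
HOURS_PER_YEAR = 24 * 365

on_demand = 0.10   # $/hour, pay-as-you-go (hypothetical)
reserved = 0.06    # $/hour effective with a 1-year commitment (hypothetical)
spot = 0.03        # $/hour interruptible capacity (hypothetical)

web_instances = 8                 # committed 24/7 baseline for 12 months
batch_hours = 4 * 365             # nightly job, ~4 hours per night (assumed)

web_on_demand = web_instances * HOURS_PER_YEAR * on_demand
web_reserved = web_instances * HOURS_PER_YEAR * reserved
batch_on_demand = batch_hours * on_demand
batch_spot = batch_hours * spot

print(f"Web tier:  pay-as-you-go ${web_on_demand:,.0f} vs reserved ${web_reserved:,.0f}")
print(f"Batch job: pay-as-you-go ${batch_on_demand:,.0f} vs spot ${batch_spot:,.0f}")
```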

Question 5

Topic: Cloud Architecture

A document-processing workload runs several components: file ingestion, OCR, metadata extraction, fraud checks, and archival storage. The components currently call each other directly over HTTP. During peak uploads, logs show intermittent 429 Too Many Requests responses from the fraud-check API, duplicate OCR jobs, and records archived before all metadata fields are populated. Which next fix best addresses the root cause?

Options:

  • A. Implement a workflow state machine with retries and dependency control

  • B. Increase the CPU size of the OCR compute nodes

  • C. Move archival storage to a colder storage tier

  • D. Place a CDN in front of the ingestion endpoint

Best answer: A

Explanation: The symptoms point to weak coordination between dependent components, not a single compute or storage bottleneck. A workflow state machine can enforce step order, track state, and apply controlled retries/backoff when an API throttles.

Workflow orchestration is appropriate when a workload requires coordinated execution across multiple services or components. In this case, direct HTTP chaining leaves each component responsible for retry behavior and state awareness, which can cause duplicate work and out-of-order completion when one dependency throttles. A state machine or workflow engine can model the required sequence, wait for dependencies, retry failed steps with backoff, and prevent archival until all required metadata is complete.

The key takeaway is that multi-step dependency failures are best addressed with orchestration, not by tuning an unrelated resource in one component.

  • Compute scaling may help OCR throughput, but it does not prevent duplicate jobs or enforce workflow order.
  • CDN placement can improve content delivery or edge caching, but it does not coordinate backend processing steps.
  • Colder storage may reduce archival cost, but it does not address premature archival or API throttling behavior.
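A minimal sketch of the orchestration idea in option A is shown below: steps run in a fixed order, each step's output is recorded as workflow state, and throttled calls are retried with exponential backoff. The step names are hypothetical; in production this role is usually filled by a managed workflow or state machine service rather than hand-rolled code.

```python
# Minimal sketch of a workflow runner that enforces step order and retries
# throttled calls with exponential backoff. Step functions are hypothetical
# stand-ins for the real services.
import time

class Throttled(Exception):
    """Raised when a downstream API returns 429 Too Many Requests."""

def run_step(step, state, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return step(state)
        except Throttled:
            time.sleep(base_delay * 2 ** attempt)   # exponential backoff
    raise RuntimeError(f"{step.__name__} failed after {max_attempts} attempts")

def run_workflow(document, steps):
    # Each step runs exactly once, in order; archival cannot start until
    # every upstream step has completed and recorded its output.
    state = {"document": document}
    for step in steps:
        state[step.__name__] = run_step(step, state)
    return state

# Hypothetical steps would be passed in dependency order, e.g.:
# run_workflow(doc, [ingest, ocr, extract_metadata, fraud_check, archive])
```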

Question 6

Topic: Cloud Architecture

A team migrates a self-managed database from an IaaS VM to a cloud-provided managed relational database service. During the next deployment, the pipeline fails at the step that SSHs to the database host to install OS security updates. The database endpoint is reachable on the database port, and the service console only exposes maintenance window and engine version settings. What is the best next fix?

Options:

  • A. Open TCP port 22 to the pipeline runner

  • B. Remove the SSH patch step and use managed maintenance controls

  • C. Deploy a bastion host in the database subnet

  • D. Replace the database DNS name with a static IP

Best answer: B

Explanation: The failure is expected after moving from self-managed IaaS to a managed database service. Managed services reduce customer operational responsibility by removing direct host access and shifting underlying OS maintenance to the provider.

Cloud-provided managed services reduce the customer’s operational burden for supported workloads by abstracting infrastructure tasks such as host provisioning, OS patching, and platform maintenance. In this scenario, the deployment pipeline still assumes the customer owns the database host, but the managed service exposes only service-level controls such as maintenance windows and engine versions. The appropriate fix is to update the deployment process to use those managed service controls instead of attempting SSH-based OS administration.

Opening SSH access or adding a bastion host treats the managed database like an IaaS VM, which conflicts with the service model. The key takeaway is that automation must match the responsibility boundary of the selected cloud service model.

  • SSH access fails because managed database nodes typically do not expose host login for customer patching.
  • Bastion access is not useful when the provider does not allow administrative access to the underlying hosts.
  • Static IP use does not address the failed OS patch step and may reduce resilience if the service endpoint changes.
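As one provider-specific illustration of option B, the snippet below replaces the SSH patch step with the service-level controls a managed database actually exposes. It uses AWS RDS via boto3 as an example only; the instance identifier and window are hypothetical, and other clouds offer equivalent settings.

```python
# Example of managed maintenance controls: set the maintenance window and
# let the provider apply minor engine patches, instead of SSHing to a host.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="portal-db",               # hypothetical instance name
    PreferredMaintenanceWindow="sun:04:00-sun:05:00",
    AutoMinorVersionUpgrade=True,                    # provider applies minor patches
    ApplyImmediately=False,                          # defer changes to the window
)
```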

Question 7

Topic: Cloud Architecture

A company is designing storage for a private cloud virtualization cluster. Database VMs need low-latency block access with redundant paths. Application VMs need shared VM datastores over the existing Ethernet network, and moderate latency is acceptable. Both VM classes must support live migration and restart on another host after a host failure.

Which TWO storage approaches should the architect choose?

Options:

  • A. Host affinity rules for database VMs

  • B. Object storage buckets for VM boot disks

  • C. Local NVMe disks in each hypervisor host

  • D. NAS datastores mounted by all hypervisor hosts

  • E. SAN LUNs presented to all hypervisor hosts

  • F. Local SSDs with nightly VM image backups

Correct answers: D and E

Explanation: The decisive requirement is shared storage access by multiple hypervisor hosts. SAN fits low-latency block storage needs, while NAS fits shared file-based VM datastore needs over Ethernet. Local storage may be fast, but it does not inherently support live migration or restart on another host.

Virtualization clusters commonly require shared storage so another host can access the VM’s virtual disks during live migration or after a host failure. A SAN presents block storage, usually with redundant paths, and is well suited for latency-sensitive database VMs. A NAS presents file-based shares over an IP network and can be a practical shared datastore for application VMs when moderate latency is acceptable. Local disks can provide high performance, but they tie VM data to one host unless an additional distributed storage layer is introduced, which is not stated in the requirement.

The key takeaway is to match SAN to shared block access and NAS to shared file access when VM mobility and host-level availability are required.

  • Local disk speed is tempting for databases, but local NVMe does not meet the multi-host shared access requirement.
  • Backup-only protection helps recovery, but nightly backups do not enable live migration or immediate restart from another host.
  • Object storage is not a typical hypervisor datastore for VM boot disks requiring block or file semantics.
  • Host affinity controls placement, but it does not provide shared storage access or storage availability.
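The shared-access requirement can be made concrete with a toy model: a VM can restart on another host only if that host can reach the VM's datastore. The host and datastore names below are hypothetical.

```python
# Toy model of why shared datastores (SAN LUNs or NAS mounts presented to
# every host) allow restart on another host, while local disks do not.
hosts = {
    "host1": {"san-lun-01", "nas-share-01", "host1-local"},
    "host2": {"san-lun-01", "nas-share-01", "host2-local"},
    "host3": {"san-lun-01", "nas-share-01", "host3-local"},
}

vms = {
    "db-vm":  "san-lun-01",    # low-latency shared block storage
    "app-vm": "nas-share-01",  # shared file datastore over Ethernet
    "bad-vm": "host1-local",   # local disk: pinned to one host
}

def restart_candidates(vm, failed_host):
    datastore = vms[vm]
    return [h for h, stores in hosts.items()
            if h != failed_host and datastore in stores]

for vm in vms:
    print(vm, "->", restart_candidates(vm, failed_host="host1"))
# db-vm and app-vm can restart on host2/host3; bad-vm has no candidates.
```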

Question 8

Topic: Cloud Architecture

A company is migrating an internal application into a new private VPC. The new VPC is peered with a shared-services VPC, and the shared-services VPC has the existing site-to-site VPN to the data center. Instances in the new VPC can reach systems in the shared-services VPC, but cannot reach on-premises hosts. Security groups and network ACLs allow the required ports, and application logs show no errors.

Which change should the cloud engineer make to restore reachability?

Options:

  • A. Add the on-premises CIDR to the peering route

  • B. Move the application instances to a public subnet

  • C. Use a transit gateway for the VPCs and VPN routes

  • D. Change the application load balancer health check path

Best answer: C

Explanation: This is a cloud network reachability problem, not an application behavior issue. VPC peering is generally non-transitive, so the new VPC cannot use the shared-services VPC as a router to the site-to-site VPN. A transit gateway is the appropriate hub for routing between multiple VPCs and hybrid connectivity.

The key concept is transitive routing in cloud networking. Peering commonly allows direct routing only between the two peered networks; it does not let one peered VPC forward traffic through another VPC to a VPN or third network. Because the new VPC can reach the shared-services VPC but not on-premises hosts, and security/application evidence is clean, the missing capability is a valid routed path across VPCs and the VPN. A transit gateway provides a central routing domain where VPC attachments and the VPN attachment can exchange routes according to configured route tables.

Changing subnets or application settings would not create the required network path.

  • Public subnet does not solve private hybrid routing and may weaken the intended internal-only design.
  • Adding the on-premises CIDR to the peering route fails because peering is non-transitive; the path would still require routing through another VPC to reach the VPN.
  • Health check change targets application availability, but the symptoms point to network reachability.
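A toy route-table model makes the non-transitivity visible: peering installs routes only between the two peered VPCs, so the new VPC never learns a path to the on-premises CIDR, while a transit gateway gives every attachment a route through the hub. All CIDRs are hypothetical, and the lookup is simplified to exact matches rather than longest-prefix matching.

```python
# Toy model of cloud route lookups. new-vpc is 10.0.0.0/16, the
# shared-services VPC is 10.1.0.0/16, on-premises is 192.168.0.0/16.
peering_routes = {
    "new-vpc":    {"10.1.0.0/16": "peer:shared-svc"},     # only the direct peer
    "shared-svc": {"10.0.0.0/16": "peer:new-vpc",
                   "192.168.0.0/16": "vpn:datacenter"},   # not usable by peers
}

tgw_routes = {
    "new-vpc":    {"10.1.0.0/16": "tgw", "192.168.0.0/16": "tgw"},
    "shared-svc": {"10.0.0.0/16": "tgw", "192.168.0.0/16": "tgw"},
    "tgw":        {"10.0.0.0/16": "att:new-vpc",
                   "10.1.0.0/16": "att:shared-svc",
                   "192.168.0.0/16": "att:vpn"},
}

def next_hop(tables, src, dest_cidr):
    return tables.get(src, {}).get(dest_cidr, "no route")

print(next_hop(peering_routes, "new-vpc", "192.168.0.0/16"))  # no route
print(next_hop(tgw_routes, "new-vpc", "192.168.0.0/16"))      # "tgw": hub forwards via att:vpn
```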

Question 9

Topic: Cloud Architecture

A company is moving a customer portal to the public cloud. The application requires a relational SQL database, but the operations team wants to minimize database server patching, backup administration, and high-availability configuration. Which database deployment best fits these requirements?

Options:

  • A. Containerized database on a worker cluster

  • B. Provider-managed relational database service

  • C. Self-managed database on IaaS VMs

  • D. Object storage with lifecycle policies

Best answer: B

Explanation: A managed relational database service is the best architectural fit when the application needs SQL and the requirement emphasizes reduced operations work. The provider handles much of the platform maintenance, such as patching options, backup features, and availability mechanisms.

The core concept is choosing the database deployment model that aligns with operational responsibility. A provider-managed relational database, often delivered as DBaaS or PaaS, keeps SQL capabilities while shifting much of the database platform administration to the cloud provider. The customer still manages schema design, access, data protection choices, and application connectivity, but does not need to maintain database hosts like a traditional IaaS deployment.

This is different from running a database on VMs or in containers, where the team remains responsible for more OS, runtime, patching, backup, and HA design work. Object storage is not a relational database engine.

  • IaaS database leaves the team responsible for much more server and database administration.
  • Containerized database can be portable, but it does not minimize operational ownership by itself.
  • Object storage is useful for unstructured data, not for a relational SQL database workload.
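As a concrete, provider-specific sketch of option B, the call below provisions a managed relational database with automated backups and a standby replica, using AWS RDS via boto3 as an example. The identifiers, engine, and sizes are illustrative assumptions; other providers expose equivalent options.

```python
# Example of provisioning a managed relational database where the provider
# handles backups and failover; values are hypothetical.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="customer-portal-db",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,             # GiB
    MasterUsername="portal_admin",
    ManageMasterUserPassword=True,    # provider-managed secret (newer SDK versions)
    MultiAZ=True,                     # provider-managed standby and failover
    BackupRetentionPeriod=7,          # automated backups, in days
)
```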

Question 10

Topic: Cloud Architecture

A company migrated a critical VM to an on-premises private cloud. During hypervisor patching, the physical host rebooted and the VM stayed offline until an administrator manually powered it on after the host returned. The management console shows the host is configured as stand-alone, and the VM uses only local host storage. The requirement is to survive a single host failure with centralized management. What is the best root cause or next fix?

Options:

  • A. Add a NAT gateway for outbound connectivity

  • B. Move the VM to a hypervisor cluster with shared or replicated storage

  • C. Configure a guest OS firewall exception

  • D. Increase the VM CPU and memory reservations

Best answer: B

Explanation: The outage occurred because the VM was running on a stand-alone virtualization host with local storage. Stand-alone virtualization can run VMs, but it does not provide cluster-level high availability or automated restart on another host when the physical host fails or reboots.

The core concept is the difference between stand-alone virtualization and clustering. A stand-alone hypervisor is managed independently and typically ties VM availability to that one physical host, especially when the VM disks are on local storage. A cluster groups multiple hosts under centralized management and can support HA functions such as automated VM restart or failover when a host becomes unavailable, assuming storage and networking are designed to allow another host to run the workload. The stated requirement is not simply to run a VM; it is to survive a single host failure. That requires a clustered virtualization design, usually with shared or replicated storage rather than host-local-only disks. Resource tuning or network changes would not address the missing availability mechanism.

  • Resource reservations may improve performance predictability, but they do not let a VM restart on another physical host.
  • Guest firewall changes affect traffic allowed by the VM OS, not hypervisor host failure recovery.
  • Outbound NAT solves external connectivity issues, not high availability for a VM during host maintenance.

Continue with full practice

Use the CompTIA Cloud+ CV0-004 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try CompTIA Cloud+ CV0-004 on the web: View the CompTIA Cloud+ CV0-004 Practice Test.

Free review resource

Read the CompTIA Cloud+ CV0-004 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026