CompTIA Cloud+ CV0-004 Practice Test

Prepare for CompTIA Cloud+ (CV0-004) in IT Mastery with free sample questions, a full-length diagnostic, topic drills, and timed practice covering cloud architecture, deployment, operations, security, DevOps fundamentals, and troubleshooting scenarios, all with detailed explanations.

CompTIA Cloud+ (CV0-004) is CompTIA’s vendor-neutral cloud-infrastructure certification for candidates who work across cloud architecture, deployment, operations, security, DevOps fundamentals, and troubleshooting.

IT Mastery practice for Cloud+ CV0-004 is live now. Use this page to start the web simulator, review the exam snapshot, work through 24 public sample questions, and continue into full IT Mastery practice with the same IT Mastery account on web, iOS, iPadOS, macOS, or Android.

Interactive Practice Center

Start a practice session for CompTIA Cloud+ (CV0-004) below. For the best experience, open the full app in a new tab and navigate with swipes, gestures, or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.

Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS or IT Mastery app on Google Play (Android) and use the same IT Mastery account across web and mobile.

Free diagnostic: Try the 90-question CompTIA Cloud+ full-length practice exam before subscribing. Use it to see where your misses cluster across architecture, deployment, operations, security, DevOps fundamentals, and cloud troubleshooting.

What this CV0-004 practice page gives you

  • a direct route into IT Mastery practice for CompTIA Cloud+
  • 24 on-page sample questions with detailed explanations
  • topic drills and mixed sets across cloud architecture, deployment, operations, security, DevOps fundamentals, and troubleshooting
  • a clear free-preview path before you subscribe
  • the same IT Mastery account across web and mobile

Who CV0-004 is for

  • cloud administrators, systems administrators, and cloud engineers who need vendor-neutral cloud operations coverage
  • candidates comparing broad cloud operations with AWS, Azure, GCP, Terraform, or Security+ routes
  • teams that need multi-cloud language around architecture, deployment, operations, security, DevOps, and troubleshooting

CV0-004 exam snapshot

  • Vendor: CompTIA
  • Official exam name: CompTIA Cloud+ (V4)
  • Exam code: CV0-004
  • Launch date shown by CompTIA: September 24, 2024
  • Question count shown by CompTIA: maximum of 90, including multiple-choice and performance-based questions
  • Exam time shown by CompTIA: 90 minutes
  • Passing score shown by CompTIA: 750 on a 100-900 scale
  • Recommended experience shown by CompTIA: 2-3 years of hands-on experience as a systems administrator or cloud engineer
  • Current IT Mastery status: live practice available

Cloud+ questions usually reward the option that keeps cloud infrastructure secure, recoverable, observable, automated, and operationally realistic across more than one platform.

Topic coverage for CV0-004

Domain | Weight
Cloud architecture | 23%
Deployment | 19%
Operations | 17%
Security | 19%
DevOps fundamentals | 10%
Troubleshooting | 12%

CV0-004 cloud-decision filters

Cloud+ questions usually reward operationally realistic choices that work across vendor platforms.

Scenario signal | First check | Strong answer usually… | Weak answer usually…
A workload must be resilient | Availability and recovery objective | Chooses redundancy, backups, replication, failover, and testing that match RTO/RPO | Adds capacity without recovery design
Costs increase unexpectedly | Utilization and commitment model | Checks rightsizing, autoscaling, storage tier, reserved/spot fit, and idle resources | Turns off monitoring or removes redundancy blindly
Deployment fails repeatedly | Pipeline, image, config, and dependency path | Checks IaC/templates, versioning, permissions, dependencies, and rollback | Manually changes production to get unstuck
A cloud account is over-permissioned | IAM and least privilege | Applies scoped roles, MFA, secrets control, and audit logging | Shares admin credentials temporarily
Observability is weak | Logs, metrics, traces, and alerts | Defines signals and thresholds tied to service health | Waits for users to report outages
A multi-cloud choice is proposed | Portability and operational burden | Balances resilience, compliance, skills, cost, tooling, and data movement | Chooses multi-cloud because it sounds safer

CV0-004 readiness map

Domain | What the exam tests | What IT Mastery practice should force | Common trap
Cloud architecture | Whether you can design for availability, scalability, resiliency, and cost | Tie architecture to workload requirements | Choosing maximum redundancy without trade-off
Deployment | Whether builds, templates, images, CI/CD, and rollout controls are understood | Prefer repeatable, versioned, rollback-ready deployment | Making manual production fixes
Operations | Whether monitoring, backup, capacity, lifecycle, and continuity are practical | Connect operations to service reliability | Treating cloud as self-managing
Security | Whether IAM, data protection, secrets, network controls, and compliance are applied | Enforce least privilege and evidence | Over-permissioning for convenience
DevOps fundamentals | Whether automation supports safe delivery | Use automation with validation and control | Automating an unsafe process
Troubleshooting | Whether cloud symptoms are isolated across app, network, IAM, and platform layers | Diagnose dependencies before remediation | Blaming the provider first

How to use the CV0-004 simulator efficiently

  1. Start with architecture and deployment drills so you can separate vendor-neutral design principles from one-provider memorization.
  2. Review every miss until you can explain why the best answer fits security, availability, automation, cost, observability, or troubleshooting constraints.
  3. Move into mixed sets once you can switch between operations, security, DevOps, and troubleshooting without losing the sequence of work.
  4. Finish with timed runs so the 90-minute pace feels normal before test day.

Final 7-day CV0-004 practice sequence

Timing | Practice focus | What to review after the set
Days 7-5 | One 90-question diagnostic plus drills in weak Cloud+ domains | Whether misses came from architecture, deployment, operations, security, DevOps, or troubleshooting
Days 4-3 | Mixed cloud operations and troubleshooting scenarios | Whether you can explain the reliability, security, cost, or automation trade-off
Days 2-1 | Light review of IAM, backup/DR, monitoring, autoscaling, IaC, deployment rollback, and shared responsibility | Only recurring traps; avoid diving into one vendor’s proprietary details
Exam day | Short warm-up if useful | Choose the vendor-neutral cloud action that is secure, observable, recoverable, and practical

When CV0-004 practice is enough

If you can score above 75% on several unseen mixed attempts and explain the cloud trade-off behind each miss, you are likely ready. Repeating familiar cloud scenarios can overtrain pattern recognition; the useful skill is deciding from requirements, constraints, and operational evidence.

Focused sample questions

Use these child pages when you want focused IT Mastery practice before returning to mixed sets and timed mocks.

Free study resources

Need concept review first? Read the CompTIA Cloud+ CV0-004 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks, topic drills, and full IT Mastery practice.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full CV0-004 practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

24 CV0-004 sample questions with detailed explanations

These are original IT Mastery practice questions aligned to Cloud+ architecture, deployment, security, operations, troubleshooting, and cloud service-management decisions. They are not CompTIA exam questions and are not copied from any exam sponsor. Use them to check readiness here, then continue in IT Mastery with mixed sets, topic drills, and timed mocks.

Question 1

Topic: Domain 2: Deployment

A company is preparing runbooks for two deployment migrations:

  • An inventory application runs on virtual machines in the company data center and must be moved to IaaS virtual machines in a public cloud VPC.
  • A static web application uses object storage and a CDN in Public Cloud A and must be moved to equivalent services in Public Cloud B.

Which TWO migration paths match the source and target environments? (Select TWO.)

  • A. On-premises-to-cloud for the inventory application
  • B. Cloud-to-cloud for the static web application
  • C. Cloud-to-on-premises for the inventory application
  • D. On-premises-to-cloud for the static web application
  • E. Cloud-to-on-premises for the static web application
  • F. Cloud-to-cloud for the inventory application

Best answers: A and B

Explanation: Migration path selection is based on where the workload starts and where it is going. The VM-based inventory application moves from an on-premises data center to public cloud IaaS, while the static web application moves from one cloud provider to another. The core concept is matching the migration path to the source and target environments. A workload that begins in a private data center and is moved into public cloud infrastructure is an on-premises-to-cloud migration. A workload that already runs in one cloud environment and is moved to another cloud environment is a cloud-to-cloud migration. The service type, such as VM, object storage, or CDN, helps describe the implementation, but it does not change the basic migration path. The closest traps reverse the direction or misidentify the source environment.
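
The source/target matching above can be sketched as a tiny classifier; the environment labels are illustrative, not exam terminology:

```python
def migration_path(source: str, target: str) -> str:
    """Classify a migration by where the workload starts and ends.
    Environments: 'on_prem' or 'cloud' (any public or private cloud)."""
    if source == "on_prem" and target == "cloud":
        return "on-premises-to-cloud"
    if source == "cloud" and target == "cloud":
        return "cloud-to-cloud"
    if source == "cloud" and target == "on_prem":
        return "cloud-to-on-premises"
    raise ValueError(f"unsupported path: {source} -> {target}")

# Inventory app: company data center -> public cloud IaaS VMs
print(migration_path("on_prem", "cloud"))  # on-premises-to-cloud
# Static web app: Public Cloud A -> Public Cloud B
print(migration_path("cloud", "cloud"))    # cloud-to-cloud
```

Note that the service type (VM, object storage, CDN) never appears in the classification; only the source and target environments matter.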


Question 2

Topic: Domain 5: DevOps Fundamentals

A team deploys a containerized web application to test, staging, and production. Recent releases failed because engineers ran different build commands and manually changed environment settings before each deployment. Which approach BEST uses automation to improve CI/CD pipeline consistency and repeatability?

  • A. Require a senior engineer to run each deployment manually
  • B. Create separate deployment scripts for each engineer
  • C. Store pipeline and IaC definitions in version control
  • D. Deploy directly to production after image builds complete

Best answer: C

Explanation: Automation improves CI/CD consistency by codifying the steps that build, test, provision, and deploy an application. Keeping pipeline and IaC definitions in version control makes those steps repeatable, reviewable, and reusable across environments. The core concept is pipeline automation as code. When CI/CD stages and infrastructure configuration are stored in version control, each run uses the same defined process instead of relying on memory or local engineer habits. This reduces configuration drift, makes changes auditable, and supports repeatable deployments to test, staging, and production. It also allows teams to roll back or review pipeline changes like application code. Manual deployment steps may still require approvals, but they should not be the main mechanism for ensuring repeatability. The key takeaway is that automation makes the deployment process deterministic and consistent across runs.


Question 3

Topic: Domain 1: Cloud Architecture

A team is containerizing a document-processing application. The application writes uploaded files to /app/uploads, and those files must remain available if a container is restarted, rescheduled to another host in the cluster, or replaced during an image update. Which implementation best meets this requirement?

  • A. Mount a persistent volume at /app/uploads
  • B. Store uploads in the container writable layer
  • C. Increase the container restart policy retries
  • D. Bake the uploads directory into the image

Best answer: A

Explanation: Container writable layers are ephemeral and tied to a specific container instance. Mounting a persistent volume at the upload path keeps application data outside the container, allowing it to survive restarts, replacements, and rescheduling. The core concept is separating persistent application data from the container lifecycle. Containers are designed to be replaceable, so data written only inside the container’s writable layer can be lost when the container is removed or recreated. A persistent volume provides durable storage that can be mounted into the container at the path the application already uses, such as /app/uploads. This allows new or restarted container instances to access the same data without changing the image. Restart policies can improve availability, but they do not make container-local data durable. The key takeaway is to use persistent volumes for state that must survive container replacement.
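
The lifecycle difference can be shown with a toy model (not a real container runtime): the writable layer dies with each container instance, while a mounted volume outlives replacements:

```python
class Volume:
    """Stand-in for a persistent volume that outlives containers."""
    def __init__(self):
        self.files = {}

class Container:
    def __init__(self, volume=None):
        self.writable_layer = {}  # ephemeral, per-instance
        self.volume = volume      # shared, survives replacement

    def write_upload(self, name, data):
        target = self.volume.files if self.volume else self.writable_layer
        target[name] = data

    def read_upload(self, name):
        source = self.volume.files if self.volume else self.writable_layer
        return source.get(name)

uploads = Volume()
c1 = Container(volume=uploads)
c1.write_upload("scan.pdf", b"data")
c2 = Container(volume=uploads)            # container replaced during an update
print(c2.read_upload("scan.pdf"))         # data survived the replacement

c3 = Container()                          # no volume: writable layer only
c3.write_upload("scan.pdf", b"data")
c4 = Container()                          # replacement loses the data
print(c4.read_upload("scan.pdf"))         # None
```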


Question 4

Topic: Domain 3: Operations

A cloud operations team is preparing to update the base image for a group of production VMs that host a customer-facing application. The service has strict uptime requirements, and management wants confidence that the change can be reversed quickly if errors appear after rollout. Which action best manages this lifecycle change before production deployment?

  • A. Apply the image during the next maintenance window
  • B. Increase alert thresholds during the image rollout
  • C. Validate the image in a production-like environment and test rollback
  • D. Extend backup retention before updating the VMs

Best answer: C

Explanation: Lifecycle changes such as base image updates should be tested before production when uptime and rollback confidence are required. A production-like validation environment plus an explicit rollback test confirms both the change behavior and the recovery path. The core concept is pre-production lifecycle validation. For an operational change that can affect availability, the team should use a staging or test environment that closely matches production, run functional or smoke tests, and verify that rollback to the prior image or snapshot works within expectations. This reduces the chance of discovering compatibility, performance, or recovery issues only after users are impacted. A maintenance window, monitoring change, or longer backup retention can support operations, but none of them proves the updated image is safe or that rollback works quickly. The key takeaway is to test both the change and the recovery mechanism before production rollout.


Question 5

Topic: Domain 6: Troubleshooting

A cloud team reruns an IaC deployment artifact to recreate a staging environment after decommissioning old resources. The pipeline fails during validation with: Unsupported component definition: load_balancer.backendPool is deprecated; use targetGroups with schemaVersion 2024-02 or later. The artifact has not been updated in over a year. Which action best resolves the deployment issue while preserving repeatability?

  • A. Manually create the load balancer in the cloud console
  • B. Increase the pipeline timeout and retry the deployment
  • C. Disable validation checks for the staging environment
  • D. Update the artifact schema and component fields, then redeploy

Best answer: D

Explanation: The validation error identifies an outdated deployment artifact, not a transient operations issue. Updating the IaC schema and replacing deprecated fields preserves repeatability and aligns the artifact with the currently supported cloud deployment API. Deployment artifacts such as IaC templates, modules, and configuration files must be maintained as provider-neutral schemas, APIs, and component definitions evolve. In this scenario, validation fails before provisioning because the artifact still uses a deprecated load balancer field. The best operational response is to update the artifact to the supported schema, replace the deprecated component definition, validate it, and redeploy through the pipeline. This keeps the environment reproducible and avoids configuration drift. Manual console changes may work temporarily, but they bypass the source-controlled artifact and can cause future deployments to fail again.
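
A minimal sketch of the artifact fix: the field names follow the error message in the scenario, but the exact transform (wrapping the old pool members in target groups) is an illustrative assumption:

```python
def migrate_artifact(artifact: dict) -> dict:
    """Replace the deprecated load_balancer.backendPool field with
    targetGroups and bump the schema version (hypothetical shape)."""
    updated = dict(artifact)
    lb = dict(updated.get("load_balancer", {}))
    if "backendPool" in lb:
        lb["targetGroups"] = [{"targets": lb.pop("backendPool")}]
    updated["load_balancer"] = lb
    updated["schemaVersion"] = "2024-02"
    return updated

old = {"schemaVersion": "2022-07",
       "load_balancer": {"backendPool": ["vm-a", "vm-b"]}}
new = migrate_artifact(old)
print("backendPool" in new["load_balancer"])  # False: deprecated field removed
print(new["schemaVersion"])                   # 2024-02
```

Because the change lives in the source-controlled artifact rather than the console, the next pipeline run redeploys the same corrected definition.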


Question 6

Topic: Domain 4: Security

A healthcare company stores patient records in a cloud file repository. Compliance requires the company to prevent users from emailing, downloading, or sharing files that contain regulated patient identifiers outside approved domains. Which security control should the cloud administrator implement?

  • A. Network intrusion prevention system
  • B. Data loss prevention policy
  • C. Web application firewall
  • D. DDoS protection service

Best answer: B

Explanation: Data loss prevention is the right control when the requirement is to stop sensitive information from leaving approved locations or channels. In this scenario, the key need is preventing unauthorized disclosure or movement of patient identifiers, not blocking attacks against a network or application endpoint. DLP focuses on identifying sensitive data, such as regulated identifiers, and enforcing policies that prevent unauthorized transmission, sharing, copying, or downloading. In cloud environments, DLP can inspect content and metadata across SaaS, storage, endpoints, email, and collaboration tools, then allow, quarantine, encrypt, alert, or block based on policy. This directly matches a compliance requirement to prevent patient records from being shared outside approved domains. Network and application security controls may reduce attack exposure, but they do not primarily classify sensitive content and stop data exfiltration through user actions.
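
A DLP decision can be sketched as content inspection plus a channel policy; the identifier pattern and approved domain below are invented for illustration:

```python
import re

# Hypothetical regulated-identifier pattern and approved domain list.
PATIENT_ID = re.compile(r"\bMRN-\d{6}\b")
APPROVED_DOMAINS = {"hospital.example"}

def dlp_decision(content: str, recipient: str) -> str:
    """Return 'allow' or 'block' for an outbound share or email."""
    domain = recipient.split("@")[-1]
    if PATIENT_ID.search(content) and domain not in APPROVED_DOMAINS:
        return "block"  # sensitive content leaving an approved domain
    return "allow"

print(dlp_decision("Results for MRN-123456", "dr@hospital.example"))  # allow
print(dlp_decision("Results for MRN-123456", "me@mail.example"))      # block
print(dlp_decision("Meeting notes", "me@mail.example"))               # allow
```

Real DLP platforms classify far more than a regex can, but the shape is the same: identify sensitive content, then enforce policy on where it may go.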


Question 7

Topic: Domain 2: Deployment

A company needs to move an aging three-tier business application from an on-premises virtualization cluster to the cloud before the data center lease expires. The application uses fixed OS versions, local configuration files, and a self-managed database. The business requires the fastest migration with minimal application code or database schema changes. Which migration approach best fits these requirements?

  • A. Rehost the application on cloud VMs
  • B. Refactor the application into microservices
  • C. Replace the application with SaaS
  • D. Replatform the database to a managed service

Best answer: A

Explanation: Rehost, often called lift and shift, is the best fit when the priority is moving quickly with minimal application change. It keeps the workload architecture largely intact while placing it on cloud infrastructure such as virtual machines. The core concept is matching the migration strategy to the amount of change the workload can tolerate. In this scenario, the application depends on fixed OS versions, local configuration files, and a self-managed database, and the business wants the fastest move before a lease deadline. Rehosting preserves the existing application stack and moves it to cloud IaaS with little or no code or schema change. It may not fully optimize cost or cloud-native capabilities immediately, but it reduces migration risk and time compared with redesigning or replacing the application. Refactoring or replacing can provide long-term benefits, but those approaches require more application change and planning than the stated requirement allows.


Question 8

Topic: Domain 5: DevOps Fundamentals

A company is migrating an order platform from a private cloud monolith to public cloud microservices. During the transition, order status updates must be delivered asynchronously, services must remain loosely coupled, and cloud services must not directly access the monolith’s database. Which proposed integration pattern should the team reject?

  • A. Publish order events to a message broker
  • B. Use a queue with retry and dead-letter handling
  • C. Use a change publisher to emit domain events
  • D. Connect microservices directly to the monolith database

Best answer: D

Explanation: The rejected pattern is direct database integration from the new microservices to the monolith. It conflicts with the requirements for asynchronous communication, loose coupling, and no direct database access. System integration patterns must match the coupling, communication, and data ownership requirements. In this migration, the cloud microservices should receive order status changes through an asynchronous interface, such as events or queues, rather than depending on the monolith’s internal database schema. Direct database access couples services to tables, permissions, and schema changes owned by another application, making migration and versioning harder. The key takeaway is to integrate through owned interfaces or events when loose coupling is required.


Question 9

Topic: Domain 1: Cloud Architecture

A company is migrating a customer portal to a public cloud. The application servers will run 24x7, have predictable utilization, and are expected to remain at the same capacity for at least three years. The workload does not require physical host isolation or bring-your-own-license placement, and instances must not be interrupted. Which billing model should the cloud engineer recommend?

  • A. Dedicated host
  • B. Reserved resource
  • C. Pay-as-you-go
  • D. Spot instance

Best answer: B

Explanation: Reserved resources are best for predictable, continuously running workloads with a known capacity commitment. The stem rules out host isolation and interruption tolerance, making a reservation more cost-effective than pay-as-you-go while avoiding spot-instance risk. The core concept is matching the billing model to workload predictability and constraints. A reserved resource provides discounted pricing in exchange for committing to a term or capacity level, which fits servers running 24x7 for several years. Dedicated hosts are mainly justified when the organization needs physical host isolation, specific licensing placement, or compliance-driven tenancy controls. Pay-as-you-go is flexible but usually costs more for stable long-term use. Spot instances can be cheaper, but they are interruptible and unsuitable when the application must remain continuously available.
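
The cost trade-off can be checked with a quick calculation; the hourly rates and discount below are illustrative assumptions, not any provider's real pricing:

```python
HOURS_PER_MONTH = 730  # common cloud billing approximation

def total_cost(hourly_rate: float, months: int, upfront: float = 0.0) -> float:
    """Total compute cost for a 24x7 workload over the given term."""
    return upfront + hourly_rate * HOURS_PER_MONTH * months

payg = total_cost(0.10, 36)      # hypothetical pay-as-you-go rate, 3 years
reserved = total_cost(0.06, 36)  # hypothetical ~40% reservation discount
print(round(payg), round(reserved))
# Reserved wins for steady 24x7 use; PAYG only pays off when usage is bursty.
```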


Question 10

Topic: Domain 3: Operations

A company is migrating a customer order application to a public cloud. The database must lose no more than 15 minutes of committed transactions, and service must resume within 1 hour after a regional outage. The business also wants to avoid paying for a fully active duplicate environment. Which backup and recovery approach BEST meets these requirements?

  • A. Nightly full backups to archive storage
  • B. Continuous replication to a warm standby environment
  • C. Hourly snapshots retained in the primary region
  • D. Manual export of database dumps before releases

Best answer: B

Explanation: RPO defines acceptable data loss, while RTO defines acceptable recovery time. Continuous replication to a warm standby is the best fit because it limits data loss to minutes and enables faster recovery than restoring backups from scratch. The core concept is matching recovery design to RTO and RPO. A 15-minute RPO requires frequent or continuous protection of committed transactions; nightly or manual backups would allow too much data loss. A 1-hour RTO requires a recovery target that is already partially prepared, such as a warm standby with replicated data and deployable application components. This avoids the cost of a fully active duplicate site while still recovering faster than a cold restore. The key takeaway is that tighter RPO and RTO targets usually require replication and pre-provisioned recovery capacity, not only periodic backups.
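
The RPO/RTO filter can be applied mechanically; the worst-case numbers per strategy below are illustrative assumptions:

```python
def meets_objectives(strategy, rpo_minutes, rto_minutes):
    """Acceptable only if worst-case data loss <= RPO
    and worst-case recovery time <= RTO."""
    return (strategy["max_data_loss_min"] <= rpo_minutes
            and strategy["recovery_time_min"] <= rto_minutes)

strategies = {
    "nightly_full_backup":  {"max_data_loss_min": 24 * 60, "recovery_time_min": 240},
    "hourly_snapshots":     {"max_data_loss_min": 60,      "recovery_time_min": 120},
    "warm_standby_replica": {"max_data_loss_min": 5,       "recovery_time_min": 30},
}

for name, s in strategies.items():
    print(name, meets_objectives(s, rpo_minutes=15, rto_minutes=60))
# Only the warm standby satisfies a 15-minute RPO and a 1-hour RTO.
```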


Question 11

Topic: Domain 6: Troubleshooting

A cloud engineer is reviewing a security ticket for a PaaS database console. The user successfully signs in through SSO with MFA, but an attempt to export a backup returns 403 AccessDenied. Audit logs show the user’s role lacks the database.backupExport permission, and no backup was created. Which security issue classification BEST matches these symptoms?

  • A. Authentication failure
  • B. Privilege escalation
  • C. Authorization failure
  • D. Unauthorized access

Best answer: C

Explanation: This is an authorization failure because the user authenticated successfully but was denied a specific permission. The 403 AccessDenied result and missing role permission point to an access control decision, not a sign-in problem or a successful breach. Authentication verifies identity, while authorization determines what an authenticated identity can do. In this scenario, SSO and MFA succeeded, so the cloud platform knows who the user is. The failed backup export occurs because the assigned role does not include the required permission, and the audit log shows the action was blocked. A 403 AccessDenied response commonly indicates that the request was understood but not allowed by policy. The key takeaway is to separate identity proof from permission enforcement when interpreting cloud security symptoms.
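
The identity-versus-permission split can be captured as a small triage function; the three yes/no signals are a simplification for illustration:

```python
def classify(auth_ok: bool, action_allowed: bool, resource_accessed: bool) -> str:
    """Toy IAM triage: did sign-in succeed, was the action permitted
    by policy, and did the actor actually reach the resource?"""
    if not auth_ok:
        return "authentication failure"
    if resource_accessed and not action_allowed:
        return "unauthorized access"     # policy bypassed: investigate
    if not action_allowed:
        return "authorization failure"   # 403: known identity, denied action
    return "no issue"

# SSO+MFA succeeded, export denied by role, no backup created:
print(classify(auth_ok=True, action_allowed=False, resource_accessed=False))
# -> authorization failure
```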


Question 12

Topic: Domain 4: Security

A security engineer reviews firewall and security group rules for a public web application. The architecture standard states that only the load balancer should accept internet traffic, application instances must be administered through a bastion subnet, and the database must remain private. Which finding is an unnecessary open port exposure requiring review or remediation?

  • A. TCP 443 from the internet to the load balancer
  • B. TCP 22 from the internet to application instances
  • C. TCP 22 from the bastion subnet to application instances
  • D. TCP 5432 from the application subnet to the database

Best answer: B

Explanation: The exposed administrative port is the key security finding. Because the standard requires administration through a bastion subnet, allowing SSH directly from the internet to application instances is unnecessary and should be reviewed or remediated. Unnecessary open ports increase the attack surface and often violate hardening baselines, CIS-style benchmarks, and Zero Trust principles. In this scenario, public access should terminate only at the load balancer. Administrative access to application instances should be restricted to the bastion subnet, and database access should be limited to the application subnet. Internet-facing SSH on the application tier creates avoidable exposure for brute-force attempts, credential attacks, and unauthorized management access. The key takeaway is to compare open ports against the approved network path and remove or restrict anything not required for the workload.
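
Comparing rules against the approved path is essentially a set-difference check; the (source, target, port) model below is a simplification of real security group rules:

```python
# Approved paths from the architecture standard (illustrative model):
ALLOWED = {
    ("internet", "load_balancer", 443),
    ("bastion",  "app",           22),
    ("app",      "database",      5432),
}

def review(rules):
    """Flag any rule that is not on an approved source/target/port path."""
    return [r for r in rules if r not in ALLOWED]

rules = [
    ("internet", "load_balancer", 443),
    ("internet", "app", 22),       # internet-facing SSH: not approved
    ("bastion",  "app", 22),
    ("app",      "database", 5432),
]
print(review(rules))  # [('internet', 'app', 22)]
```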


Question 13

Topic: Domain 2: Deployment

A cloud engineer is provisioning a worker pool by using IaC variables. The requirements are: exactly three VMs, one per availability zone; at least 24 total vCPU and 96 GiB total RAM; estimated compute cost no more than 1,000 USD per month; and no unnecessary overprovisioning.

Available sizes:

  • medium: 4 vCPU, 16 GiB RAM, 160 USD/month
  • large: 8 vCPU, 32 GiB RAM, 300 USD/month
  • xlarge: 16 vCPU, 64 GiB RAM, 560 USD/month

Which variable set should be used?

  • A. instance_count = 3, size = medium
  • B. instance_count = 3, size = xlarge
  • C. instance_count = 6, size = medium
  • D. instance_count = 3, size = large

Best answer: D

Explanation: The large size is the smallest VM size that satisfies the total CPU and memory requirements with exactly three instances. It also stays within the monthly cost limit, avoiding both underprovisioning and unnecessary overprovisioning. Provisioning from requirements means matching the stated capacity, placement, and cost constraints without adding excess resources. With exactly three VMs required, each VM must contribute enough capacity so the total reaches at least 24 vCPU and 96 GiB RAM. Three large VMs provide 8 vCPU and 32 GiB each, totaling 24 vCPU and 96 GiB, and the monthly cost is 3 × 300 = 900 USD. That meets the budget and avoids the extra capacity and cost of xlarge instances. The key takeaway is to size resources against all constraints, not just one metric such as cost or CPU.
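
The sizing check can be verified directly from the numbers in the question:

```python
SIZES = {  # per-VM (vCPU, RAM GiB, USD/month) from the question
    "medium": (4, 16, 160),
    "large":  (8, 32, 300),
    "xlarge": (16, 64, 560),
}

def fits(size, count, min_vcpu=24, min_ram=96, budget=1000):
    """True when <count> VMs of <size> meet capacity without busting budget."""
    vcpu, ram, cost = SIZES[size]
    return (vcpu * count >= min_vcpu
            and ram * count >= min_ram
            and cost * count <= budget)

# Exactly three VMs, one per availability zone:
for size in SIZES:
    print(size, fits(size, count=3))
# medium fails capacity (12 vCPU), xlarge fails budget (1680 USD),
# large fits exactly: 24 vCPU, 96 GiB, 900 USD.
```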


Question 14

Topic: Domain 5: DevOps Fundamentals

A team deployed a new checkout integration that must keep order processing loosely coupled from inventory, billing, and shipping. After deployment, checkout latency spikes and orders fail whenever the shipping service is slow. Logs show the order service making synchronous HTTP calls to each downstream service before returning a response.

What is the best next fix?

  • A. Increase the HTTP client timeout
  • B. Move all services to a shared database
  • C. Replace REST calls with WebSockets
  • D. Publish checkout events to a message broker

Best answer: D

Explanation: The stated requirement is loose coupling, but the symptom shows synchronous downstream calls blocking checkout. Publishing events through a broker or pub/sub pattern lets each consumer process independently without making checkout depend on shipping latency. The core concept is matching the integration pattern to the coupling and communication requirement. Synchronous REST or RPC-style calls are appropriate when the caller needs an immediate response from the callee, but they create runtime coupling: slow or unavailable downstream services can make the caller fail. For a checkout workflow that only needs to notify inventory, billing, and shipping, an asynchronous message broker or pub/sub event pattern is a better fit because producers and consumers do not need to be available at the same time. The key takeaway is to avoid synchronous request/response integration when the requirement is loosely coupled fan-out communication.
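
The decoupling effect can be sketched with a queue standing in for the broker; `queue.Queue` is a toy substitute, not a production messaging system:

```python
import queue
import threading
import time

events = queue.Queue()  # stand-in for a message broker topic

def checkout(order_id):
    """Publish and return immediately; checkout no longer waits on consumers."""
    events.put({"order": order_id, "type": "checkout.completed"})
    return "order accepted"

def shipping_worker():
    while True:
        events.get()         # consumes at its own pace
        time.sleep(0.05)     # simulate a slow shipping service
        events.task_done()

threading.Thread(target=shipping_worker, daemon=True).start()

start = time.monotonic()
print(checkout("A-1001"))               # returns without waiting for shipping
print(time.monotonic() - start < 0.05)  # True: latency decoupled from consumer
```

With synchronous HTTP calls, checkout latency would include the 0.05 s shipping delay; with the queue, the producer and consumer run independently.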


Question 15

Topic: Domain 1: Cloud Architecture

A healthcare company is hardening its private cloud virtualization cluster for a regulated database application. The VM disks must be accessible from multiple hypervisor hosts for HA failover and live migration, and the database requires low-latency block-level storage. Encryption and access controls will be centrally managed on the storage platform. Which storage architecture is the best fit?

  • A. Local NVMe storage in each hypervisor
  • B. NAS file shares mounted by each hypervisor
  • C. Object storage buckets for VM disks
  • D. SAN storage presenting shared LUNs

Best answer: D

Explanation: The key requirement is shared block-level storage for a virtualization cluster. A SAN can present shared LUNs to multiple hypervisors while supporting centralized controls, making it the best match for HA failover and live migration of database VMs. SAN storage is commonly used for virtualization workloads that need block-level access, consistent performance, and shared availability across multiple hosts. In this scenario, local storage would tie VM disks to a single hypervisor, limiting failover and migration. NAS can provide shared file access, but the stated workload specifically requires block storage for database VM disks. Centralized encryption and access controls can be applied on enterprise SAN platforms without sacrificing the access model required by the cluster. The deciding factor is the combination of shared host access and block-level VM storage.


Question 16

Topic: Domain 3: Operations

A cloud team is planning lifecycle changes for a production PaaS database used by a highly available web application. The change policy defines a minor update as a low-risk, backward-compatible change that needs only targeted validation. A major update may affect compatibility, data format, topology, or rollback and requires staging and regression testing. Which planned change should be classified as a major update?

  • A. Upgrade the database engine across major versions
  • B. Apply a provider security patch to the current engine version
  • C. Increase automated backup retention from 14 to 30 days
  • D. Rotate the database TLS certificate before expiration

Best answer: A

Explanation: The major-version database upgrade is the highest-risk lifecycle change because it can introduce compatibility and data-format changes. It should be validated in staging with regression testing before production rollout. Major and minor updates are distinguished by change risk and testing scope. Minor updates are usually backward-compatible operational changes, such as patching within the same version, adjusting retention, or rotating a certificate with a planned process. A major update changes a core platform component or behavior and can affect application drivers, SQL compatibility, stored procedures, data format, backup/restore behavior, or rollback plans. In this scenario, upgrading the database engine across major versions is the only change that clearly crosses that risk boundary and requires staging validation plus broader regression testing. The key takeaway is to classify updates by potential production impact, not just by how common the task is.
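One hedged way to automate part of this classification is a semver-style check that treats a change in the leading version component as a major update. This is only a sketch; a real change policy weighs compatibility and rollback impact, not just version numbers.

```python
# Minimal sketch: flag a database engine upgrade as a major update when
# the leading version component changes. Version strings are examples.

def is_major_upgrade(current: str, target: str) -> bool:
    return current.split(".")[0] != target.split(".")[0]

print(is_major_upgrade("14.11", "16.2"))   # True  -> staging + regression testing
print(is_major_upgrade("14.11", "14.12"))  # False -> targeted validation only
```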


Question 17

Topic: Domain 6: Troubleshooting

During a blue-green migration of a web application, a cloud engineer signs in through SSO with MFA and runs an IaC plan against the new environment. The identity provider shows a successful login, but the deployment fails with "403: not permitted to create load balancer in this VPC". No unexpected resources are created. Which security issue do these symptoms indicate?

  • A. Authentication failure
  • B. Privilege escalation
  • C. Unauthorized access
  • D. Authorization failure

Best answer: D

Explanation: This is an authorization failure because the identity was verified successfully, but the requested cloud action was denied. The 403 permission error points to inadequate privileges for creating the load balancer, not a failed sign-in. Authentication verifies who the user or workload is, while authorization determines what that identity is allowed to do. In this scenario, SSO and MFA succeeded, so authentication is not the problem. The deployment reaches the cloud API but receives a 403 for creating a load balancer in a specific VPC, which means the authenticated identity lacks the required permission or role assignment. Privilege escalation would involve gaining higher permissions than intended, and unauthorized access would involve accessing resources without approval or valid entitlement. The key takeaway is that successful login followed by a permission denial is authorization troubleshooting evidence.
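The troubleshooting distinction can be expressed as a few lines of code. This is a hypothetical sketch; the event field names are illustrative, not any provider's log schema.

```python
# Hypothetical sketch: a verified identity plus a 403 denial points at
# authorization, not authentication. Field names are illustrative.

def classify_access_failure(event: dict) -> str:
    if not event.get("signin_success"):
        return "authentication failure"   # identity could not be proven
    if event.get("api_status") == 403:
        return "authorization failure"    # identity proven, action denied
    return "other"

# The scenario: SSO and MFA succeeded, then the IaC deploy received a 403.
event = {"signin_success": True, "mfa_success": True, "api_status": 403}
print(classify_access_failure(event))  # authorization failure
```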


Question 18

Topic: Domain 4: Security

A company uses federated IAM for a SaaS analytics platform. Users sign in through a corporate identity provider with MFA. After sign-in, the platform evaluates group membership to grant read-only access to specific storage datasets, and all dataset access is written to audit logs. Which action represents authorization in this scenario?

  • A. Granting read-only dataset access based on group membership
  • B. Validating the user’s MFA challenge
  • C. Writing dataset access events to audit logs
  • D. Federating sign-in through the corporate identity provider

Best answer: A

Explanation: Authorization is the IAM function that determines permissions after identity is established. In this scenario, evaluating group membership and granting read-only access defines what the user can do with the datasets. Authentication verifies identity, authorization grants or denies access, and accounting records activity for audit and traceability. The user’s federated sign-in and MFA challenge are authentication steps because they prove who the user is. Evaluating group membership to grant read-only dataset access is authorization because it maps the verified identity to allowed actions and resources. Writing access events to audit logs is accounting because it tracks what happened after access was attempted or used. The key distinction is: prove identity first, then enforce permissions, then record activity.
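The prove-then-enforce-then-record sequence can be sketched as three small functions. User, group, and dataset names here are hypothetical.

```python
# A small sketch separating the three IAM functions in this scenario.
# User, group, and dataset names are hypothetical.

GROUPS = {"alice": {"analytics-readers"}}
PERMISSIONS = {"analytics-readers": {("sales-dataset", "read")}}
AUDIT_LOG = []

def authenticate(user: str, mfa_passed: bool) -> bool:
    # Authentication: prove who the user is (federated sign-in + MFA).
    return mfa_passed

def authorize(user: str, dataset: str, action: str) -> bool:
    # Authorization: map the verified identity's groups to allowed actions.
    allowed = set().union(*(PERMISSIONS.get(g, set()) for g in GROUPS.get(user, set())))
    return (dataset, action) in allowed

def account(user: str, dataset: str, action: str, granted: bool) -> None:
    # Accounting: record what happened for audit traceability.
    AUDIT_LOG.append((user, dataset, action, granted))

if authenticate("alice", mfa_passed=True):
    granted = authorize("alice", "sales-dataset", "read")  # the authorization step
    account("alice", "sales-dataset", "read", granted)
```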


Question 19

Topic: Domain 2: Deployment

A company is assessing migration options for a legacy order-processing workload. The application is still required by the business and has no approved SaaS replacement. It currently runs as a VM-based monolith that stores session state locally, but the target requirement is active-active availability across two cloud availability zones and independent scaling of the payment and inventory components.

Which migration disposition best fits these facts?

  • A. Retain the workload in its current environment
  • B. Retire the workload during the migration
  • C. Refactor the workload before or during migration
  • D. Rehost the workload unchanged to cloud VMs

Best answer: C

Explanation: The workload is still needed, so retiring it does not fit. It also cannot meet the stated active-active and independent scaling requirements without design changes, making refactor the best disposition. Migration disposition decisions match the workload’s business value and required changes. Retain means keep the workload where it is, usually because it is not ready or should not move yet. Retire means remove a workload that is no longer needed. Refactor means rework the application or architecture to meet new business or technical requirements, such as cloud-native scaling, stateless design, or multi-zone availability. In this scenario, the application is required and has no replacement, but its local session state and monolithic design conflict with the target architecture. The key takeaway is that required architectural change points to refactor, not retain, retire, or lift-and-shift.
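The disposition reasoning above can be reduced to a small decision sketch. This is hedged and simplified; real migration assessments also weigh cost, risk, and licensing.

```python
# A hedged sketch of the disposition decision described above.

def disposition(still_needed: bool, ready_to_move: bool, meets_targets_unchanged: bool) -> str:
    if not still_needed:
        return "retire"
    if not ready_to_move:
        return "retain"
    return "rehost" if meets_targets_unchanged else "refactor"

# Required workload, approved to move, but its monolithic design and local
# session state cannot meet active-active plus independent scaling unchanged.
print(disposition(still_needed=True, ready_to_move=True, meets_targets_unchanged=False))  # refactor
```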


Question 20

Topic: Domain 5: DevOps Fundamentals

A team is updating a CI/CD pipeline for a microservice that will run on a managed container platform. The deployment stage must pull a versioned, immutable artifact that includes the application code, runtime, and OS-level dependencies. The team does not want to provision or boot full VMs for each release. Which artifact should the pipeline publish?

  • A. Container image
  • B. VM image
  • C. ZIP file
  • D. Flat file

Best answer: A

Explanation: The requirement points to a container image because the workload runs on a managed container platform and needs an immutable artifact with code, runtime, and dependencies. A container image is designed to be pulled from an image registry and instantiated as containers without booting full virtual machines. Pipeline artifacts are the outputs passed from build to deployment stages. For containerized workloads, the deployable artifact is a container image, typically stored in an image registry and tagged by version or digest. It contains the application and its required runtime dependencies in a portable format that the container platform can pull and run. A VM image is also immutable, but it includes a full guest OS and is used to boot virtual machines, which the stem explicitly excludes. Archive files can carry application bits, but they do not directly provide the standardized runtime image expected by a container orchestrator.
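One way to see why a container image is "immutable" is the reference format a pipeline publishes: a human-readable version tag plus a content digest pin. The sketch below is illustrative; the registry and repository names are hypothetical, and real digests are computed over the image manifest rather than raw bytes.

```python
# Illustrative sketch of an immutable artifact reference a pipeline might
# publish: a container image pinned by version tag and content digest.
import hashlib

def image_ref(registry: str, repo: str, version: str, image_bytes: bytes) -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()
    # The tag is human-readable; the digest pin guarantees the deploy
    # stage pulls exactly the bytes the build stage produced.
    return f"{registry}/{repo}:{version}@sha256:{digest}"

print(image_ref("registry.example.com", "orders-svc", "1.4.2", b"layer-bytes"))
```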


Question 21

Topic: Domain 1: Cloud Architecture

A company must retain compressed application log bundles for 7 years after they age out of the active monitoring platform. The logs are accessed only for rare legal or audit requests, retrieval can take up to 12 hours, and the primary goal is minimizing ongoing storage cost. Which storage tier best fits these requirements?

  • A. Hot storage tier
  • B. Warm storage tier
  • C. Archive storage tier
  • D. Cold storage tier

Best answer: C

Explanation: Archive storage is the best fit for data that is retained long term, rarely accessed, and can tolerate delayed retrieval. The 12-hour recovery expectation makes slower restore acceptable in exchange for lower storage cost. Storage tiers balance access frequency, retrieval speed, and cost. Hot tiers are optimized for frequent access and low latency, while warm and cold tiers reduce cost as access becomes less frequent. Archive tiers are intended for long-term retention, compliance records, and data that is rarely read. Because the log bundles are accessed only for rare legal or audit requests and can wait up to 12 hours for retrieval, the architecture can prioritize the lowest recurring storage cost over immediate availability. The key takeaway is that archive is appropriate only when delayed recovery is acceptable; otherwise, cold or warmer tiers may be needed.
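The tier trade-off described here, access frequency versus acceptable retrieval delay, can be sketched as a selection function. The thresholds below are illustrative, not any provider's SLA boundaries.

```python
# A minimal sketch of the tier trade-off: access frequency versus
# acceptable retrieval delay. Thresholds are illustrative.

def pick_tier(accesses_per_month: float, max_retrieval_hours: float) -> str:
    if accesses_per_month >= 30:
        return "hot"       # frequent access, low latency
    if accesses_per_month >= 1:
        return "warm"      # occasional access
    # Rarely read data: archive only when slow restores are acceptable.
    return "archive" if max_retrieval_hours >= 12 else "cold"

# 7-year log bundles: rare audit access, 12-hour retrieval window.
print(pick_tier(accesses_per_month=0.01, max_retrieval_hours=12))  # archive
```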


Question 22

Topic: Domain 3: Operations

A company runs an order-processing application on IaaS VMs with a managed relational database. The business requires recovery from a full regional outage with an RTO of 30 minutes and an RPO of 5 minutes. Management wants to avoid the cost of running a full active-active deployment. Which backup and recovery approach is the BEST fit?

  • A. Nightly full backups to archive storage
  • B. Warm standby with cross-region replication
  • C. Hourly snapshots in the primary region
  • D. Cold site restored from daily images

Best answer: B

Explanation: The RTO requires the application to be restored quickly, and the RPO requires very little data loss. A warm standby in another region with continuous or near-real-time database replication best balances those recovery targets with the requirement to avoid active-active cost. RTO defines how quickly service must be restored, while RPO defines how much data loss is acceptable. A 30-minute RTO is too aggressive for a cold recovery that requires provisioning and restoring everything after an outage. A 5-minute RPO also requires frequent or continuous replication, not daily or hourly backups. A warm standby keeps enough compute, networking, configuration, and replicated data ready in a secondary region so the environment can be promoted quickly during a regional failure. It is less costly than full active-active because the standby environment does not have to carry normal production traffic all the time. The key takeaway is to match recovery design to both time-to-restore and data-loss limits.
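The match between recovery targets and DR patterns can be sketched as a planning heuristic. The minute thresholds below are illustrative, not standards.

```python
# A rough planning sketch mapping RTO/RPO targets to a DR pattern.
# The minute thresholds are illustrative heuristics, not standards.

def dr_strategy(rto_minutes: int, rpo_minutes: int) -> str:
    if rto_minutes < 5 and rpo_minutes < 1:
        return "active-active"            # near-zero downtime and data loss
    if rto_minutes <= 60 and rpo_minutes <= 15:
        return "warm standby"             # replicated data, quick promotion
    return "cold site from backups"       # rebuild after the outage

# 30-minute RTO and 5-minute RPO, without full active-active cost.
print(dr_strategy(30, 5))  # warm standby
```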


Question 23

Topic: Domain 6: Troubleshooting

A cloud operations team is tuning alert routing for a centralized logging platform. An alert was generated after a contractor reported being unable to rotate an encryption key.

Log excerpt:

Federated sign-in: success
MFA result: success
Issued role: app-support-readonly
API action: RotateKey
Decision: Deny
Reason: No matching allow in IAM policy
Source network: expected VPN range
Role changes: none

Which classification should the operations team use to route this alert?

  • A. Authentication failure
  • B. Privilege escalation
  • C. Authorization failure
  • D. Unauthorized access

Best answer: C

Explanation: This is an authorization failure because the contractor authenticated successfully but lacked the required permission. The logs show successful federated sign-in and MFA, followed by an IAM policy denial for the requested action. Authentication answers the question, “Are you who you claim to be?” Authorization answers, “Are you allowed to do this?” In this case, the user passed federation and MFA, so authentication succeeded. The API request was denied because the assigned role did not allow RotateKey, making the symptom an authorization failure. The key operational takeaway is to route the alert toward access policy review rather than identity verification or incident containment for compromised access.
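The routing rule the team needs can be sketched by parsing the log excerpt into key/value fields. This is a hypothetical sketch; the routing targets after the arrows are made up.

```python
# Hypothetical sketch: parse the log excerpt above into key/value
# fields and route the alert. The routing targets are illustrative.

LOG = """\
Federated sign-in: success
MFA result: success
Issued role: app-support-readonly
API action: RotateKey
Decision: Deny
Reason: No matching allow in IAM policy
"""

def route_alert(log: str) -> str:
    fields = dict(line.split(": ", 1) for line in log.strip().splitlines())
    if fields["Federated sign-in"] != "success" or fields["MFA result"] != "success":
        return "authentication failure -> identity team"
    if fields["Decision"] == "Deny" and "IAM policy" in fields["Reason"]:
        return "authorization failure -> access policy review"
    return "unclassified -> manual triage"

print(route_alert(LOG))  # authorization failure -> access policy review
```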


Question 24

Topic: Domain 4: Security

A company is migrating a CRM application from a private cloud to public IaaS using IaC templates. The security team requests MFA for administrators, encrypted storage, and a WAF. The legal team states that customer records are subject to a data-sovereignty regulation requiring regulated data to remain in approved national regions. Which migration requirement should be classified as compliance-driven rather than an ordinary security best practice?

  • A. Enforce MFA for all administrator accounts
  • B. Deploy CRM data resources only in approved national regions
  • C. Encrypt all attached block storage volumes
  • D. Place a WAF in front of the CRM application

Best answer: B

Explanation: Compliance requirements come from laws, regulations, contracts, or formal standards that the organization must satisfy. In this scenario, the data-sovereignty rule specifically dictates where regulated customer records may reside during the migration. The core distinction is source of obligation. MFA, encryption, and WAF protection are strong security best practices and may support compliance, but the stem does not state that a regulation requires those specific controls. The legal team explicitly identifies a data-sovereignty regulation requiring customer records to stay in approved national regions, so the migration design and IaC templates must enforce that placement constraint. For migration planning, compliance-driven requirements should be traceable to the applicable law or policy and documented as acceptance criteria.
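The placement constraint could be enforced as a pre-deployment compliance gate over the planned resources. The sketch below is hedged: the region names and resource shape are illustrative, not any IaC tool's real schema.

```python
# A hedged sketch of a pre-deployment compliance gate: regulated
# resources may land only in approved regions. Names are illustrative.

APPROVED_REGIONS = {"de-central-1", "de-central-2"}  # hypothetical approved regions

def compliance_violations(resources: list) -> list:
    return [
        r["name"]
        for r in resources
        if r.get("regulated") and r["region"] not in APPROVED_REGIONS
    ]

plan = [
    {"name": "crm-db", "region": "us-east-9", "regulated": True},
    {"name": "crm-web", "region": "us-east-9", "regulated": False},
]
print(compliance_violations(plan))  # ['crm-db']
```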

Cloud+ CV0-004 cloud operations map

Use this map after the sample questions to connect individual items to the vendor-neutral cloud architecture, security, deployment, and operations decisions these practice samples test.

    flowchart LR
      S1["Cloud business requirement"] --> S2
      S2["Choose architecture and deployment model"] --> S3
      S3["Secure identity, network, and data"] --> S4
      S4["Automate operations and cost controls"] --> S5
      S5["Monitor, troubleshoot, and recover"] --> S6
      S6["Improve resilience and governance"]

Quick Cheat Sheet

  • Architecture: Match public, private, hybrid, multicloud, IaaS, PaaS, SaaS, containers, and serverless to requirements.
  • Security: Review IAM, network segmentation, encryption, key handling, vulnerability management, and compliance.
  • Operations: Understand monitoring, logging, backup, patching, scaling, automation, and incident response.
  • Resilience: Connect RTO, RPO, backups, replication, warm standby, and failover to business targets.
  • Troubleshooting: Classify evidence before acting: authentication, authorization, network, quota, capacity, DNS, or service failure.

Mini Glossary

  • IaaS: Infrastructure as a service: provider-managed compute, storage, and network resources; the customer manages the operating system and everything above it.
  • RPO: Recovery point objective: acceptable data-loss window.
  • RTO: Recovery time objective: acceptable restoration time.
  • Warm standby: Partially running recovery environment ready for faster failover than a cold site.
  • WAF: Web application firewall used to filter and protect web traffic.

Revised on Friday, May 15, 2026