Try 10 focused CompTIA Cloud+ CV0-004 questions on DevOps Fundamentals, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try CompTIA Cloud+ CV0-004 on Web, or view the full CompTIA Cloud+ CV0-004 practice page.
| Field | Detail |
|---|---|
| Exam route | CompTIA Cloud+ CV0-004 |
| Topic area | DevOps Fundamentals |
| Blueprint weight | 10% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate DevOps Fundamentals for CompTIA Cloud+ CV0-004. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 10% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: DevOps Fundamentals
A team’s CI/CD pipeline compiles code, runs unit tests, scans the source, and then starts the production deployment. The deployment job builds a container image on the deploy runner and immediately tries to release it. The release fails with the error: "Policy denied: image digest is not in the approved artifact registry."
Which next fix best addresses the pipeline design issue?
Options:
A. Build and publish the image before security validation
B. Move unit tests into the production deployment job
C. Disable artifact approval for deployment jobs
D. Run security scans only after production release
Best answer: A
Explanation: The failure shows the deployment job is trying to create and release an artifact that was never approved. In a CI/CD workflow, artifacts should be built and stored before deployment, then validated and promoted by later stages using the same immutable reference.
CI/CD stages should separate artifact creation from deployment. The build stage compiles code and creates the deployable artifact, such as a container image. Testing and security validation should run against that artifact, and successful results should allow it to be published or marked approved in an artifact registry. The deployment stage should consume the approved artifact by tag or digest, not build a new image on the production deploy runner. This preserves traceability, repeatability, and policy enforcement.
Moving tests or scans after release weakens the pipeline, and bypassing artifact approval removes the control that detected the problem.
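The build-validate-promote-deploy ordering can be sketched in Python. This is a minimal illustration of the control flow, not any real CI/CD tool: the registry set, digest format, and function names are all illustrative.

```python
# Sketch of the intended stage ordering: build once, approve the exact
# artifact that was validated, then deploy by immutable digest.

APPROVED_REGISTRY = set()  # digests promoted after tests and scans pass


def build_and_publish(source_version: str) -> str:
    """Build stage: produce an image and record its immutable digest."""
    # stand-in for a real content digest
    return f"sha256:{hash(source_version) & 0xFFFFFFFF:08x}"


def validate_and_promote(digest: str, tests_passed: bool, scan_clean: bool) -> bool:
    """Test/scan stage: promote only the artifact that was validated."""
    if tests_passed and scan_clean:
        APPROVED_REGISTRY.add(digest)
        return True
    return False


def deploy(digest: str) -> str:
    """Deploy stage: consume the approved artifact; never rebuild here."""
    if digest not in APPROVED_REGISTRY:
        raise PermissionError(
            "Policy denied: image digest is not in the approved artifact registry"
        )
    return f"released {digest}"


digest = build_and_publish("v1.4.2")
validate_and_promote(digest, tests_passed=True, scan_clean=True)
print(deploy(digest))
```

An image built directly on the deploy runner, as in the failing pipeline, would never pass through `validate_and_promote`, so the policy gate in `deploy` rejects it.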
Topic: DevOps Fundamentals
A cloud team integrates several microservices by having the checkout service call inventory, billing, and email APIs synchronously during each order. During promotions, checkout latency spikes and many requests fail with HTTP 429 from the email API, even though orders do not require the email confirmation to be sent before checkout completes. Which next fix best addresses the integration issue?
Options:
A. Move all services to a single VM
B. Increase the checkout API timeout
C. Publish order events to a message broker
D. Add a regional DNS failover record
Best answer: C
Explanation: Event-driven integration is appropriate when producers and consumers can be decoupled and downstream processing does not need to block the user transaction. In this case, email confirmation can occur asynchronously, so publishing order events reduces checkout latency and shields the workflow from email API throttling.
The core concept is using an event-driven architecture for asynchronous cloud system integration. Checkout should complete the order, then emit an event such as OrderCreated for other services to consume. A broker or queue can buffer spikes, let the email service process at its own rate, and prevent a throttled noncritical API from failing the main checkout path. This pattern is best when work is loosely coupled, bursty, or eventually consistent. It is not the best fit when every downstream response is required before the caller can continue.
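The latency decoupling described above can be sketched with an in-process queue standing in for a message broker. Event and service names are illustrative.

```python
# Sketch of decoupled checkout: publish an event instead of calling the
# email API synchronously, so checkout latency no longer depends on email.
import queue

order_events = queue.Queue()  # stand-in for a broker topic or queue


def checkout(order_id: str) -> str:
    """Complete the order, then publish OrderCreated and return immediately."""
    order_events.put({"type": "OrderCreated", "order_id": order_id})
    return f"order {order_id} confirmed"


def email_worker() -> list:
    """Email service drains events at its own rate; throttling delays
    email delivery, not checkout."""
    sent = []
    while not order_events.empty():
        sent.append(order_events.get()["order_id"])
    return sent


print(checkout("A-100"))  # returns without waiting on the email API
print(email_worker())     # email processed asynchronously afterward
```

If the email API returns HTTP 429, the event simply stays buffered in the queue and is retried later; the checkout path never sees the throttle.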
Topic: DevOps Fundamentals
A cloud engineering team uses IaC to create VM instances, subnets, and load balancers. After provisioning, auditors require approved OS settings, monitoring agents, and service configuration to be applied consistently across hundreds of instances. Changes must be versioned, repeatable, and performed without interactive administrator logins. Which DevOps tool purpose best implements this requirement?
Options:
A. Observability platform
B. Container image registry
C. Source control repository
D. Configuration management / CaC tool
Best answer: D
Explanation: This task is about enforcing configuration state across provisioned systems, not creating the cloud resources themselves. Configuration management or Configuration as Code tools are used to apply repeatable OS, package, agent, and service settings in a version-controlled way.
DevOps tool roles separate resource provisioning from system configuration. IaC commonly creates infrastructure objects such as networks, instances, and load balancers. Configuration management or CaC then applies desired state to the running systems, such as installing agents, setting OS baselines, and managing service configuration. This supports repeatability, auditability, and reduced manual access because configuration changes can be reviewed, versioned, and executed consistently across many hosts.
The key distinction is that the requirement focuses on post-provisioning configuration drift control and repeatable system state, not storing code, collecting telemetry, or distributing container images.
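The desired-state model behind configuration management can be sketched as follows. The setting names are illustrative, not any specific tool's schema; the point is that the same declaration converges every host and is safe to rerun.

```python
# Configuration-as-Code sketch: declare desired state once, apply only
# the differences on each host. Rerunning is a no-op (idempotence).

DESIRED_STATE = {
    "ssh_root_login": "disabled",
    "monitoring_agent": "installed",
    "ntp_service": "enabled",
}


def apply_baseline(current: dict) -> dict:
    """Converge one host onto the desired state; return the changes made."""
    changes = {}
    for key, wanted in DESIRED_STATE.items():
        if current.get(key) != wanted:
            current[key] = wanted
            changes[key] = wanted
    return changes


host = {"ssh_root_login": "enabled"}  # drifted host
print(apply_baseline(host))           # first run remediates the drift
print(apply_baseline(host))           # second run changes nothing
```

Because the declaration itself is a file, it can live in source control, satisfying the versioned, repeatable, no-interactive-login requirement in the stem.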
Topic: DevOps Fundamentals
A cloud team has already packaged a microservices application into container images. The operations requirement is to keep multiple replicas running across several compute nodes, restart failed containers, expose stable service endpoints, and perform rolling updates with minimal downtime. Which tool best manages this requirement?
Options:
A. Kubernetes
B. Container registry
C. Git repository
D. Docker image build
Best answer: A
Explanation: Kubernetes is the best fit because the requirement is container orchestration, not container packaging. The application images already exist, and the team needs operational control over replicas, placement, service endpoints, and rolling updates across nodes.
Docker is commonly used to build, package, and run individual containers, while Kubernetes is used to orchestrate containers at scale. In this scenario, the decisive requirement is not creating an image; it is managing running workloads across multiple compute nodes. Kubernetes handles scheduling, desired replica counts, health-based restarts, service discovery, and rolling deployments, which directly match the operational goals in the stem.
The key distinction is packaging versus orchestration: Docker prepares and runs containers, while Kubernetes manages groups of containers as a resilient distributed application.
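The "desired replica count" idea at the heart of orchestration can be shown with a toy reconcile loop. This is a simplification in the spirit of Kubernetes controllers, not Kubernetes itself.

```python
# Toy reconcile loop: compare desired replicas to observed healthy
# replicas and emit the actions needed to close the gap.

def reconcile(desired_replicas: int, running: list) -> list:
    """Return the actions that converge observed state onto desired state."""
    healthy = [r for r in running if r["healthy"]]
    actions = []
    for _ in range(desired_replicas - len(healthy)):
        actions.append("start replica")   # replace crashed or missing pods
    for _ in range(len(healthy) - desired_replicas):
        actions.append("stop replica")    # scale down excess pods
    return actions


# two replicas exist, one has crashed; desired count is three
running = [{"healthy": True}, {"healthy": False}]
print(reconcile(3, running))  # → ['start replica', 'start replica']
```

A real orchestrator runs this comparison continuously, which is why failed containers are restarted without operator action.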
Topic: DevOps Fundamentals
A company is modernizing an order-processing platform. New orders must trigger inventory checks, fraud scoring, email confirmation, and analytics updates. Each function is owned by a different team, traffic is bursty, and the customer-facing checkout page should not wait for every downstream task to complete. Which integration approach best fits these architecture facts?
Options:
A. Run a nightly batch export job
B. Call each service synchronously from checkout
C. Publish order events to a message broker
D. Share one database across all services
Best answer: C
Explanation: Event-driven architecture is appropriate when cloud components need loose coupling and asynchronous reactions to state changes. In this scenario, order creation is the event, and downstream teams can process their own work without blocking checkout.
Event-driven integration uses events, queues, topics, or brokers so a producer can announce that something happened without knowing every consumer or waiting for each one. This fits bursty, cloud-native workflows where independent services need to react to the same business event, scale separately, and tolerate temporary downstream slowness. The checkout service can publish an order-created event and return quickly, while inventory, fraud, email, and analytics consumers process the event independently. Synchronous API calls are better when an immediate response is required from a specific service before continuing.
Topic: DevOps Fundamentals
A cloud engineering team is exposing an integration layer for a containerized order service that will be consumed by web, mobile, and partner applications. The requirements are stateless HTTPS calls, resource-oriented endpoints for orders and customers, simple JSON payloads, and compatibility with standard load balancers and CDN caching where possible. Which web service pattern is the BEST fit?
Options:
A. JSON-RPC API
B. RESTful API
C. SOAP web service
D. gRPC service
Best answer: B
Explanation: A RESTful API is the best fit for resource-oriented, stateless communication over HTTPS with lightweight JSON payloads. REST commonly uses standard HTTP methods and status codes, making it well suited for web, mobile, partner, load balancer, and CDN-friendly integrations.
REST, SOAP, and RPC are different integration patterns. REST models resources, such as orders and customers, and uses standard HTTP methods like GET, POST, PUT, and DELETE to operate on those resources. That matches the stem’s requirements for stateless HTTPS, JSON payloads, and cache-friendly behavior. SOAP is more formal and XML-based, often using WSDL and envelope messaging. RPC patterns focus on calling named procedures or methods, which can be efficient but are less naturally resource-oriented. The key takeaway is that REST is usually the cloud-friendly choice when the integration is resource-based, stateless, HTTP-native, and intended for broad client compatibility.
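The resource-oriented mapping can be sketched as a small routing function: standard HTTP methods operate on `/orders` resources and return status codes plus JSON. The in-memory store and paths are illustrative, not a production framework.

```python
# Minimal REST routing sketch: method + resource path → operation.
import json

ORDERS = {}  # in-memory stand-in for the order resource collection


def handle(method: str, path: str, body: str = "") -> tuple:
    """Return (status, payload) for a stateless, resource-oriented request."""
    parts = path.strip("/").split("/")
    if parts[0] != "orders":
        return 404, "{}"
    if method == "POST" and len(parts) == 1:        # create a resource
        order = json.loads(body)
        ORDERS[order["id"]] = order
        return 201, json.dumps(order)
    if method == "GET" and len(parts) == 2:         # read a resource
        order = ORDERS.get(parts[1])
        return (200, json.dumps(order)) if order else (404, "{}")
    if method == "DELETE" and len(parts) == 2:      # delete a resource
        ORDERS.pop(parts[1], None)
        return 204, ""
    return 405, "{}"


print(handle("POST", "/orders", '{"id": "42", "total": 19.99}'))
print(handle("GET", "/orders/42"))
```

Because each request is self-contained and GETs are safe, responses like `GET /orders/42` are exactly the kind of traffic load balancers and CDNs can cache and route freely.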
Topic: DevOps Fundamentals
A company is modernizing an order-processing platform that uses several cloud-hosted microservices. New orders must trigger inventory updates, fraud checks, shipment requests, and customer notifications. Each downstream service should scale independently, temporary outages should not block order intake, and new services should be added later with minimal changes to the order service. Which integration approach is the BEST fit?
Options:
A. Publish order events to a message broker or event bus
B. Run a nightly batch job to export new orders
C. Share one relational database schema across all services
D. Call each downstream service synchronously using REST APIs
Best answer: A
Explanation: Event-driven architecture is appropriate when cloud services need asynchronous, loosely coupled integration. In this scenario, order intake should continue even if downstream services are slow or temporarily unavailable, and new consumers should be easy to add without changing the producer.
Event-driven integration uses events, queues, topics, or an event bus so a producer can publish a business fact, such as “order created,” without directly controlling every consumer. This fits cloud microservices that have variable demand, need independent scaling, and should tolerate temporary downstream failures through buffering and retry behavior. It also supports future extensibility because additional services can subscribe to the same event stream without requiring the order service to add more point-to-point logic.
Synchronous REST is useful for immediate request/response interactions, but it tightly couples order intake to each downstream service’s availability and latency. The key takeaway is that event-driven architecture is best when integration should be asynchronous, decoupled, scalable, and extensible.
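The extensibility claim above can be shown with a tiny in-process event bus: the producer publishes one business fact, and a new consumer is added without touching the order service. The bus and handler names are illustrative stand-ins for a real broker.

```python
# In-process event bus sketch: one publisher, independent subscribers.

SUBSCRIBERS = {}


def subscribe(event_type: str, handler) -> None:
    """Each team registers its own handler for the events it cares about."""
    SUBSCRIBERS.setdefault(event_type, []).append(handler)


def publish(event_type: str, payload: dict) -> list:
    """The producer announces a fact; it does not know its consumers."""
    return [handler(payload) for handler in SUBSCRIBERS.get(event_type, [])]


subscribe("order.created", lambda o: f"inventory reserved for {o['id']}")
subscribe("order.created", lambda o: f"fraud score queued for {o['id']}")
# a new team adds a consumer later, with zero changes to the order service:
subscribe("order.created", lambda o: f"analytics updated for {o['id']}")

print(publish("order.created", {"id": "A-7"}))
```

With a real broker, each subscriber also gets its own buffering and retry behavior, so a slow consumer never blocks order intake.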
Topic: DevOps Fundamentals
A cloud team manages IaC templates and policy-as-code files for network security groups, IAM roles, and storage encryption settings. An auditor requires evidence of peer review, a history of who changed each deployment asset, and a quick way to restore the last approved version after a faulty change. Which source control workflow best meets these requirements?
Options:
A. Allow administrators to commit directly to the main branch
B. Overwrite the same object storage key after each deployment
C. Store approved templates in a shared cloud file folder
D. Use protected main branch with pull requests and release tags
Best answer: D
Explanation: A protected branch workflow with pull requests gives the team controlled collaboration, review evidence, and change history. Release tags or versioned commits provide stable rollback points for cloud deployment assets such as IaC and policy-as-code files.
The core concept is using source control workflow controls to support collaboration and rollback. For cloud deployment assets, a typical Git-based workflow uses feature branches for proposed changes, pull requests for peer review and discussion, protected main branches to prevent unreviewed changes, and tags or releases to mark known-good deployment versions. This creates an auditable history of who changed what and when, while allowing the team to revert a commit or redeploy a tagged version after a faulty change. Shared folders or overwritten objects may store files, but they do not provide the same review workflow, commit history, and rollback precision.
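The branch-protection control can be sketched as a merge-policy check. The field names here are illustrative; they are not any Git host's real API, but they capture the rule the auditor is asking for.

```python
# Toy merge-policy check mirroring a protected-branch workflow:
# direct pushes to main are rejected; merges need an approved,
# passing pull request.

def can_merge_to_main(change: dict) -> bool:
    """Return True only for reviewed changes that passed their checks."""
    if change.get("direct_push"):
        return False  # protected branch: no direct commits, even by admins
    pr = change.get("pull_request", {})
    return pr.get("approvals", 0) >= 1 and pr.get("checks_passed", False)


print(can_merge_to_main({"direct_push": True}))
print(can_merge_to_main({"pull_request": {"approvals": 2, "checks_passed": True}}))
```

Every merge that passes this gate leaves a pull request behind as review evidence, and tagging the merged commit gives the team a precise rollback target.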
Topic: DevOps Fundamentals
A team is updating a CI/CD pipeline for a microservice that will run on a managed container platform. The deployment stage must pull a versioned, immutable artifact that includes the application code, runtime, and OS-level dependencies. The team does not want to provision or boot full VMs for each release. Which artifact should the pipeline publish?
Options:
A. Container image
B. Flat file
C. ZIP file
D. VM image
Best answer: A
Explanation: The requirement points to a container image because the workload runs on a managed container platform and needs an immutable artifact with code, runtime, and dependencies. A container image is designed to be pulled from an image registry and instantiated as containers without booting full virtual machines.
Pipeline artifacts are the outputs passed from build to deployment stages. For containerized workloads, the deployable artifact is a container image, typically stored in an image registry and tagged by version or digest. It contains the application and its required runtime dependencies in a portable format that the container platform can pull and run. A VM image is also immutable, but it includes a full guest OS and is used to boot virtual machines, which the stem explicitly excludes. Archive files can carry application bits, but they do not directly provide the standardized runtime image expected by a container orchestrator.
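Why a digest reference is immutable while a tag is not can be shown in a few lines: the digest is a hash of the image content, so the deploy stage pulls exactly the bytes that were built. The repository name below is illustrative.

```python
# Digest pinning sketch: reference an artifact by a hash of its content,
# not by a movable tag like "latest".
import hashlib


def image_digest(image_bytes: bytes) -> str:
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()


def digest_reference(repo: str, image_bytes: bytes) -> str:
    """Pin the deploy stage to content, not to a retaggable name."""
    return f"{repo}@{image_digest(image_bytes)}"


build_a = b"app v1 layers"
build_b = b"app v1 layers patched"
print(digest_reference("registry.example.com/orders", build_a))
# Any change to the content changes the digest, so a pinned
# reference can never silently point at different bytes:
assert image_digest(build_a) != image_digest(build_b)
```

A tag like `orders:1.4` can be repointed at a new image; a `repo@sha256:...` reference cannot, which is what makes it a safe handoff between pipeline stages.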
Topic: DevOps Fundamentals
A cloud operations team must keep existing IaaS Linux VMs aligned to a CIS-based hardening baseline after each monthly patch cycle. The team wants repeatable enforcement of OS settings and packages without relying on manual console changes or vendor-specific scripts. Which DevOps tool purpose best fits this requirement?
Options:
A. Container runtime for packaging application images
B. Source control for branching and pull requests
C. Configuration management for idempotent baseline enforcement
D. Dashboarding for visualizing security metrics
Best answer: C
Explanation: The requirement is to keep existing VMs compliant with a hardening baseline over time. Configuration management is the DevOps tool role that repeatedly enforces desired system state, such as packages, services, file permissions, and security settings.
Configuration management tools support compliance and hardening by defining the desired state of systems and applying it consistently. In this scenario, the key need is not just visibility or code storage; it is repeatable enforcement after patching. Idempotent configuration management can remediate drift by reapplying approved OS settings and package states across the VM fleet. This aligns with CIS-style baseline maintenance without requiring candidates to know vendor-specific syntax or commands. The main distinction is that configuration management maintains system configuration, while other DevOps tools may store, build, package, or display information.
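The post-patch enforcement cycle can be sketched as a fleet-wide audit-and-remediate pass. The baseline settings and host names are illustrative, not actual CIS controls.

```python
# Fleet-wide drift remediation sketch: audit each host against the
# baseline after patching and reapply only the settings that drifted.

BASELINE = {"auditd": "enabled", "telnet": "absent", "password_max_age": 90}


def enforce(fleet: dict) -> dict:
    """Converge every host; return {host: [settings that were drifted]}."""
    report = {}
    for host, state in fleet.items():
        drifted = [k for k, v in BASELINE.items() if state.get(k) != v]
        for key in drifted:
            state[key] = BASELINE[key]  # idempotent reapply
        report[host] = drifted
    return report


fleet = {
    "vm-01": {"auditd": "enabled", "telnet": "absent", "password_max_age": 90},
    "vm-02": {"auditd": "disabled", "telnet": "absent"},  # drift after patching
}
print(enforce(fleet))  # → {'vm-01': [], 'vm-02': ['auditd', 'password_max_age']}
```

The returned report doubles as audit evidence: it records exactly which hosts drifted and which settings were remediated, with no interactive logins involved.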
Use the CompTIA Cloud+ CV0-004 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try CompTIA Cloud+ CV0-004 on Web, or view the CompTIA Cloud+ CV0-004 Practice Test.
Read the CompTIA Cloud+ CV0-004 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.