CompTIA Cloud+ CV0-004: Security

Try 10 focused CompTIA Cloud+ CV0-004 questions on Security, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try CompTIA Cloud+ CV0-004 on Web
View full CompTIA Cloud+ CV0-004 practice page

Topic snapshot

Field | Detail
Exam route | CompTIA Cloud+ CV0-004
Topic area | Security
Blueprint weight | 19%
Page purpose | Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Security for CompTIA Cloud+ CV0-004. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

Pass | What to do | What to record
First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer.
Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor.
Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: Security carries 19% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so treat this drill as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Security

A company stores customer contracts and related audit logs in cloud object storage for a SaaS application. Requirements state that records must be retained for 7 years and moved to an archive tier after 180 days, and that specific customer records must not be deleted while an active lawsuit is pending, even if the 7-year period expires. Which approach BEST satisfies these requirements?

Options:

  • A. Delete records after 7 years and document lawsuit exceptions manually

  • B. Move all records to archive storage and disable user deletion

  • C. Enable immutable retention and apply legal holds to lawsuit records

  • D. Rely on nightly backups with a 7-year expiration policy

Best answer: C

Explanation: The best fit is immutable retention for the required retention period plus legal holds for records involved in litigation. Lifecycle tiering can still move data to archive storage without allowing deletion before retention or legal hold requirements are met.

Regulatory or contractual retention defines how long data must be preserved, while litigation hold overrides normal deletion schedules for data relevant to a legal matter. In cloud object storage, this is commonly implemented with immutable retention, WORM-style controls, retention policies, and object-level legal holds. The lifecycle policy can move older objects to an archive tier after 180 days, but it should not remove or alter the required preservation controls.
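
As a concrete illustration, the sketch below applies all three controls using AWS S3 Object Lock, one common implementation of immutable retention and legal holds. The bucket and key names are hypothetical, the bucket is assumed to have been created with Object Lock enabled, and other providers expose equivalent WORM-style controls.

# A sketch assuming AWS S3 Object Lock; bucket and key names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# 7-year compliance-mode retention: the object cannot be deleted or
# overwritten before RetainUntilDate, even by administrators.
s3.put_object_retention(
    Bucket="customer-contracts",
    Key="customer-123/contract.pdf",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=7 * 365),
    },
)

# Legal hold: blocks deletion independently of the retention clock, so a
# record under litigation survives even after the 7-year period expires.
s3.put_object_legal_hold(
    Bucket="customer-contracts",
    Key="customer-123/contract.pdf",
    LegalHold={"Status": "ON"},
)

# Lifecycle tiering: archive after 180 days. This changes storage class
# only; it does not bypass retention or legal holds.
s3.put_bucket_lifecycle_configuration(
    Bucket="customer-contracts",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-180-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
        }]
    },
)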

The key takeaway is that retention and lifecycle tiering are different controls: tiering changes storage class, while legal hold and immutable retention control deletion.

  • Backups alone may help recovery, but they do not reliably enforce object-level legal holds or prevent deletion of primary records.
  • Archive only reduces storage cost, but it does not by itself satisfy litigation hold or immutable retention requirements.
  • Manual exceptions are weak because lawsuit records must be technically protected from deletion, not just documented.

Question 2

Topic: Security

A company is integrating a third-party reporting application with its cloud file-sharing service. Employees must be able to approve read-only access to their own files, the application must not receive employee passwords, and access must be revocable without changing the employee account. Which authorization model should be implemented?

Options:

  • A. OAuth 2.0 with scoped access tokens

  • B. Group-based access control for employees

  • C. Discretionary access control on each file

  • D. Role-based access control for application users

Best answer: A

Explanation: OAuth 2.0 is the best fit for delegated authorization to a third-party application. It allows users to consent to limited scopes, such as read-only file access, without exposing their credentials to the application.

OAuth 2.0 is used when one application needs authorized access to resources on behalf of a user. In this scenario, the key requirements are delegated access, no password sharing, limited permissions, and revocation. Scoped access tokens meet those requirements because the cloud file-sharing service can issue a token that grants only read-only access and can later revoke or expire that token without changing the employee’s password or account membership.
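
A minimal sketch of that flow follows, using the standard OAuth 2.0 authorization-code exchange rather than any specific vendor's API; the endpoints, client identifiers, and redirect URI are hypothetical.

# Hypothetical endpoints for the cloud file-sharing service.
import requests

AUTHZ_URL = "https://files.example.com/oauth/authorize"
TOKEN_URL = "https://files.example.com/oauth/token"

# Step 1: the reporting app sends the employee to the provider's consent
# screen, requesting only the read-only scope. The password is entered at
# the provider, never at the third-party app.
consent_redirect = (
    f"{AUTHZ_URL}?response_type=code"
    "&client_id=reporting-app"
    "&scope=files.read"
    "&redirect_uri=https://reports.example.com/callback"
)

# Step 2: after consent, the app exchanges the short-lived code for a
# scoped access token. Revoking that token later cuts off the app without
# changing the employee's account or password.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": "reporting-app",
        "client_secret": "REDACTED",
        "redirect_uri": "https://reports.example.com/callback",
    })
    resp.raise_for_status()
    return resp.json()  # contains an access_token limited to files.read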

RBAC and group-based access control are better for assigning permissions to internal users or service identities based on roles or groups. Discretionary access control lets resource owners grant access, but it does not inherently solve third-party delegated authorization without password sharing.

  • RBAC mismatch: roles assign permissions to principals, but they do not provide user consent and scoped delegation to a third-party app.
  • Group access mismatch: groups simplify membership-based permissions, but they do not address delegated app access without passwords.
  • DAC limitation: discretionary sharing may let owners grant access, but it lacks OAuth’s standard token-based delegation and revocation model.

Question 3

Topic: Security

A cloud administrator receives a DLP alert after a nightly export job writes customer tax records to object storage. Company policy requires this data to be classified as Restricted, retained for 7 years, and stored only in approved regions. The destination bucket has no classification tag, and its lifecycle rule transitions all objects to an archive bucket in an unapproved region after 30 days.

What is the MOST likely compliance risk?

Options:

  • A. Incorrect data classification and lifecycle movement

  • B. Missing VM patch baseline

  • C. Overly permissive security group ingress

  • D. Insufficient WAF rule coverage

Best answer: A

Explanation: The risk is caused by regulated data being stored without the required classification and then moved by lifecycle rules. Because the data must remain in approved regions and be retained for 7 years, the current storage and retention configuration violates policy.

Compliance controls for sensitive data depend on correct classification, storage location, and retention handling. In this scenario, customer tax records should be classified as Restricted, kept for 7 years, and limited to approved regions. Because the bucket has no classification tag, the lifecycle rule treats the records like ordinary objects and moves them to an archive bucket in an unapproved region after 30 days. The key issue is not perimeter filtering or host hardening; it is a data governance failure tied to classification and retention policy enforcement.
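
A minimal sketch of that governance check appears below; the bucket metadata shape and region names are hypothetical, but real DLP and posture tools evaluate the same attributes: classification tags, residency, and lifecycle destinations.

# Flag buckets that hold Restricted data without a classification tag, or
# whose lifecycle rules move objects outside approved regions.
APPROVED_REGIONS = {"us-east-1", "eu-west-1"}

def compliance_findings(bucket: dict) -> list[str]:
    findings = []
    if bucket.get("tags", {}).get("classification") != "Restricted":
        findings.append("missing or wrong classification tag")
    for rule in bucket.get("lifecycle_rules", []):
        dest = rule.get("destination_region")
        if dest and dest not in APPROVED_REGIONS:
            findings.append(f"lifecycle rule moves data to unapproved region {dest}")
    return findings

# The bucket in the scenario: untagged, with a 30-day move to an
# unapproved archive region.
bucket = {
    "tags": {},
    "lifecycle_rules": [{"days": 30, "destination_region": "ap-south-2"}],
}
print(compliance_findings(bucket))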

The takeaway is that DLP findings should be traced to where sensitive data is stored, how it is labeled, and which lifecycle or replication rules can move it.

  • WAF coverage protects web applications from HTTP-based attacks, but it does not control object classification, residency, or retention.
  • Patch baselines reduce vulnerability exposure on compute resources, but the alert is about regulated data handling.
  • Security group ingress controls network access, but the violation is caused by storage lifecycle behavior and missing classification.

Question 4

Topic: Security

A company uses federated SSO for cloud administrators and must support quarterly access reviews and investigations of privileged changes. During an audit, the team cannot determine which administrator modified role assignments because identity events are retained only on individual systems for seven days. Which control should the cloud security engineer implement?

Options:

  • A. Mandatory MFA for privileged users

  • B. Centralized immutable IAM audit logging

  • C. Container image vulnerability scanning

  • D. Encryption for administrator workstations

Best answer: B

Explanation: The requirement is IAM accounting: recording who did what, when, and from where. Centralized immutable audit logging preserves identity and administrative activity long enough to support access reviews and investigations.

Accounting in IAM focuses on collecting and retaining activity records that prove how identities and privileges were used. For privileged role changes, the most useful control is centralized audit logging for authentication, authorization, and administrative API events, with appropriate retention and tamper resistance. This lets reviewers trace changes to a federated user, timestamp, source, and action even if individual systems rotate local logs quickly.
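
As one concrete example, the sketch below queries AWS CloudTrail, a centralized audit trail, for IAM role-policy changes. The event name and field handling are AWS-specific and chosen for illustration; the exam concept itself is provider-neutral.

# Sketch: trace who attached role policies, and when, from a central trail.
import boto3

cloudtrail = boto3.client("cloudtrail")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AttachRolePolicy"},
    ]
)
for page in pages:
    for event in page["Events"]:
        # Each record answers who, what, and when -- the accounting data
        # the seven-day local logs in the scenario could not provide.
        print(event["EventTime"], event.get("Username"), event["EventName"])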

MFA and encryption are important preventive controls, but they do not create the audit trail needed to reconstruct privileged access changes. The key takeaway is that access review and investigation requirements depend on complete, retained, and protected accounting records.

  • MFA only improves authentication assurance but does not preserve records of role changes for later review.
  • Image scanning supports vulnerability management, not IAM accounting or privileged access investigations.
  • Workstation encryption protects endpoint data at rest but does not centralize or retain cloud identity events.

Question 5

Topic: Security

A company stores customer case files and financial transaction records in cloud object storage. A customer contract requires case files to be deleted after 3 years unless legally restricted, while financial regulations require transaction records to be retained for 7 years. The legal department issues a litigation hold for a specific customer’s case files. Which action best satisfies the cloud data handling requirements?

Options:

  • A. Delete all case files at 3 years, including files under litigation hold.

  • B. Move held case files to archive storage and disable all retention policies.

  • C. Apply a legal hold to the affected case files and enforce the 7-year retention policy for transaction records.

  • D. Retain all customer data indefinitely to avoid accidental deletion.

Best answer: C

Explanation: Litigation hold overrides normal deletion for the specific data involved in legal proceedings. Regulatory retention still applies separately to financial records, so transaction records must be kept for the required 7 years.

The core concept is applying the most restrictive applicable retention requirement to each data set. The specific customer case files under litigation hold must not be deleted or altered while the hold is active, even if the normal 3-year contractual deletion period arrives. Financial transaction records are governed by the 7-year regulatory requirement, so retention controls should preserve them for that minimum period. Retention should be targeted rather than indefinite, because over-retention can increase compliance, privacy, and discovery risk. The key is to use cloud retention controls, legal hold, and lifecycle policies in a way that matches each data classification and legal obligation.
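
A small sketch of that "most restrictive requirement wins" decision follows; the helper and dataset shapes are hypothetical, with the periods taken from the scenario.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    contractual_years: int | None  # delete-after period, if any
    regulatory_years: int | None   # minimum retention, if any
    legal_hold: bool = False

def may_delete(ds: Dataset, age_years: float) -> bool:
    if ds.legal_hold:
        return False  # litigation hold overrides every schedule
    if ds.regulatory_years and age_years < ds.regulatory_years:
        return False  # regulatory minimum not yet met
    if ds.contractual_years and age_years >= ds.contractual_years:
        return True   # contractual deletion point reached
    return False

case_files = Dataset("case-files", contractual_years=3, regulatory_years=None, legal_hold=True)
transactions = Dataset("transactions", contractual_years=None, regulatory_years=7)
print(may_delete(case_files, 3.5))  # False: legal hold is active
print(may_delete(transactions, 5))  # False: 7-year minimum not met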

  • Contract-only deletion fails because litigation hold prevents deletion of the affected case files.
  • Indefinite retention is excessive and may violate data minimization or contractual deletion expectations.
  • Archive without policy may reduce storage cost but does not enforce legal or regulatory retention requirements.

Question 6

Topic: Security

A cloud operations team needs a junior administrator to restart virtual machines and view monitoring data for one production resource group. The administrator must not be able to modify IAM policies, change network security rules, or manage billing. Which access approach best meets the requirement?

Options:

  • A. Grant global administrator access with MFA enabled

  • B. Assign a scoped operations role to the resource group

  • C. Add the administrator to the cloud account owners group

  • D. Create a shared administrator account for the team

Best answer: B

Explanation: Least privilege means granting only the permissions required for the task and only at the needed scope. A resource-group-scoped operations role supports VM restarts and monitoring while avoiding access to IAM, network security, and billing functions.

The core IAM concept is least privilege with scoped role assignment. The requirement is operational: restart virtual machines and view monitoring data for one production resource group. Assigning a role at that resource group limits the administrator’s effective permissions to that boundary and avoids unrelated high-risk capabilities such as IAM administration, security rule changes, and billing management. MFA is important, but it does not reduce excessive authorization. Shared accounts also break accountability because actions cannot be reliably tied to one user. The key takeaway is to combine the minimum required role with the narrowest practical scope.
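
For illustration, such a scoped role could look like the sketch below. The action strings follow Azure RBAC conventions, the subscription and resource group path is hypothetical, and other providers offer equivalent scoped role assignments.

# A role granting only VM restart and monitoring reads, assigned at one
# resource group rather than the subscription or tenant.
junior_ops_role = {
    "role_name": "vm-operator",
    "scope": "/subscriptions/SUB-ID/resourceGroups/prod-app-rg",  # one resource group only
    "actions": [
        "Microsoft.Compute/virtualMachines/restart/action",  # restart VMs
        "Microsoft.Insights/metrics/read",                   # view monitoring data
    ],
    # Deliberately absent: IAM administration, network security rules, billing.
}

Because the assignment scope is the resource group itself, the effective permissions stop at that boundary even if the role definition were reused elsewhere.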

  • Owner access is too broad because it commonly includes control over IAM and other management-plane functions.
  • Shared administration fails accountability and makes auditing user actions difficult.
  • Global administrator remains excessive even if MFA is enabled, because authentication strength does not justify unnecessary authorization.

Question 7

Topic: Security

A database reporting team authenticates successfully through the corporate OIDC identity provider, but members receive 403 Forbidden when querying a managed cloud database. The access logs show the token is accepted and MFA is satisfied.

Token claim: groups = ["db-reporting-readers"]
Database IAM policy: allow read for group "db-report-readers"
Requirement: maintain least privilege

What should the cloud administrator do to restore access?

Options:

  • A. Assign a broad database administrator role

  • B. Correct the IAM policy group mapping

  • C. Disable MFA for the reporting team

  • D. Rotate the database encryption key

Best answer: B

Explanation: This is an authorization failure, not an authentication failure. The identity provider issued an accepted token, but the IAM policy checks the wrong group value, so least-privilege access is denied.

OIDC authentication proves the user’s identity and supplies claims, such as group membership. Authorization then compares those claims to IAM roles or policies. In this case, MFA succeeded and the token was accepted, so the failure occurs when the database policy evaluates the group claim. The policy should be corrected to match the synced IdP group used in the token and grant only the required read permission.
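
A minimal sketch of the failing check, using the claim and policy values from the scenario; the evaluation function is a simplification of real policy engines.

token_claims = {"sub": "analyst@corp.example", "groups": ["db-reporting-readers"]}

# The policy references a misspelled group name, so evaluation fails.
iam_policy = {"effect": "allow", "action": "db:read", "group": "db-report-readers"}

def authorized(claims: dict, policy: dict) -> bool:
    return policy["group"] in claims["groups"]

print(authorized(token_claims, iam_policy))  # False -> 403 Forbidden

# The least-privilege fix: correct the policy to the group the IdP
# actually issues, rather than broadening the role or weakening MFA.
iam_policy["group"] = "db-reporting-readers"
print(authorized(token_claims, iam_policy))  # True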

The key takeaway is to fix the role, group, or policy mapping rather than weakening authentication or overgranting access.

  • MFA change fails because MFA already succeeded and is not the source of the 403 Forbidden response.
  • Broad admin role violates least privilege and grants more access than the reporting team requires.
  • Key rotation does not address an IAM authorization mismatch.

Question 8

Topic: Security

A cloud security team is investigating suspicious outbound connections from an IaaS web server. IDS logs show requests to /vendor/oldplugin/upload.php followed by a shell process starting under the web service account. The asset inventory shows plugin version 2.1, and the vendor bulletin states versions before 2.4 allow unauthenticated file upload. Which cause best matches this incident?

Options:

  • A. Excessive IAM permissions on the instance role

  • B. Failure to rotate encryption keys

  • C. Exploitation of unpatched, outdated software

  • D. DDoS traffic against the load balancer

Best answer: C

Explanation: The incident shows a known vulnerable software version being actively exploited. The matching vendor bulletin, outdated plugin version, and shell process after an upload request point to exploitation caused by missing patches.

Vulnerability exploitation from outdated software occurs when a workload runs a version with a known flaw and an attacker uses that flaw to gain access or execute code. In this case, the suspicious request targets a vulnerable plugin path, the installed version is below the fixed version, and the resulting shell process is consistent with remote code execution or web shell deployment. The key signal is the match between observed activity and a published vulnerability affecting the installed version.
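
The triage logic reduces to a simple comparison, sketched below with the versions and path from the scenario; the version parser is a naive illustration, not a full semantic-versioning implementation.

# Does the installed version fall below the vendor's fixed version while
# the exploited path matches the bulletin?
def vulnerable(installed: str, fixed_in: str) -> bool:
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) < to_tuple(fixed_in)

ids_request_path = "/vendor/oldplugin/upload.php"
bulletin = {"fixed_in": "2.4", "vulnerable_path": "/vendor/oldplugin/upload.php"}
installed_version = "2.1"

if vulnerable(installed_version, bulletin["fixed_in"]) and \
        ids_request_path == bulletin["vulnerable_path"]:
    print("Evidence consistent with exploitation of unpatched software")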

IAM, encryption, and DDoS controls are important, but they do not explain a shell launched through a vulnerable application upload endpoint.

  • IAM overfocus fails because the evidence points to application-layer exploitation, not role misuse.
  • DDoS mismatch fails because the symptom is code execution on a server, not traffic volume exhaustion.
  • Key rotation issue fails because stale encryption keys do not explain the vulnerable upload path or shell process.

Question 9

Topic: Security

During a migration to a public IaaS environment, a team configures an OIDC federation between the corporate identity provider and the cloud IAM service for an IaC deployment role. Engineers can sign in through SSO, but the provisioning job fails with AccessDenied: token audience does not match role trust policy. The token shows audience iac-deploy, while the role trust policy expects cloud-console.

What should the cloud administrator do to restore access for the provisioning job?

Options:

  • A. Add an allow rule to the subnet network ACL.

  • B. Change the deployment from canary to rolling.

  • C. Move the engineers to a local IAM user group.

  • D. Update the role trust policy to trust the iac-deploy audience.

Best answer: D

Explanation: The failure is caused by an OIDC token claim mismatch, not by a network or rollout issue. Because the provisioning job presents a token with the iac-deploy audience, the role trust policy must trust that audience before the job can assume the deployment role.

Federated IAM access depends on the identity provider token claims matching the IAM role trust conditions. In this case, authentication succeeds through SSO, so the identity provider is reachable and users are valid. The failure occurs when the IaC job tries to assume a role and the IAM service compares the token audience to the role trust policy. Since iac-deploy does not match cloud-console, the role assumption is denied. Updating the trust policy, or the IdP application configuration, so both sides use the same intended audience fixes the authorization path for the provisioning workload. Network ACLs and deployment strategies do not affect OIDC claim validation.
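
For reference, the corrected trust condition might look like the sketch below, written as a Python dict in an AWS-style trust policy shape; the account ID and identity provider host are hypothetical.

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "arn:aws:iam::123456789012:oidc-provider/idp.corp.example"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # Was "cloud-console"; the IaC job's token presents "iac-deploy".
                "idp.corp.example:aud": "iac-deploy"
            }
        },
    }],
}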

  • Local IAM users bypass federation and weaken centralized identity management without fixing the trust mismatch.
  • Rolling deployment changes release behavior, not the IAM role assumption conditions.
  • Network ACL rule would help with traffic filtering issues, but this error is an IAM token validation failure.

Question 10

Topic: Security

A cloud team’s IaC pipeline fails while deploying object storage for a new document-processing application. The compliance validation step reports: Retention policy not applied: data category is undefined; locality and sovereignty rules cannot be evaluated. The application will store employee contracts, public marketing files, and application debug logs.

What is the best next fix?

Options:

  • A. Increase the object storage service quota

  • B. Disable the compliance validation step

  • C. Classify each dataset before mapping controls

  • D. Deploy all storage in the closest region

Best answer: C

Explanation: The failure indicates that compliance automation cannot determine which controls to apply because the data has not been classified. Employee contracts, public files, and debug logs may require different retention, residency, and regulatory handling.

Data classification is the prerequisite for applying compliance controls correctly. Before configuring retention, locality, sovereignty, or industry-specific requirements, the organization must identify the sensitivity and regulatory category of each dataset, such as confidential HR records, public content, or operational logs. Once classified, policies can map each class to the correct storage location, retention period, encryption requirements, and access controls.
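
A minimal sketch of classification-first mapping follows; the class names and control values are illustrative, not a complete policy. The point is that the control lookup fails, exactly as the pipeline did, when a dataset has no defined class.

# Classify first, then map each class to retention, residency, and encryption.
CONTROL_MAP = {
    "confidential-hr": {"retention_years": 7, "regions": ["eu-west-1"], "encryption": "CMK"},
    "public":          {"retention_years": 1, "regions": ["any"],      "encryption": "default"},
    "operational-log": {"retention_years": 2, "regions": ["eu-west-1"], "encryption": "default"},
}

DATASETS = {
    "employee-contracts": "confidential-hr",
    "marketing-files": "public",
    "debug-logs": "operational-log",
}

for name, cls in DATASETS.items():
    controls = CONTROL_MAP[cls]  # an undefined class raises KeyError here
    print(name, "->", controls)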

Choosing a region or quota change does not resolve an undefined data category. The key takeaway is that controls should follow classification, not replace it.

  • Quota change misses the symptom because the validation failure is policy-related, not capacity-related.
  • Closest region may reduce latency but could violate sovereignty or residency requirements for classified data.
  • Disabling validation bypasses the control that prevents noncompliant storage deployment.

Continue with full practice

Use the CompTIA Cloud+ CV0-004 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try CompTIA Cloud+ CV0-004 on Web
View CompTIA Cloud+ CV0-004 Practice Test

Free review resource

Read the CompTIA Cloud+ CV0-004 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026