Google Cloud Associate Cloud Engineer: Cloud Operations

Try 10 focused Google Cloud Associate Cloud Engineer questions on Cloud Operations, with explanations, then continue with IT Mastery.


Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.


Topic snapshot

Exam route: Google Cloud Associate Cloud Engineer
Topic area: Ensuring Successful Operation of a Cloud Solution
Blueprint weight: 25%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Ensuring Successful Operation of a Cloud Solution for Google Cloud Associate Cloud Engineer. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.

  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.

  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.

  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Ensuring Successful Operation of a Cloud Solution

Your company stores customer payment reports in a single Cloud Storage bucket that currently uses the default Google-managed encryption keys. A new compliance requirement states that all such data must use customer-managed encryption keys (CMEK) with clear separation between storage admins and key admins. The security team also wants to minimize additional operational complexity and avoid managing hardware or on-premises key systems. How should you update the design?

Options:

  • A. Configure the bucket to use a Cloud KMS key (CMEK) in a dedicated key project, with a separate IAM group managing the key and storage admins having no permissions on the key.

  • B. Configure client-side encryption before uploading objects and store the encryption keys in configuration files on an internal file server managed by the security team.

  • C. Integrate Cloud Storage with an external on-premises hardware security module using an external key manager and implement a custom high-availability setup for key access.

  • D. Keep using Google-managed encryption keys on the bucket, but enable Object Versioning and a retention policy to strengthen data protection.

Best answer: A

Explanation: Cloud Storage always encrypts data at rest, but you can choose between Google-managed keys (the default) and customer-managed encryption keys (CMEK) using Cloud KMS or an external key manager. In this scenario, a new compliance requirement explicitly demands customer-managed encryption keys and separation of duties between those who manage storage and those who manage keys. At the same time, the security team wants to avoid unnecessary operational complexity.

Using a Cloud KMS key as CMEK directly on the Cloud Storage bucket satisfies the compliance need for customer-managed keys while keeping everything in managed Google Cloud services. Placing the key in a dedicated key project and granting key-admin roles to a separate IAM group from the storage admins enforces separation of duties. This design improves security and compliance with a single, manageable change.

Other options either keep Google-managed encryption (which fails the requirement for CMEK) or add significant complexity by managing keys outside standard Cloud KMS usage, which conflicts with the requirement to minimize operational overhead.
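
For reference, a minimal gcloud sketch of option A might look like the following. Every name here is a hypothetical placeholder: the key project key-proj, the keyring and key names, the bucket payment-reports, the key-admins group, and PROJECT_NUMBER, which stands for the number of the project that owns the bucket.

```sh
# Create the key in a dedicated key project; the key location must
# match the bucket location for CMEK on Cloud Storage.
gcloud kms keyrings create storage-keyring \
    --project=key-proj --location=us-central1
gcloud kms keys create payment-reports-key \
    --project=key-proj --location=us-central1 \
    --keyring=storage-keyring --purpose=encryption

# Only the key-admins group administers the key (separation of duties);
# storage admins receive no role on it.
gcloud kms keys add-iam-policy-binding payment-reports-key \
    --project=key-proj --location=us-central1 --keyring=storage-keyring \
    --member=group:key-admins@example.com --role=roles/cloudkms.admin

# The Cloud Storage service agent needs to encrypt and decrypt with the key.
gcloud kms keys add-iam-policy-binding payment-reports-key \
    --project=key-proj --location=us-central1 --keyring=storage-keyring \
    --member=serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com \
    --role=roles/cloudkms.cryptoKeyEncrypterDecrypter

# Make the key the bucket's default encryption key.
gcloud storage buckets update gs://payment-reports \
    --default-encryption-key=projects/key-proj/locations/us-central1/keyRings/storage-keyring/cryptoKeys/payment-reports-key
```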


Question 2

Topic: Ensuring Successful Operation of a Cloud Solution

Which statement best describes the primary reason to use Cloud KMS with customer‑managed encryption keys (CMEK) instead of the default Google‑managed encryption keys for a Google Cloud service?

Options:

  • A. To improve performance and reduce latency of disk and database encryption compared to Google‑managed encryption keys

  • B. To enable encryption at rest for services that would otherwise store data in plaintext on Google Cloud

  • C. To avoid configuring IAM roles for security by relying on key permissions instead of resource‑level access controls

  • D. To meet regulatory or internal compliance requirements by controlling your own encryption keys and enforcing separation of duties between data owners and key administrators

Best answer: D

Explanation: Cloud Key Management Service (Cloud KMS) is Google Cloud’s managed service for creating, storing, and controlling cryptographic keys. Many Google Cloud services support customer‑managed encryption keys (CMEK), which lets you use keys you manage in Cloud KMS instead of the default Google‑managed keys.

In most cases, services are already encrypted at rest using Google‑managed keys, so you do not need to do anything to get basic encryption. CMEK becomes important when regulations, contracts, or internal security policies require you to control key lifecycle (creation, rotation, disabling, and destruction) and to separate key administration from data or project administration. This separation of duties allows, for example, a security team to control keys in one project while an application team manages the data in another project.

Using CMEK does not primarily change performance, nor does it replace IAM policies on the underlying resources. It adds an extra layer of control and auditability over the keys used for encryption, which is why it is often required in regulated industries such as finance, healthcare, or government.
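
As an illustration of the lifecycle control CMEK adds, the sketch below schedules automatic rotation and disables a key version with gcloud; the keyring, key, location, and dates are hypothetical placeholders.

```sh
# Rotate automatically every 90 days, starting from a chosen date.
gcloud kms keys update app-data-key \
    --location=us-central1 --keyring=app-keyring \
    --rotation-period=90d \
    --next-rotation-time=2026-09-01T00:00:00Z

# Disable a specific key version; data encrypted under it cannot be
# decrypted until the version is re-enabled. Google-managed keys do
# not expose this control to customers.
gcloud kms keys versions disable 1 \
    --key=app-data-key --location=us-central1 --keyring=app-keyring
```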


Question 3

Topic: Ensuring Successful Operation of a Cloud Solution

Users in the data-analysts@example.com group now get 403 PERMISSION_DENIED when listing objects in a Cloud Storage bucket. In the bucket IAM policy, their Storage Object Viewer role has an IAM Condition that allowed access only until yesterday. You must restore access while keeping permissions limited to this bucket. What should you do?

Options:

  • A. Add a new bucket-level binding that grants the group the Storage Admin role without any IAM Condition.

  • B. Ask users to refresh their authentication credentials, because IAM Conditions are evaluated only when tokens are issued.

  • C. Update the IAM Condition on the bucket-level binding for the group to extend or remove the expiration date while keeping the Storage Object Viewer role on that bucket.

  • D. Grant the data-analysts@example.com group the Storage Object Viewer role on the entire project without any IAM Conditions.

Best answer: C

Explanation: IAM Conditions let you make access decisions based on additional context, such as time, resource name, or request attributes. In this scenario, the group already has the correct role (Storage Object Viewer) on the correct resource (the specific bucket), but that binding is restricted by a time-based condition that has now expired.

When the expiration time is reached, the condition evaluates to false and the binding no longer grants any effective permission, causing 403 PERMISSION_DENIED errors even though the role appears in the IAM policy. To fix this while preserving least privilege, you should modify the existing conditional binding: extend the allowed time window or remove the expiration from the IAM Condition. This restores access only to that bucket and keeps their permissions read-only.

Granting broader roles at the project level or elevating to Storage Admin would technically restore access but would violate the requirement to keep access limited to the bucket and to necessary permissions only. Refreshing authentication does not help because IAM Conditions are evaluated at request time against the current policy and context, not when tokens are issued.
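
A minimal gcloud sketch of option C, assuming the expired binding is replaced rather than edited in place. The bucket name and dates are placeholders, and the --all flag is assumed to drop the binding regardless of its condition; you could equally edit the policy with get-iam-policy and set-iam-policy.

```sh
# Remove the binding whose condition has expired.
gcloud storage buckets remove-iam-policy-binding gs://reports-bucket \
    --member=group:data-analysts@example.com \
    --role=roles/storage.objectViewer --all

# Re-grant the same read-only role on the same bucket with a new
# expiry; omit --condition entirely to make the grant permanent.
gcloud storage buckets add-iam-policy-binding gs://reports-bucket \
    --member=group:data-analysts@example.com \
    --role=roles/storage.objectViewer \
    --condition='expression=request.time < timestamp("2026-12-31T00:00:00Z"),title=analyst-access-window'
```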


Question 4

Topic: Ensuring Successful Operation of a Cloud Solution

Your company is deploying a new Cloud SQL for PostgreSQL instance for a financial application. A regulation requires that your organization controls the encryption keys, can rotate or disable them independently of database administrators, and can centrally audit key usage. What should you do to meet these requirements while keeping operations simple?

Options:

  • A. Require DBAs to use local hardware security modules (HSMs) on-premises to encrypt database backups before uploading them to Cloud Storage, leaving the Cloud SQL instance unmodified.

  • B. Use the default Google-managed encryption keys for Cloud SQL, because data is already encrypted at rest by Google Cloud without extra configuration.

  • C. Store a randomly generated encryption key in Secret Manager and have the application encrypt all data before writing it to Cloud SQL, leaving Cloud SQL configured with default encryption.

  • D. Create a key ring and key in Cloud KMS, assign the security team Cloud KMS admin roles, assign DBAs Cloud SQL admin roles, and configure the Cloud SQL instance to use a CMEK key from Cloud KMS.

Best answer: D

Explanation: Cloud KMS is Google Cloud’s managed key management service. When you use customer-managed encryption keys (CMEK) from Cloud KMS with a service like Cloud SQL, you retain control over the keys that protect your data at rest. You can rotate keys, disable them, or restrict their use via IAM, and you get auditable logs of key usage.

In regulated industries (such as finance or healthcare), it is common to require that the customer, not the cloud provider, manages the encryption keys, and that key administrators are a different group from system or database administrators. By configuring Cloud SQL to use a CMEK key from Cloud KMS, you meet these requirements without changing your application logic. The security team manages the Cloud KMS key (create, rotate, disable), while DBAs manage Cloud SQL; the Cloud SQL service agent is granted permission only to use the key, not to administer it.

Google-managed keys are still secure, but they do not provide the same level of customer control, separation of duties, or detailed key-usage auditing that many compliance frameworks require. Alternative patterns like application-level encryption or external HSMs may have valid use cases, but they are not necessary here and often add complexity without directly meeting the stated regulatory requirement to use CMEK for the managed service.
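
A hedged sketch of option D in gcloud terms; the project IDs, keyring and key names, region, tier, and PROJECT_NUMBER (the number of the project hosting Cloud SQL) are all placeholders.

```sh
# Ensure the Cloud SQL service agent exists in the app project.
gcloud beta services identity create \
    --service=sqladmin.googleapis.com --project=app-proj

# Let that service agent use (not administer) the key.
gcloud kms keys add-iam-policy-binding sql-key \
    --project=key-proj --location=us-central1 --keyring=sql-keyring \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-cloud-sql.iam.gserviceaccount.com \
    --role=roles/cloudkms.cryptoKeyEncrypterDecrypter

# Create the instance with the CMEK key protecting its storage.
gcloud sql instances create finance-db \
    --project=app-proj --database-version=POSTGRES_15 \
    --region=us-central1 --tier=db-custom-2-7680 \
    --disk-encryption-key=projects/key-proj/locations/us-central1/keyRings/sql-keyring/cryptoKeys/sql-key
```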


Question 5

Topic: Ensuring Successful Operation of a Cloud Solution

You operate a Cloud Run service that reads objects from a Cloud Storage bucket using a dedicated service account. A Cloud Monitoring alert is configured to fire if more than 70% of requests to Cloud Storage fail within a 5‑minute window.

In the last 5 minutes, logs show 200 total requests from this service to the bucket. Of these, 150 requests failed with 403 PERMISSION_DENIED and 50 requests succeeded.

Assuming the alert threshold is evaluated only on these 200 requests, which statement is most accurate about this situation?

Options:

  • A. The alert should not fire because the error rate is below 70%, and the failures are most likely caused by an organization policy that disallows external IP addresses.

  • B. The alert should fire because the error rate is 75%, and the failures are most likely caused by VPC firewall or Cloud NGFW rules blocking egress to Cloud Storage endpoints.

  • C. The alert should not fire because the error rate is 65%, and the failures are most likely caused by an IAM permission issue on the service account or bucket.

  • D. The alert should fire because the error rate is 75%, and the failures are most likely caused by an IAM permission issue on the service account or bucket.

Best answer: D

Explanation: In this scenario, you must combine a simple percentage calculation with interpretation of a 403 PERMISSION_DENIED error.

First, calculate the failure rate. There are 150 failed requests and 200 total requests in the 5‑minute window. The failure percentage is

\[ \text{failure rate} = \frac{150}{200} \times 100\% = 75\%. \]

Because the alert condition is defined as “more than 70% of requests fail”, a 75% failure rate clearly exceeds the threshold, so the alert should fire.

Next, interpret the 403 PERMISSION_DENIED status. A 403 from Cloud Storage means the request reached the Cloud Storage API but the caller was not authorized to perform the requested operation. That typically indicates an IAM issue: for example, the Cloud Run service account does not have a role like roles/storage.objectViewer or a custom role with storage.objects.get on the bucket.

Network problems (VPC firewall, Cloud NGFW, Cloud NAT) and many organization policies usually result in connection failures, timeouts, or errors during resource creation or configuration, not a 403 PERMISSION_DENIED returned by the storage API.

Therefore, the most accurate statement is that the alert should fire (75% > 70%) and that the failures are most likely caused by an IAM permissions problem on the service account or bucket.
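
If the investigation confirms the IAM gap, a minimal fix might look like this sketch; the service, region, service account, and bucket names are hypothetical.

```sh
# Confirm which service account the Cloud Run service runs as.
gcloud run services describe report-reader --region=us-central1 \
    --format="value(spec.template.spec.serviceAccountName)"

# Grant that account read-only access on the bucket (least privilege).
gcloud storage buckets add-iam-policy-binding gs://report-bucket \
    --member=serviceAccount:reader-sa@my-proj.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer
```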


Question 6

Topic: Ensuring Successful Operation of a Cloud Solution

Your security team runs an external compliance scanner on your Google Cloud project. The tool flags several Cloud Storage buckets as “not encrypted” because no Cloud KMS keys are configured. The application accesses these buckets only over HTTPS. What is the most appropriate first response to address this finding?

Options:

  • A. Enable Object Versioning and a bucket retention policy so that previous encrypted versions of each object are retained for audit purposes.

  • B. Explain that Cloud Storage encrypts all data at rest by default with Google‑managed keys and that HTTPS provides encryption in transit, then share the documentation with the security team.

  • C. Move the data from Cloud Storage to local disks on Compute Engine VMs and enable full‑disk encryption on the VMs.

  • D. Immediately create a Cloud KMS key and re‑encrypt all existing objects in the buckets using customer‑managed encryption keys (CMEK).

Best answer: B

Explanation: Google Cloud encrypts customer data at rest by default using Google‑managed encryption keys for services such as Cloud Storage, Compute Engine persistent disks, and many others. This means that even if you do not configure Cloud KMS or CMEK, Google automatically encrypts data before it is written to disk and decrypts it when read.

In addition, when applications access Cloud Storage over HTTPS, data is protected in transit using TLS between the client and Google’s frontends. This combination of default encryption at rest and TLS in transit satisfies many baseline security and compliance requirements.

In this scenario, the external scanner flags buckets as “not encrypted” only because it is looking for explicit CMEK usage, not because data is truly stored in plaintext. The best first response is to clarify Google Cloud’s default encryption behavior, provide documentation to the security team or auditor, and then decide whether CMEK is required by specific regulatory or organizational policies.

Jumping straight to re‑encrypting all data with CMEK or moving data to other services introduces extra complexity and risk without first addressing the misunderstanding about default encryption at rest and TLS in transit.
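
To back the conversation with evidence, one hedged check is to inspect the bucket's metadata (the bucket name is a placeholder); a default KMS key appears in the output only when CMEK has been configured, because Google-managed encryption is implicit rather than a visible setting.

```sh
# Describe the bucket; look for a default KMS key entry in the output.
gcloud storage buckets describe gs://payment-reports
```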


Question 7

Topic: Ensuring Successful Operation of a Cloud Solution

You operate two Cloud Run services that write objects to the same Cloud Storage bucket via the JSON API. Each service uses a different service account. Users report new PERMISSION_DENIED errors after you deployed the second service. You aggregate 1 minute of request logs, grouped by service account:

Service account | Requests in 1 min | PERMISSION_DENIED errors
web-sa@project-id.iam.gserviceaccount.com | 180 | 0
batch-writer-sa@project-id.iam.gserviceaccount.com | 20 | 20

Calculate the failure rate (as a percentage, round to whole percent) for each service account. Based on these percentages and the error type, what is the most appropriate next step to resolve the issue?

Options:

  • A. Grant the Storage Object Admin role to batch-writer-sa on the Cloud Storage bucket.

  • B. Open the VPC firewall to allow all egress traffic from both Cloud Run services.

  • C. Recreate both Cloud Run services in the same region as the Cloud Storage bucket.

  • D. Increase the Cloud Storage write quota for the project.

Best answer: A

Explanation: The table shows very different behaviors for the two service accounts. For web-sa, there are 180 requests and 0 PERMISSION_DENIED errors. The failure rate is

\[ \text{failure rate}_{\text{web-sa}} = \frac{0}{180} \times 100\% = 0\%. \]

For batch-writer-sa, there are 20 requests and 20 PERMISSION_DENIED errors. The failure rate is

\[ \text{failure rate}_{\text{batch-writer-sa}} = \frac{20}{20} \times 100\% = 100\%. \]

This pattern—0% failures for one service account and 100% failures for another, with the same API and bucket—strongly suggests that the second service account is not authorized to perform the operation. In Google Cloud, PERMISSION_DENIED usually indicates missing IAM roles or incorrect resource-level permissions for the principal (in this case, the service account).

Because web-sa works fine, the Cloud Storage API, bucket, network, and project configuration are all functioning. Only batch-writer-sa has problems. The most direct and appropriate fix is to grant batch-writer-sa an appropriate Cloud Storage IAM role (such as Storage Object Admin, or a more specific least-privilege role) on the target bucket so that its requests are authorized.

Other potential causes such as firewall rules, region mismatch, or quotas would affect both service accounts similarly, and would typically result in different error codes, not PERMISSION_DENIED. Therefore, IAM permissions on the bucket for batch-writer-sa are the likely root cause.
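
A minimal sketch of that fix, assuming a hypothetical bucket name:

```sh
# Grant the failing service account write access on the target bucket.
# roles/storage.objectAdmin matches the option; a narrower role such as
# roles/storage.objectCreator may suffice if the job only writes new objects.
gcloud storage buckets add-iam-policy-binding gs://shared-output-bucket \
    --member=serviceAccount:batch-writer-sa@project-id.iam.gserviceaccount.com \
    --role=roles/storage.objectAdmin
```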


Question 8

Topic: Ensuring Successful Operation of a Cloud Solution

You manage a third-party CI/CD system that needs to deploy resources into a Google Cloud project. Your security team wants to minimize credential exposure by using short-lived service account credentials via Workload Identity Federation or the IAM Credentials API, and to avoid long-lived keys. Which TWO of the following actions should you AVOID? (Select TWO.)

Options:

  • A. Grant the CI workload identity the roles/iam.serviceAccountTokenCreator role on the deployment service account so it can obtain short-lived tokens without creating keys.

  • B. Use the IAM Credentials API from the CI jobs to generate short-lived access tokens for the deployment service account instead of persisting JSON keys.

  • C. Configure a Workload Identity Federation pool that trusts the CI provider’s OIDC tokens and allow it to impersonate a deployment service account, without creating any service account keys.

  • D. Create a service account JSON key and check it into a private Git repository used by the CI system so all jobs can reuse the same key.

  • E. Store a long-lived service account JSON key as an encrypted secret in the CI system and rotate it manually every 2–3 years.

Correct answers: D and E

Explanation: The scenario emphasizes reducing credential exposure by preferring short-lived service account credentials through Workload Identity Federation or the IAM Credentials API. These mechanisms allow external or non-Google Cloud workloads to obtain time-bound tokens without ever handling long-lived JSON keys.

Storing JSON keys—especially if they are reused broadly, committed to source control, or rotated infrequently—creates a persistent secret that can be exfiltrated. Once compromised, an attacker can impersonate the service account until the key is revoked.

By contrast, Workload Identity Federation and the IAM Credentials API are designed to avoid such long-lived secrets. They rely on an external identity (such as an OIDC token from a CI provider) and exchange that for short-lived Google Cloud access tokens. If something is compromised, the blast radius is limited by a short token lifetime and revocable trust configuration, instead of a multi-year static key.
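
A rough sketch of the keyless setup with gcloud; the pool, provider, issuer URI, subject, service account, and project number are all hypothetical placeholders.

```sh
# Create a pool and an OIDC provider that trusts the CI system's tokens.
gcloud iam workload-identity-pools create ci-pool \
    --location=global --display-name="CI/CD pool"
gcloud iam workload-identity-pools providers create-oidc ci-provider \
    --location=global --workload-identity-pool=ci-pool \
    --issuer-uri=https://ci.example.com \
    --attribute-mapping="google.subject=assertion.sub"

# Allow a federated identity to impersonate the deployment service
# account; no JSON key is ever created or stored.
gcloud iam service-accounts add-iam-policy-binding \
    deploy-sa@my-proj.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="principal://iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/ci-pool/subject/ci-deploy-job"
```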


Question 9

Topic: Ensuring Successful Operation of a Cloud Solution

A developer calls a Google Cloud API and receives an HTTP 403 PERMISSION_DENIED error stating that the caller does not have permission to perform the requested operation. Assuming the network path to the service is working, which type of control is the most likely cause of this error?

Options:

  • A. A VPC firewall rule blocking egress traffic from the caller’s subnet

  • B. A missing custom route to the Google API endpoint from the caller’s VPC

  • C. An IAM policy that does not grant the required role to the caller identity

  • D. A Cloud DNS configuration error preventing resolution of the API hostname

Best answer: C

Explanation: A 403 PERMISSION_DENIED error that explicitly states the caller does not have permission almost always indicates an IAM authorization failure. In Google Cloud, after a request reaches the service endpoint successfully, the service checks whether the caller’s identity has the required permission on the target resource based on IAM policies (and possibly organization policies). If the necessary permission is missing or denied, the service returns PERMISSION_DENIED.

Network controls such as VPC firewall rules, routes, or DNS configuration affect whether the request can reach the service at all. When those are misconfigured, the client typically sees connection errors, timeouts, or different status codes (for example, UNAVAILABLE), not an explicit IAM permission error. Therefore, when you see PERMISSION_DENIED plus a message about missing permission, you should first investigate IAM roles and policies on the resource and caller identity.
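
As a first diagnostic step, you can list the roles a principal currently holds on the project; the project ID and email are placeholders in this sketch.

```sh
# Show every role bound to this principal at the project level.
gcloud projects get-iam-policy my-proj \
    --flatten="bindings[].members" \
    --filter="bindings.members:dev@example.com" \
    --format="table(bindings.role)"
```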


Question 10

Topic: Ensuring Successful Operation of a Cloud Solution

Which statement correctly describes the purpose of Google Cloud Armor in a typical Google Cloud architecture?

Options:

  • A. It provides web application firewall (WAF) and DDoS protection by enforcing security policies at the edge on HTTP(S) load balancers before traffic reaches backend services.

  • B. It is a VPN service that encrypts traffic between on-premises networks and Google Cloud VPC networks to protect data in transit.

  • C. It is a managed network firewall that replaces VPC firewall rules for all protocols and ports within a VPC network.

  • D. It is a host-based firewall agent installed on Compute Engine VMs to control inbound and outbound traffic at the operating system level.

Best answer: A

Explanation: Cloud Armor is Google Cloud’s web application firewall (WAF) and DDoS protection service. It tightly integrates with external HTTP(S) load balancers so that security policies are enforced at Google’s edge locations, close to users. This allows malicious or unwanted HTTP(S) traffic to be filtered, rate-limited, or blocked before it reaches backend services such as Compute Engine, GKE, or Cloud Run.

Because Cloud Armor operates at the load balancer edge, it is well-suited for defending internet-facing web applications against common web attacks and volumetric DDoS, while offloading that work from backend infrastructure.
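
A minimal sketch of attaching a Cloud Armor policy to a load-balanced backend; the policy name, backend service, and CIDR range are placeholders.

```sh
# Create an edge security policy and add a deny rule for a bad range.
gcloud compute security-policies create web-edge-policy \
    --description="WAF policy for the public web app"
gcloud compute security-policies rules create 1000 \
    --security-policy=web-edge-policy \
    --src-ip-ranges=203.0.113.0/24 \
    --action=deny-403

# Attach the policy to the backend service behind the external
# HTTP(S) load balancer so rules are enforced at the edge.
gcloud compute backend-services update web-backend \
    --security-policy=web-edge-policy --global
```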

Continue with full practice

Use the Google Cloud Associate Cloud Engineer Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the Google Cloud Associate Cloud Engineer Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026