Free Google Cloud ACE Full-Length Practice Exam: 50 Questions

Try 50 free Google Cloud Associate Cloud Engineer questions across the exam domains, with explanations, then continue with full IT Mastery practice.

This free full-length Google Cloud Associate Cloud Engineer practice exam includes 50 original IT Mastery questions across the exam domains.

These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.

Count note: this page uses the full-length practice question count maintained in the IT Mastery exam catalog. Certification vendors publish total question counts, scored-question counts, durations, and unscored/pretest-item rules differently, so always confirm exam-day rules with the sponsor.

Need concept review first? Read the Google Cloud Associate Cloud Engineer Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try Google Cloud Associate Cloud Engineer on Web

View the full Google Cloud Associate Cloud Engineer practice page

Exam snapshot

  • Exam route: Google Cloud Associate Cloud Engineer
  • Practice-set question count: 50
  • Time limit: 120 minutes
  • Practice style: mixed-domain diagnostic run with answer explanations

Full-length exam mix

  • Setting Up a Cloud Solution Environment: 10%
  • Planning and Configuring a Cloud Solution: 15%
  • Deploying and Implementing a Cloud Solution: 30%
  • Ensuring Successful Operation of a Cloud Solution: 25%
  • Configuring Access and Security: 20%

Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.

Practice questions

Questions 1-25

Question 1

Topic: Configuring Access and Security

You deployed a Cloud Run service that must read objects from a Cloud Storage bucket in the same project. All requests from the service are failing with 403 PERMISSION_DENIED in Cloud Logging. The user who deployed the service has the Storage Object Viewer role on the bucket. What should you do to resolve this while following least privilege?

Options:

  • A. Grant the Storage Object Viewer role on the bucket to the user who deployed the Cloud Run service at the project level.

  • B. Enable the Cloud Run and Cloud Storage APIs on the project that hosts the service and bucket.

  • C. Make the Cloud Storage bucket publicly readable so the Cloud Run service can access it without IAM.

  • D. Grant the Storage Object Viewer role on the bucket to the Cloud Run service’s runtime service account.

Best answer: D

Explanation: In Google Cloud IAM, each policy on a resource such as a Cloud Storage bucket consists of bindings, and each binding connects a role (like roles/storage.objectViewer) to one or more members (identities), such as users, groups, or service accounts. Access decisions are made based on whether the caller’s identity appears as a member in a binding that grants a role with the required permissions on that resource (or an ancestor).

Cloud Run services do not call other services as the human user who deployed them; instead, they run under a service account identity. When the Cloud Run service reads from Cloud Storage, Cloud Storage sees the request as coming from that service account. If the bucket’s IAM policy only grants access to the human user, the service account is not listed as a member in any binding with the necessary role, so Cloud Storage returns 403 PERMISSION_DENIED.

To fix this, you must update the bucket’s IAM policy so that one of its bindings grants an appropriate storage role (such as Storage Object Viewer) to the runtime service account for the Cloud Run service. This aligns with the IAM policy structure (role → members binding) and follows least privilege by granting only the required permissions to the correct identity.
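
For reference, a minimal gcloud sketch of that binding, with a placeholder bucket name and runtime service account email that you would replace with your own values:

# Grant read-only object access on the bucket to the Cloud Run runtime service account.
gcloud storage buckets add-iam-policy-binding gs://my-app-bucket \
  --member="serviceAccount:run-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"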


Question 2

Topic: Configuring Access and Security

You run gsutil cp from a Compute Engine VM to read an object in a Cloud Storage bucket and receive this error:

AccessDeniedException: 403 user@example.com does not have
storage.objects.get access to the Google Cloud Storage object.

Based on this message, what is the most likely cause of the failure?

Options:

  • A. Private Google Access is disabled on the VM subnet, preventing access to Cloud Storage APIs.

  • B. A VPC firewall rule is blocking egress traffic from the VM to Cloud Storage.

  • C. An organization policy is preventing the VM from using an external IP address.

  • D. The user account is missing the required IAM permission on the bucket or object.

Best answer: D

Explanation: The error message is a 403 AccessDeniedException from Cloud Storage that explicitly states the user does not have storage.objects.get access. The storage.objects.get string is the name of an IAM permission. When the API is reachable but denies access with a 403 that lists a specific missing permission, the problem is almost always IAM.

If the issue were network-related (for example, firewall rules or Private Google Access), the request would usually fail to connect or resolve and would not reach the stage where Cloud Storage evaluates IAM and returns a detailed permission error. Similarly, if an organization policy were blocking a resource type or configuration, the error text would typically reference organization policy or a constraint name, not only a missing IAM permission.

Therefore, the correct troubleshooting focus is on granting the appropriate IAM role or permission on the Cloud Storage bucket or object to the user or group that is running the gsutil command.
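
Staying with the question's tooling, a rough sketch of how you might inspect and then fix the bucket policy with gsutil (the bucket name and user are placeholders):

# Review the existing role bindings on the bucket.
gsutil iam get gs://example-bucket

# Grant the user read access to objects in the bucket.
gsutil iam ch user:user@example.com:objectViewer gs://example-bucket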


Question 3

Topic: Ensuring Successful Operation of a Cloud Solution

You are designing encryption for two Cloud Storage buckets shown in the exhibit. Your company wants to minimize operational overhead but must meet all stated compliance requirements.

Based on the exhibit, how should you configure encryption for the pii-archive-bucket?

Exhibit: buckets, data descriptions, and compliance/business notes

  • internal-logs-bucket: Application debug logs, internal only. No external compliance requirements; default Google Cloud security is considered sufficient.
  • pii-archive-bucket: Encrypted PII for EU customers. Customer contract: “Customer must manage encryption keys and can revoke provider access.”

Options:

  • A. Configure pii-archive-bucket to use a Cloud KMS customer-managed encryption key (CMEK) and keep internal-logs-bucket on Google-managed keys.

  • B. Disable encryption for internal-logs-bucket and leave pii-archive-bucket on Google-managed keys to reduce costs while keeping some encryption in place.

  • C. Use a Google-managed encryption key for pii-archive-bucket because Google already encrypts all data at rest by default.

  • D. Configure both buckets to use Cloud KMS customer-managed encryption keys (CMEK) so that all data uses the same high-compliance standard.

Best answer: A

Explanation: The exhibit shows two buckets with different data classifications and compliance notes. The internal-logs-bucket contains internal debug logs with no external compliance requirements. For this bucket, the exhibit explicitly states that default Google Cloud security is sufficient, so Google-managed encryption keys are appropriate and impose the least operational overhead.

The pii-archive-bucket stores encrypted PII for EU customers and has a customer contract stating: “Customer must manage encryption keys and can revoke provider access.” This language indicates a need for customer-managed encryption keys, not provider-managed ones. In Google Cloud, this is implemented with Cloud KMS customer-managed encryption keys (CMEK) configured for the bucket.

Using CMEK only where the contract requires it (the PII bucket) satisfies compliance while avoiding unnecessary key management work for the internal logs bucket, which aligns with the requirement to minimize operational overhead.
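
As a hedged sketch, setting a Cloud KMS key as the bucket's default encryption key might look like this, assuming a placeholder key resource name and a current gcloud release that supports the flag:

# Use a customer-managed key for all new objects written to the PII bucket.
gcloud storage buckets update gs://pii-archive-bucket \
  --default-encryption-key=projects/my-project/locations/europe-west1/keyRings/pii-keyring/cryptoKeys/pii-key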


Question 4

Topic: Setting Up a Cloud Solution Environment

A global company with 12,000 employees is adopting Google Cloud. Currently, IT staff manually create Cloud Identity users and add them to groups when HR sends weekly spreadsheets of hires and leavers. Leadership wants user and group provisioning to be automatic, near real-time when HR updates the corporate directory, and consistent across all Google Cloud projects with minimal custom scripting.

Which of the following actions/solutions will meet these requirements? (Select TWO.)

Options:

  • A. Deploy Cloud Identity Premium and configure Google Cloud Directory Sync (GCDS) to automatically synchronize users and groups from the on-premises LDAP/Active Directory into Cloud Identity on a scheduled basis.

  • B. Write a custom cron job that calls Admin SDK and gcloud commands to add or remove user-level IAM bindings in each project whenever HR exports a CSV of changes.

  • C. Create one shared Cloud Identity user account per department and grant each shared account the necessary IAM roles in all relevant projects, instead of managing individual users and groups.

  • D. Integrate the existing corporate IdP (for example, Okta or Azure AD) with Cloud Identity using SCIM-based automatic user and group provisioning so lifecycle changes in the IdP are reflected in Cloud Identity.

  • E. Have the helpdesk team manually create and disable Cloud Identity users and add them to IAM groups in the Google Cloud console whenever HR sends the weekly change list.

Correct answers: A and D

Explanation: For large organizations, Cloud Identity is typically integrated with an existing enterprise directory or identity provider so that user and group lifecycle is managed in one authoritative place and automatically reflected in Google Cloud. This avoids manual account creation, reduces errors, and ensures that leavers quickly lose access.

Google Cloud Directory Sync (GCDS) lets you synchronize users, groups, and group memberships from an on-premises LDAP/Active Directory environment into Cloud Identity or Google Workspace. Alternatively, many modern IdPs (such as Okta or Azure AD) support SCIM-based automatic provisioning, which can directly create, update, and deactivate users and groups in Cloud Identity.

Both approaches meet the scenario’s requirements: they scale to thousands of users, react quickly to HR-driven changes, and avoid custom scripts and manual console work. The incorrect options either rely on manual processes, custom automation around IAM bindings only, or unsafe shared accounts that do not satisfy identity and auditing requirements.


Question 5

Topic: Configuring Access and Security

You are designing encryption for two Cloud Storage buckets shown in the exhibit. Your company wants to minimize operational overhead but must meet all stated compliance requirements.

Based on the exhibit, how should you configure encryption for the pii-archive-bucket?

Exhibit: buckets, data descriptions, and compliance/business notes

  • internal-logs-bucket: Application debug logs, internal only. No external compliance requirements; default Google Cloud security is considered sufficient.
  • pii-archive-bucket: Encrypted PII for EU customers. Customer contract: “Customer must manage encryption keys and can revoke provider access.”

Options:

  • A. Configure pii-archive-bucket to use a Cloud KMS customer-managed encryption key (CMEK) and keep internal-logs-bucket on Google-managed keys.

  • B. Use a Google-managed encryption key for pii-archive-bucket because Google already encrypts all data at rest by default.

  • C. Disable encryption for internal-logs-bucket and leave pii-archive-bucket on Google-managed keys to reduce costs while keeping some encryption in place.

  • D. Configure both buckets to use Cloud KMS customer-managed encryption keys (CMEK) so that all data uses the same high-compliance standard.

Best answer: A

Explanation: The exhibit shows two buckets with different data classifications and compliance notes. The internal-logs-bucket contains internal debug logs with no external compliance requirements. For this bucket, the exhibit explicitly states that default Google Cloud security is sufficient, so Google-managed encryption keys are appropriate and impose the least operational overhead.

The pii-archive-bucket stores encrypted PII for EU customers and has a customer contract stating: “Customer must manage encryption keys and can revoke provider access.” This language indicates a need for customer-managed encryption keys, not provider-managed ones. In Google Cloud, this is implemented with Cloud KMS customer-managed encryption keys (CMEK) configured for the bucket.

Using CMEK only where the contract requires it (the PII bucket) satisfies compliance while avoiding unnecessary key management work for the internal logs bucket, which aligns with the requirement to minimize operational overhead.


Question 6

Topic: Ensuring Successful Operation of a Cloud Solution

Which TWO of the following statements about using Identity-Aware Proxy (IAP), Cloud VPN, and private/internal load balancers to provide secure access to internal applications are INCORRECT and should NOT guide your design decisions? (Select TWO.)

Options:

  • A. For external partners that should not have broad network connectivity, exposing an internal web application through IAP behind an external HTTPS load balancer is usually preferable to giving them full VPN access to the internal subnet.

  • B. An internal HTTP(S) load balancer exposes a private front end reachable only from the VPC and connected networks (such as over Cloud VPN or Cloud Interconnect) and relies on network-level controls instead of user identity to secure access.

  • C. Because IAP protects access at Google’s edge, it is acceptable for the backend service to listen only on HTTP without TLS and still treat the traffic as fully encrypted and compliant end-to-end.

  • D. Identity-Aware Proxy (IAP) adds an identity-aware layer in front of HTTP(S) or TCP services, using IAM policies to control which users can reach the application without granting them direct VPC network access.

  • E. Cloud VPN is an identity-aware solution that authenticates individual end users with IAM and enforces per-user application access without any additional components.

Correct answers: C and E

Explanation: This question compares three common ways to provide secure access to internal applications on Google Cloud: Identity-Aware Proxy (IAP), Cloud VPN, and internal (private) load balancers. IAP is designed for identity-aware, application-level access control, whereas Cloud VPN and internal load balancers are network-level mechanisms that control which networks and IP ranges can reach a service.

Cloud VPN extends a private network into Google Cloud or connects two private networks. It encrypts traffic in transit but typically provides broad IP-level reachability once connected and does not, by itself, enforce per-user IAM-based controls on each application. Internal load balancers expose services only on private IP addresses, so they are naturally paired with VPN or Interconnect for remote access, again relying on network segmentation and firewall/Cloud NGFW policies rather than identity.

IAP, in contrast, authenticates individual users and uses IAM policies to decide who can reach a specific web or TCP service. It is ideal for remote employees or partners who need access to particular applications, without granting them full network-level access. However, IAP does not eliminate the need for sound security on the backend, including TLS and appropriate controls if you require true end-to-end encryption and compliance.
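
As an illustration of that identity-aware layer, granting a single partner access to an IAP-protected backend service might look roughly like this (the backend service name and user are placeholders):

# Allow one external user through IAP without granting any VPC-level access.
gcloud iap web add-iam-policy-binding \
  --resource-type=backend-services --service=internal-app-backend \
  --member="user:partner@example.com" \
  --role="roles/iap.httpsResourceAccessor"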


Question 7

Topic: Planning and Configuring a Cloud Solution

Which TWO statements correctly describe when Knative Serving is an appropriate choice for running serverless-style workloads on a self-managed Kubernetes platform? (Select TWO.)

Options:

  • A. You already operate your own Kubernetes clusters (for example, on-premises) and want HTTP-based microservices that can autoscale down to zero when idle.

  • B. You want Google Cloud to manage all clusters, nodes, and autoscaling so that you do not have to administer any Kubernetes infrastructure.

  • C. You want a serverless-style deployment API that can run the same containerized applications across multiple clouds and your own data centers using a common abstraction.

  • D. Your main requirement is to run long-lived, non-HTTP batch jobs where there are no incoming requests and you only care about job completion, not request latency.

  • E. You need low-level control of the underlying virtual machines so you can install custom kernel modules and manage OS patches directly for your workloads.

Correct answers: A and C

Explanation: Knative Serving is an open-source component that runs on top of Kubernetes and adds serverless-style capabilities such as request-based autoscaling (including scale-to-zero), rapid scaling up, and simplified deployment abstractions for HTTP/gRPC workloads.

It is a good fit when you must operate your own Kubernetes clusters (for example, in your own data centers or on multiple clouds) but still want a serverless developer experience similar to managed platforms like Cloud Run. In that case, Knative Serving lets you keep control over cluster location, networking, and compliance while giving developers higher-level primitives for request-driven services.

It is not appropriate when your primary goal is to avoid all cluster management (where a fully managed serverless product is better) or when you are focused on non-request-driven, long-running batch workloads that do not benefit from request-based autoscaling and scale-to-zero.
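
For illustration only, a minimal Knative Service applied with kubectl; the serving.knative.dev/v1 API is assumed to be installed on your cluster, and the service name and image are placeholders:

# Deploy a request-driven service that can scale down to zero when idle.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-svc
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale-to-zero
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest
EOF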


Question 8

Topic: Setting Up a Cloud Solution Environment

Which TWO statements correctly describe how to use IAM basic, predefined, and custom roles for project access control? (Select TWO.)

Options:

  • A. Custom roles are required for every new project because basic and predefined roles cannot be granted at the project level.

  • B. Predefined roles are never changed by Google after they are introduced, which makes them safer than custom roles for long‑term compliance guarantees.

  • C. Predefined roles are created and maintained by Google Cloud for specific products or job functions and are usually the first choice when you want least‑privilege access for common tasks.

  • D. Basic roles can only be assigned at the organization level and cannot be used at the folder or project level for access control.

  • E. Basic roles (such as Viewer, Editor, Owner) are broad, legacy-style roles that grant many permissions across most services in a project and are not recommended for fine-grained production access control.

Correct answers: C and E

Explanation: IAM in Google Cloud offers three main types of roles: basic, predefined, and custom. Basic roles (Owner, Editor, Viewer) are broad, legacy roles that apply across almost all Google Cloud services in a project and typically grant far more permissions than needed. They are convenient for small test environments but are not recommended for fine‑grained production access.

Predefined roles are created and maintained by Google Cloud for specific services and job functions (for example, a role focused on managing Cloud Storage objects). They are the preferred starting point for least‑privilege access: if a predefined role fits the task, you should generally use it instead of a basic role.

Custom roles are optional, organization‑ or project‑scoped roles that you define when no single predefined role (or small combination of them) provides the exact set of permissions required. They give you fine control at the cost of extra management effort, and they remain under your control: permissions in a custom role change only when you update the role definition.

At the project level, you normally choose a predefined role that matches the user’s responsibilities; you reserve custom roles for exceptional cases and avoid basic roles for day‑to‑day production use because of their broad scope.
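
For example, granting a narrowly scoped predefined role at the project level could look like this sketch (the project ID, user, and role are placeholders):

# Bind a single predefined role to one user on one project.
gcloud projects add-iam-policy-binding my-project \
  --member="user:dev@example.com" \
  --role="roles/storage.objectAdmin"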


Question 9

Topic: Ensuring Successful Operation of a Cloud Solution

Which statement BEST describes how Google Cloud organization policies and IAM policies work together to enforce security posture?

Options:

  • A. Organization policies and IAM policies are the same type of document with different names; the only difference is that one is stored at the organization level and the other at the project level.

  • B. IAM policies restrict which Google Cloud services and regions can be used, and organization policies automatically grant roles to service accounts that need access.

  • C. Organization policies define configuration guardrails (constraints) across the resource hierarchy, while IAM policies grant specific principals permissions on resources within those guardrails.

  • D. Both organization policies and IAM policies are role bindings that directly grant permissions to principals, but at different levels of the hierarchy.

Best answer: C

Explanation: In Google Cloud, organization policies and IAM policies solve different but complementary problems.

Organization policies use constraints (such as allowed regions, whether external IPs are permitted, or which services can be used) to set guardrails that apply across many resources via the resource hierarchy. They define what configurations are allowed or disallowed in your environment.

IAM policies, by contrast, bind principals (users, groups, and service accounts) to roles, which are collections of permissions. IAM therefore defines who can do what on specific resources, but only within the limits that organization policies allow. If an organization policy forbids a certain action or configuration, IAM cannot grant an exception to it.

Together, organization policies set the high‑level security posture, and IAM policies control detailed access inside those guardrails.
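
A small sketch of the two mechanisms side by side, with placeholder project and group names:

# Guardrails: list the organization policy constraints in effect for a project.
gcloud resource-manager org-policies list --project=my-project

# Access: grant a principal a role on that project, within those guardrails.
gcloud projects add-iam-policy-binding my-project \
  --member="group:data-team@example.com" \
  --role="roles/bigquery.dataViewer"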


Question 10

Topic: Ensuring Successful Operation of a Cloud Solution

You run gsutil cp from a Compute Engine VM to read an object in a Cloud Storage bucket and receive this error:

AccessDeniedException: 403 user@example.com does not have
storage.objects.get access to the Google Cloud Storage object.

Based on this message, what is the most likely cause of the failure?

Options:

  • A. A VPC firewall rule is blocking egress traffic from the VM to Cloud Storage.

  • B. The user account is missing the required IAM permission on the bucket or object.

  • C. An organization policy is preventing the VM from using an external IP address.

  • D. Private Google Access is disabled on the VM subnet, preventing access to Cloud Storage APIs.

Best answer: B

Explanation: The error message is a 403 AccessDeniedException from Cloud Storage that explicitly states the user does not have storage.objects.get access. The storage.objects.get string is the name of an IAM permission. When the API is reachable but denies access with a 403 that lists a specific missing permission, the problem is almost always IAM.

If the issue were network-related (for example, firewall rules or Private Google Access), the request would usually fail to connect or resolve and would not reach the stage where Cloud Storage evaluates IAM and returns a detailed permission error. Similarly, if an organization policy were blocking a resource type or configuration, the error text would typically reference organization policy or a constraint name, not only a missing IAM permission.

Therefore, the correct troubleshooting focus is on granting the appropriate IAM role or permission on the Cloud Storage bucket or object to the user or group that is running the gsutil command.


Question 11

Topic: Deploying and Implementing a Cloud Solution

Which TWO statements about Cloud Logging log buckets and retention are correct? (Select TWO.)

Options:

  • A. Cloud Logging always stores logs in a single global bucket per project, so you cannot separate logs into multiple buckets for different retention needs.

  • B. To reduce storage costs, you must export logs to Cloud Storage and delete them there, because Cloud Logging buckets cannot be configured with short retention periods.

  • C. Retention settings are applied per individual log entry using log‑based metrics, not at the log bucket level.

  • D. You can create custom log buckets in a project and configure a different retention period for each bucket to meet varying compliance and cost requirements.

  • E. The _Required log bucket has a fixed retention period that cannot be changed or disabled, ensuring a baseline level of retained audit logs.

Correct answers: D and E

Explanation: Cloud Logging uses log buckets to store log entries. Each bucket has its own retention period, which controls how long log entries are kept before automatic deletion. This design allows you to align logging behavior with different compliance and cost requirements by using multiple buckets and routing rules.

There are special system buckets like _Default and _Required, plus any number of custom log buckets you create. Custom buckets and _Default have configurable retention periods, while _Required has a fixed, non‑configurable retention to ensure key audit logs are always retained for a minimum time.

Retention is always defined at the bucket level, not per individual log entry or log-based metric. To optimize cost, you typically adjust bucket retention and optionally exclude or filter out unnecessary logs, rather than being forced to export and manually delete them elsewhere.
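
As a sketch with placeholder bucket IDs and retention values:

# Create a custom log bucket with 30-day retention for high-volume logs.
gcloud logging buckets create debug-logs --location=global --retention-days=30

# Adjust retention on a configurable bucket such as _Default.
gcloud logging buckets update _Default --location=global --retention-days=90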


Question 12

Topic: Deploying and Implementing a Cloud Solution

You are standardizing log exports in a new Google Cloud project. You want to follow recommended practices when configuring Cloud Logging log buckets, the log router, and log sinks. Which of the following configurations is NOT recommended?

Options:

  • A. Configure a project-level log bucket with 30-day retention for high-volume debug logs, and a log sink that routes only error-severity logs to a Cloud Storage bucket for long-term archival.

  • B. Create a log sink that routes VPC Flow Logs to a Pub/Sub topic using a dedicated service account, and subscribe an external SIEM to that topic over a secure integration.

  • C. Create an organization-level log sink that exports all log entries, including Admin Activity and Data Access audit logs from every project, to a single Cloud Storage bucket with public read access so that anyone in the company can analyze them.

  • D. Use the log router to create a sink that exports only security-related logs (for example, logName and resource.type filters) from a project to a BigQuery dataset in the same region for analysis.

Best answer: C

Explanation: Cloud Logging uses log buckets to store logs and the log router (sinks) to export selected logs to supported destinations like Cloud Storage, BigQuery, or Pub/Sub. When configuring log sinks, you should follow least-privilege and defense-in-depth principles: restrict who can read exported logs, minimize the scope of exported data to what is needed, and avoid making log archives publicly accessible.

Audit logs (Admin Activity and Data Access) can contain sensitive information about users, resources, and operations. Exporting these logs to a publicly readable Cloud Storage bucket is a clear security anti-pattern, even if the intent is to make analysis easier. Instead, access to logs and exported data should be controlled via IAM, and destinations should be private.

Other options describe common, recommended patterns: using different log buckets and retention for high-volume logs, exporting error logs for archival, routing flow logs via Pub/Sub to a SIEM, and using BigQuery for security log analysis in the same region.
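
A hedged example of a narrowly scoped, private export (the sink name, dataset, and filter are placeholders):

# Route only audit-related logs to a BigQuery dataset protected by IAM.
gcloud logging sinks create security-sink \
  bigquery.googleapis.com/projects/my-project/datasets/security_logs \
  --log-filter='logName:"cloudaudit.googleapis.com"'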


Question 13

Topic: Deploying and Implementing a Cloud Solution

You manage a regional GKE Autopilot cluster named web-cluster in project my-prod in us-central1. You must quickly review the cluster inventory (nodes, Pods, and Services) from Cloud Shell, without installing any tools on your laptop, and by using the default kubeconfig that Cloud Shell maintains. What should you do to meet these requirements?

Options:

  • A. In Cloud Shell, run gcloud container clusters get-credentials web-cluster --region us-central1 --project my-prod, then run kubectl get nodes,pods,svc --all-namespaces.

  • B. In Cloud Shell, run gcloud compute instances list and gcloud compute instances describe to see all nodes and their running applications.

  • C. Install the gcloud CLI and kubectl on your laptop, download the cluster’s kubeconfig from the console, then run kubectl get pods and kubectl get svc.

  • D. In the Google Cloud console, open Kubernetes Engine and review the Workloads and Services pages instead of using kubectl.

Best answer: A

Explanation: To inspect a GKE cluster’s inventory with kubectl, you need a kubeconfig entry that points to the cluster and valid credentials. In Google Cloud, the standard way to populate kubeconfig for a GKE cluster is to run gcloud container clusters get-credentials, which retrieves cluster endpoint and credentials and writes them into the active kubeconfig file.

Cloud Shell already has the gcloud CLI and kubectl preinstalled, so you do not need to install anything locally. Once kubeconfig is configured, basic kubectl get commands such as kubectl get nodes, kubectl get pods, and kubectl get svc display the inventory of nodes, Pods, and Services. Adding --all-namespaces lets you see resources from every namespace in the cluster.

This question focuses on the correct operational workflow for viewing GKE cluster inventory using kubeconfig and simple kubectl get commands from Cloud Shell, aligning with day‑to‑day cluster management at the Associate Cloud Engineer level.


Question 14

Topic: Deploying and Implementing a Cloud Solution

You manage a custom-mode VPC network and need to add a new subnet for workloads in us-east1. The new subnet’s primary IP range must not overlap with any existing subnets in the VPC.

Exhibit: Existing subnets in the VPC

  • subnet-a: region us-central1, primary IP range 10.0.0.0/24
  • subnet-b: region us-central1, primary IP range 10.0.1.0/24
  • subnet-c: region europe-west1, primary IP range 10.0.2.0/24

Which primary IP range should you use for the new us-east1 subnet so it does not overlap with any existing subnet ranges?

Options:

  • A. 10.0.2.0/23

  • B. 10.0.1.128/25

  • C. 10.0.3.0/24

  • D. 10.0.0.0/16

Best answer: C

Explanation: When adding a new subnet to a VPC, its primary IP CIDR range must not overlap any existing subnet ranges in that VPC. Overlap means that any IP address in the new range is also present in another subnet’s range.

From the exhibit, the existing ranges are:

  • subnet-a: 10.0.0.0/24
  • subnet-b: 10.0.1.0/24
  • subnet-c: 10.0.2.0/24

Each of these /24s covers 256 addresses and they are contiguous blocks: 10.0.0.x, 10.0.1.x, and 10.0.2.x. A non-overlapping new /24 can therefore start at 10.0.3.0, which is outside all listed subnets. This makes 10.0.3.0/24 the only choice that does not share any address space with the existing ranges.

The other options either fully contain or are contained by one of the existing ranges, or partially overlap them, which violates the requirement that subnet IP ranges in a VPC must be non-overlapping.
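
A quick sketch of creating the new subnet, assuming a placeholder VPC network name:

gcloud compute networks subnets create subnet-d \
  --network=my-custom-vpc --region=us-east1 --range=10.0.3.0/24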


Question 15

Topic: Configuring Access and Security

Which TWO of the following statements about using Identity-Aware Proxy (IAP), Cloud VPN, and private/internal load balancers to provide secure access to internal applications are INCORRECT and should NOT guide your design decisions? (Select TWO.)

Options:

  • A. Because IAP protects access at Google’s edge, it is acceptable for the backend service to listen only on HTTP without TLS and still treat the traffic as fully encrypted and compliant end-to-end.

  • B. Cloud VPN is an identity-aware solution that authenticates individual end users with IAM and enforces per-user application access without any additional components.

  • C. Identity-Aware Proxy (IAP) adds an identity-aware layer in front of HTTP(S) or TCP services, using IAM policies to control which users can reach the application without granting them direct VPC network access.

  • D. For external partners that should not have broad network connectivity, exposing an internal web application through IAP behind an external HTTPS load balancer is usually preferable to giving them full VPN access to the internal subnet.

  • E. An internal HTTP(S) load balancer exposes a private front end reachable only from the VPC and connected networks (such as over Cloud VPN or Cloud Interconnect) and relies on network-level controls instead of user identity to secure access.

Correct answers: A and B

Explanation: This question compares three common ways to provide secure access to internal applications on Google Cloud: Identity-Aware Proxy (IAP), Cloud VPN, and internal (private) load balancers. IAP is designed for identity-aware, application-level access control, whereas Cloud VPN and internal load balancers are network-level mechanisms that control which networks and IP ranges can reach a service.

Cloud VPN extends a private network into Google Cloud or connects two private networks. It encrypts traffic in transit but typically provides broad IP-level reachability once connected and does not, by itself, enforce per-user IAM-based controls on each application. Internal load balancers expose services only on private IP addresses, so they are naturally paired with VPN or Interconnect for remote access, again relying on network segmentation and firewall/Cloud NGFW policies rather than identity.

IAP, in contrast, authenticates individual users and uses IAM policies to decide who can reach a specific web or TCP service. It is ideal for remote employees or partners who need access to particular applications, without granting them full network-level access. However, IAP does not eliminate the need for sound security on the backend, including TLS and appropriate controls if you require true end-to-end encryption and compliance.


Question 16

Topic: Configuring Access and Security

Which statement best describes the primary reason to use Cloud KMS with customer‑managed encryption keys (CMEK) instead of the default Google‑managed encryption keys for a Google Cloud service?

Options:

  • A. To avoid configuring IAM roles for security by relying on key permissions instead of resource‑level access controls

  • B. To meet regulatory or internal compliance requirements by controlling your own encryption keys and enforcing separation of duties between data owners and key administrators

  • C. To improve performance and reduce latency of disk and database encryption compared to Google‑managed encryption keys

  • D. To enable encryption at rest for services that would otherwise store data in plaintext on Google Cloud

Best answer: B

Explanation: Cloud Key Management Service (Cloud KMS) is Google Cloud’s managed service for creating, storing, and controlling cryptographic keys. Many Google Cloud services support customer‑managed encryption keys (CMEK), which lets you use keys you manage in Cloud KMS instead of the default Google‑managed keys.

In most cases, services are already encrypted at rest using Google‑managed keys, so you do not need to do anything to get basic encryption. CMEK becomes important when regulations, contracts, or internal security policies require you to control key lifecycle (creation, rotation, disabling, and destruction) and to separate key administration from data or project administration. This separation of duties allows, for example, a security team to control keys in one project while an application team manages the data in another project.

Using CMEK does not primarily change performance, nor does it replace IAM policies on the underlying resources. It adds an extra layer of control and auditability over the keys used for encryption, which is why it is often required in regulated industries such as finance, healthcare, or government.
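
As a rough sketch, creating a key and letting the Cloud Storage service agent use it might look like this; the names are placeholders and PROJECT_NUMBER is your project's numeric ID:

# Create a key ring and an encryption key in Cloud KMS.
gcloud kms keyrings create pii-keyring --location=europe-west1
gcloud kms keys create pii-key --keyring=pii-keyring \
  --location=europe-west1 --purpose=encryption

# Allow the Cloud Storage service agent to encrypt and decrypt with the key.
gcloud kms keys add-iam-policy-binding pii-key \
  --keyring=pii-keyring --location=europe-west1 \
  --member="serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"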


Question 17

Topic: Setting Up a Cloud Solution Environment

You created a new Google Cloud project and deployed a Compute Engine VM from a custom Linux image, a GKE Autopilot cluster, and a Cloud Run service. In Google Cloud Observability, you see logs and metrics for GKE and Cloud Run, but the VM does not appear. You want a single, project-wide logging and monitoring setup using Google-managed services with minimal operational overhead. What should you do to optimize the setup?

Options:

  • A. Install and enable the Ops Agent on the Compute Engine VM so it sends system logs and metrics to Cloud Logging and Cloud Monitoring for the project.

  • B. Assign an external IP to the VM and enable serial port access so Google Cloud Observability can scrape metrics from the VM remotely.

  • C. Create a separate monitoring project and move the VM, GKE cluster, and Cloud Run service into that project so they share a single metrics scope.

  • D. Create a log sink that exports all project logs to BigQuery and build custom dashboards directly from the exported data.

Best answer: A

Explanation: Google Cloud Observability (Cloud Logging and Cloud Monitoring) automatically integrates with managed services like GKE Autopilot and Cloud Run. Once the appropriate APIs are enabled on the project, these services send logs and metrics without extra agents.

Compute Engine VMs, however, typically require an agent (now the Ops Agent) to collect detailed system metrics (CPU, memory, disk) and OS-level logs. If you build a VM from a custom image that does not already include the Ops Agent, the VM will not appear in Cloud Monitoring resource lists, and you will not see its logs in Cloud Logging.

In this scenario, observability already works for GKE Autopilot and Cloud Run, which implies that project-level APIs for Cloud Logging and Cloud Monitoring are enabled. The missing piece is the VM’s local collection agent. The lowest-effort, Google-managed solution is to install and enable the Ops Agent on that VM so it can send logs and metrics into the existing project-wide observability setup.

Other options that involve exporting to BigQuery, creating a separate monitoring project, or exposing the VM with an external IP do not address the root cause: the VM is not collecting or sending observability data in the first place.
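
On a Debian or Ubuntu VM, the documented installer flow looks roughly like this; verify the script URL and steps against the current Ops Agent documentation before relying on them:

# Download and run Google's Ops Agent repository script, installing the agent.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install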


Question 18

Topic: Deploying and Implementing a Cloud Solution

You created a log bucket and a Log Router sink in project app-prod to export all logs to an existing BigQuery dataset logs_dataset in project analytics-prod. The sink shows a status of OK, the BigQuery API is enabled, but after 30 minutes no tables or rows appear in logs_dataset. What is the most appropriate action to fix this issue while keeping the current sink and dataset?

Options:

  • A. Grant the sink’s writer identity the BigQuery Data Editor role on the logs_dataset dataset in analytics-prod.

  • B. Change the sink to use a more specific inclusion filter that only matches resource.type="gce_instance" logs.

  • C. Enable BigQuery Data Access audit logs on the analytics-prod project so that Logging can create tables in the dataset.

  • D. Recreate the sink as an organization-level sink instead of a project-level sink so it can reach the analytics-prod project.

Best answer: A

Explanation: Cloud Logging uses the Log Router to send logs from log buckets to destinations such as BigQuery. Each sink has a writer identity (a service account) that must have appropriate IAM permissions on the destination.

In this scenario, the project-level sink is configured and healthy, and the BigQuery API is enabled, but the dataset remains empty. This strongly suggests an IAM issue on the destination dataset rather than a configuration or API problem.

For BigQuery exports, the sink’s writer identity needs permission to create tables and insert rows in the target dataset, typically via the BigQuery Data Editor role on that dataset. Once this role is granted to the sink’s writer identity in the analytics-prod project, Cloud Logging can create the required partitioned tables and start writing exported logs.

Changing filters, recreating the sink at another scope, or enabling additional audit logs does not fix the fundamental permission problem that prevents data from being written to BigQuery.
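
A hedged sketch of the fix (sink and project names are placeholders); a dataset-scoped grant in BigQuery is tighter than this project-level example:

# Find the sink's writer identity in app-prod.
gcloud logging sinks describe my-bq-sink --project=app-prod \
  --format='value(writerIdentity)'

# Grant that identity BigQuery Data Editor in analytics-prod; paste the full
# member string returned above (it already includes the serviceAccount: prefix).
gcloud projects add-iam-policy-binding analytics-prod \
  --member="WRITER_IDENTITY" --role="roles/bigquery.dataEditor"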


Question 19

Topic: Ensuring Successful Operation of a Cloud Solution

Which TWO of the following statements about IAM policy inheritance and the Google Cloud resource hierarchy are INCORRECT? (Select TWO.)

Options:

  • A. IAM policies are inherited: a role binding created at the organization level automatically applies to all folders, projects, and resources under that organization.

  • B. Organization policies are used to stop IAM role inheritance from higher levels by overriding IAM policies on child resources.

  • C. If a user is removed from an IAM binding at the folder level, they also automatically lose any roles that were granted directly on child projects.

  • D. Granting a role to a Google Group at the folder level is a way to give consistent access to multiple projects while limiting that access to a subset of the organization.

  • E. To reduce the risk of overly broad access, you should generally avoid assigning basic roles (Owner, Editor, Viewer) at the organization level.

  • F. IAM permissions are additive across the hierarchy: a principal’s effective permissions are the union of all roles they receive at the organization, folder, project, and resource levels.

Correct answers: B and C

Explanation: In Google Cloud, resources are organized in a hierarchy: organization → folders → projects → resources. IAM policies attach to nodes in this hierarchy and are inherited by all descendants. If you grant a role at the organization level, every folder, project, and resource under that organization sees that binding in its effective permissions, in addition to any more specific bindings set lower in the tree.

IAM is additive: a principal’s effective permissions are the union of all roles they receive at all levels. Removing or changing an IAM binding at one level does not automatically remove independent bindings at other levels. This is why broad grants at high levels (such as basic roles at the organization) are risky—they affect many resources and are hard to reason about.

To avoid accidentally broadening access, you should:

  • Prefer narrowly scoped predefined roles instead of basic roles.
  • Grant roles at the lowest level that still meets requirements (often folder, project, or even resource level).
  • Use Google Groups at folder or project level to manage shared access across multiple projects in a controlled part of the hierarchy.

Organization policies are a separate mechanism: they define constraints (such as allowed regions or preventing external IPs) and are also inherited down the hierarchy, but they do not manage IAM role grants or override IAM inheritance.
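
For example, a single folder-level grant to a group might look like this sketch, with a placeholder folder ID and group address:

# Give one group consistent read access across every project under the folder.
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
  --member="group:platform-team@example.com" \
  --role="roles/viewer"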


Question 20

Topic: Planning and Configuring a Cloud Solution

Your team is building a new global event ingestion pipeline on Google Cloud to process clickstream data from web and mobile apps. You need to fan out events to multiple independent consumers (real-time analytics, ETL jobs) with the following requirements:

  • Fully managed service with minimal operational overhead and automatic scaling.
  • Native IAM-based access control and easy integration with other Google Cloud services such as Dataflow and BigQuery.
  • No existing Kafka ecosystem, and the team prefers not to manage clusters or brokers.

Which option is the most appropriate design for the messaging layer?

Options:

  • A. Write clickstream events directly to Cloud Storage buckets per application and have each consumer periodically read new objects.

  • B. Use Pub/Sub with a topic per event type and multiple subscriptions for each consumer application.

  • C. Use Google Cloud Managed Service for Apache Kafka and have all producers and consumers use Kafka client libraries.

  • D. Deploy a self-managed Apache Kafka cluster on GKE Autopilot and use it as the central message bus.

Best answer: B

Explanation: This scenario is about choosing the right managed messaging service on Google Cloud for a new streaming ingestion pipeline. The key requirements are: fully managed, minimal operational overhead, automatic scaling, IAM-based access control, easy integration with Google Cloud services, and no pre-existing Kafka environment.

Pub/Sub is Google Cloud’s fully managed, horizontally scalable messaging service designed exactly for these use cases. It provides publish/subscribe semantics, supports very high throughput, and offers native integrations with Dataflow, BigQuery, and Cloud Functions/Cloud Run functions. IAM roles control who can publish and subscribe, aligning with the requirement for simple, Cloud-native access control.

By contrast, both self-managed Kafka on GKE and Managed Service for Apache Kafka are more appropriate when you already have Kafka-based applications or need Kafka-specific APIs. They introduce additional operational and conceptual complexity that is unnecessary for a greenfield Google Cloud deployment that can use Pub/Sub directly.

Cloud Storage is excellent for durable object storage and batch processing but is not a messaging system. Using it as a queue or pub/sub system would lead to higher latency, polling overhead, and lack of immediate fan-out semantics, so it does not meet the real-time messaging requirement.
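
A minimal sketch of the fan-out setup (topic and subscription names are placeholders):

# One topic, with an independent subscription per consumer.
gcloud pubsub topics create clickstream-events
gcloud pubsub subscriptions create analytics-sub --topic=clickstream-events
gcloud pubsub subscriptions create etl-sub --topic=clickstream-events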


Question 21

Topic: Planning and Configuring a Cloud Solution

Your company runs a critical internal web application on a single Compute Engine VM. During a recent planned host maintenance event, the VM was stopped and the application became unavailable. You review the VM details shown below.

name: web-app-vm
zone: us-central1-a
machineType: n2-standard-4
scheduling:
  automaticRestart: false
  onHostMaintenance: TERMINATE
  preemptible: false

You want to minimize downtime during future maintenance events without changing the VM family or size. Based on the configuration shown, what should you do?

Options:

  • A. Convert the VM to a preemptible instance so Google Cloud automatically recreates it after interruptions.

  • B. Disable automatic restart so that the VM does not reboot unexpectedly after maintenance completes.

  • C. Increase the machine type to n2-standard-8 so the VM can handle maintenance events without restarting.

  • D. Update the VM scheduling policy to enable automatic restart and set on-host maintenance behavior to MIGRATE.

Best answer: D

Explanation: The exhibit shows the VM’s scheduling configuration, which controls its behavior during host maintenance and unexpected failures. Currently, onHostMaintenance is set to TERMINATE, and automaticRestart is set to false. With these settings, when Google Cloud performs maintenance on the underlying host, the VM is stopped instead of live-migrated, and it is not automatically restarted afterward. This leads directly to downtime.

To minimize downtime for a critical application, you should configure the VM to use live migration where possible and to restart automatically after unexpected failures. Setting onHostMaintenance to MIGRATE tells Compute Engine to live-migrate the VM to another host during maintenance, keeping the VM running. Enabling automaticRestart ensures that if the VM is terminated due to a non-user-initiated event (such as a crash), it will automatically restart, further improving availability.

The other options either increase the likelihood of interruptions (preemptible instances), focus on resources unrelated to maintenance behavior (changing machine type), or maintain the same problematic restart behavior (keeping automatic restart disabled).
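
The change itself is a single scheduling update, sketched here with the VM name and zone from the exhibit:

# Switch to live migration during maintenance and restart after failures.
gcloud compute instances set-scheduling web-app-vm --zone=us-central1-a \
  --maintenance-policy=MIGRATE --restart-on-failure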


Question 22

Topic: Deploying and Implementing a Cloud Solution

Which TWO statements about tools for querying BigQuery datasets are correct? (Select TWO.)

Options:

  • A. BigQuery requires you to install a local database server first; query tools then connect to that local server instead of Google-managed endpoints.

  • B. The only supported way to automate recurring BigQuery queries is by running the bq command-line tool on a cron job.

  • C. To execute SQL queries on BigQuery tables, you must use the Google Cloud console; client libraries cannot send queries directly to BigQuery.

  • D. The BigQuery interface in the Google Cloud console lets you run SQL queries, preview table data, and save query results into new tables.

  • E. The bq command-line tool can run queries and manage BigQuery resources, and it can be scripted from environments like Cloud Shell or CI/CD systems.

Correct answers: D and E

Explanation: BigQuery provides several ways to run queries and manage datasets, aimed at both interactive users and automated workloads.

The Google Cloud console includes a BigQuery interface with a SQL editor where you can write and run queries, preview table data, and use UI flows to save query results into new or existing tables. This is ideal for ad‑hoc analysis and visual exploration.

The bq command-line tool, part of the Cloud SDK, lets you run queries, load and export data, and administer datasets and tables from scripts or terminals. It works well in Cloud Shell, local terminals, and CI/CD pipelines.

In addition, client libraries for many programming languages (such as Python, Java, Go, and others) allow applications to execute queries directly against BigQuery’s managed service endpoints. BigQuery is serverless, so no local database server needs to be installed, and there are multiple supported ways to schedule or automate recurring queries beyond cron plus bq.
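
As a small sketch, running a standard SQL query from the bq CLI (the project, dataset, and table names are placeholders):

# Run an ad-hoc query without any local database server.
bq query --use_legacy_sql=false \
  'SELECT status, COUNT(*) AS requests FROM `my-project.web_logs.requests` GROUP BY status'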


Question 23

Topic: Planning and Configuring a Cloud Solution

An engineer replaces a set of manual rsync scripts with a recurring Storage Transfer Service job that runs every night and copies only new and changed objects from an on-premises file server to a Cloud Storage bucket. No operator action is needed once the job is created. Which cloud principle does this configuration best support?

Options:

  • A. High availability by replicating data across multiple regions

  • B. Operational excellence and automation by using a managed, scheduled transfer

  • C. Cost optimization by minimizing storage class charges

  • D. Security and least privilege by reducing human access to data

Best answer: B

Explanation: Storage Transfer Service is a managed Google Cloud service that can perform one-time or recurring transfers from on-premises systems, other clouds, or Cloud Storage buckets. When an engineer configures a recurring job that runs on a schedule and copies only new or changed data, they are turning a manual, error-prone task into an automated, repeatable process.

This behavior is primarily about operational excellence and automation. Instead of maintaining custom scripts and remembering to run them, the team relies on a managed service with a built-in scheduler, error handling, and logging. That reduces toil, increases consistency, and makes the data movement process more reliable without additional human effort.

Although such a setup can have secondary benefits for cost, security, or resilience, the clearest, most direct principle demonstrated by replacing manual scripts with a scheduled Storage Transfer Service job is improved operations through automation.


Question 24

Topic: Planning and Configuring a Cloud Solution

You are rightsizing a Compute Engine VM for an in-memory caching service. Load tests show the service needs at least 4 vCPUs and 40GB of RAM. For internal cost estimates, your team uses this formula:

Monthly cost = (number of vCPUs × $5) + (GB of RAM × $1).

Assume all of the following configurations are supported in your project. Which configuration meets the requirements at the lowest estimated monthly cost?

Options:

  • A. Custom machine type with 6 vCPUs and 48GB RAM

  • B. Predefined e2-standard-16 (16 vCPUs, 64GB RAM)

  • C. Custom machine type with 4 vCPUs and 40GB RAM

  • D. Predefined e2-highmem-8 (8 vCPUs, 64GB RAM)

Best answer: C

Explanation: This question focuses on rightsizing Compute Engine VMs with custom machine types to avoid overprovisioning and reduce cost. The scenario gives a simple pricing formula so you can compare several candidate configurations.

The service needs at least 4 vCPUs and 40GB of RAM. Any configuration that has 4 or more vCPUs and 40GB or more RAM meets the performance requirement. Once all options meet the requirement, the best choice is the one with the lowest estimated monthly cost according to the given formula.

Using the formula monthly cost = (vCPUs × $5) + (GB RAM × $1):

  • 16 vCPUs, 64GB RAM: 16×$5 + 64×$1 = $80 + $64 = $144.
  • 8 vCPUs, 64GB RAM: 8×$5 + 64×$1 = $40 + $64 = $104.
  • 4 vCPUs, 40GB RAM: 4×$5 + 40×$1 = $20 + $40 = $60.
  • 6 vCPUs, 48GB RAM: 6×$5 + 48×$1 = $30 + $48 = $78.

The custom machine with 4 vCPUs and 40GB RAM is the lowest-cost configuration that still satisfies the requirements. This illustrates why using custom machine types to match actual CPU and memory needs is an effective way to control costs in Google Cloud.
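
A hedged sketch of creating the right-sized VM; the zone is a placeholder, and memory above the family's standard per-vCPU ratio may additionally require the extended-memory flag shown here:

# Create a custom N2 VM with exactly 4 vCPUs and 40GB of RAM.
gcloud compute instances create cache-vm --zone=us-central1-a \
  --custom-vm-type=n2 --custom-cpu=4 --custom-memory=40GB --custom-extensions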


Question 25

Topic: Configuring Access and Security

Which statement BEST describes how Google Cloud organization policies and IAM policies work together to enforce security posture?

Options:

  • A. Organization policies define configuration guardrails (constraints) across the resource hierarchy, while IAM policies grant specific principals permissions on resources within those guardrails.

  • B. Organization policies and IAM policies are the same type of document with different names; the only difference is that one is stored at the organization level and the other at the project level.

  • C. IAM policies restrict which Google Cloud services and regions can be used, and organization policies automatically grant roles to service accounts that need access.

  • D. Both organization policies and IAM policies are role bindings that directly grant permissions to principals, but at different levels of the hierarchy.

Best answer: A

Explanation: In Google Cloud, organization policies and IAM policies solve different but complementary problems.

Organization policies use constraints (such as allowed regions, whether external IPs are permitted, or which services can be used) to set guardrails that apply across many resources via the resource hierarchy. They define what configurations are allowed or disallowed in your environment.

IAM policies, by contrast, bind principals (users, groups, and service accounts) to roles, which are collections of permissions. IAM therefore defines who can do what on specific resources, but only within the limits that organization policies allow. If an organization policy forbids a certain action or configuration, IAM cannot grant an exception to it.

Together, organization policies set the high‑level security posture, and IAM policies control detailed access inside those guardrails.


Questions 26-50

Question 26

Topic: Setting Up a Cloud Solution Environment

You are deploying a new Cloud Run service for an internal EU-only application. Your other workloads use the europe-west10 region, but when you run gcloud run deploy --region=europe-west10 ..., you receive INVALID_ARGUMENT: The requested location is not supported. What should you do to resolve this while meeting the EU data residency requirement?

Options:

  • A. Enable the Cloud Run Admin API in the project and rerun the deployment using the same europe-west10 region.

  • B. Open a Cloud Support case to request enabling Cloud Run in the europe-west10 region for your project.

  • C. Check the Cloud Run locations documentation or Console region selector, then redeploy the service to a Cloud Run–supported region in Europe that satisfies the data residency requirement.

  • D. Create a new VPC subnet in europe-west10 and redeploy the Cloud Run service using that subnet to satisfy the location requirement.

Best answer: C

Explanation: Cloud Run is a regional service, and each Google Cloud product is only available in certain regions. When you see an error like The requested location is not supported, it indicates that the product simply does not operate in that region.

For Domain 1 tasks, a key responsibility is to confirm that a given Google Cloud product is available in the target region before finalizing deployment plans. Google publishes an up‑to‑date locations matrix in the product documentation and exposes the same information through region selectors in the Console. The correct remediation is therefore to consult this authoritative information and choose a region that both supports the product and meets any compliance or data residency requirements.

In this scenario, the engineer wants EU-only data handling, so the solution is to pick a supported Cloud Run region within Europe (such as europe-west1 or another supported EU region), based on the locations documentation or Console region picker, and redeploy there. Changing quotas, networking, or opening a support case cannot make an unsupported region start hosting Cloud Run.
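
For reference, a minimal sketch of checking availability and redeploying (the service name, image path, and chosen region are placeholders):

```bash
# List the regions where Cloud Run is currently offered
gcloud run regions list

# Redeploy the service to a supported EU region
gcloud run deploy internal-app \
    --image=europe-west1-docker.pkg.dev/example-project/apps/internal-app:latest \
    --region=europe-west1 \
    --no-allow-unauthenticated
```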


Question 27

Topic: Deploying and Implementing a Cloud Solution

In Google Cloud Logging, which description BEST defines a log bucket and its primary purpose?

Options:

  • A. A BigQuery dataset automatically created to store log-based metrics and support SQL queries over metric time series.

  • B. A regional container managed by Cloud Logging that stores log entries and applies a configured retention period to those logs.

  • C. A Cloud Storage bucket that receives exported logs so they can be archived outside Cloud Logging.

  • D. A project-level configuration object that controls which resources are allowed to write logs but does not store any log data.

Best answer: B

Explanation: In Google Cloud Logging, a log bucket is the fundamental storage unit where log entries are stored. Each log bucket exists in a specific region and has its own configurable retention period. By creating multiple log buckets with different retention settings, you can separate logs by purpose (for example, security vs application logs) and keep each set of logs for as long as compliance requires, without overpaying to store all logs for the same duration.

Log buckets are managed entirely within Cloud Logging; they are different from Cloud Storage buckets or BigQuery datasets that you might use as export targets via log sinks. IAM permissions determine who can read or write to log buckets, but the bucket’s core role is to store logs and enforce retention, not just to define access policies.
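
For example, a hedged sketch of creating a dedicated log bucket with its own retention (the bucket ID, location, and retention value are illustrative):

```bash
# Create a regional log bucket that keeps security logs for 400 days
gcloud logging buckets create security-logs \
    --location=europe-west1 \
    --retention-days=400 \
    --description="Security logs retained for compliance"
```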


Question 28

Topic: Ensuring Successful Operation of a Cloud Solution

You deployed a Cloud Run service that must read objects from a Cloud Storage bucket in the same project. All requests from the service are failing with 403 PERMISSION_DENIED in Cloud Logging. The user who deployed the service has the Storage Object Viewer role on the bucket. What should you do to resolve this while following least privilege?

Options:

  • A. Enable the Cloud Run and Cloud Storage APIs on the project that hosts the service and bucket.

  • B. Grant the Storage Object Viewer role on the bucket to the Cloud Run service’s runtime service account.

  • C. Grant the Storage Object Viewer role on the bucket to the user who deployed the Cloud Run service at the project level.

  • D. Make the Cloud Storage bucket publicly readable so the Cloud Run service can access it without IAM.

Best answer: B

Explanation: In Google Cloud IAM, each policy on a resource such as a Cloud Storage bucket consists of bindings, and each binding connects a role (like roles/storage.objectViewer) to one or more members (identities), such as users, groups, or service accounts. Access decisions are made based on whether the caller’s identity appears as a member in a binding that grants a role with the required permissions on that resource (or an ancestor).

Cloud Run services do not call other services as the human user who deployed them; instead, they run under a service account identity. When the Cloud Run service reads from Cloud Storage, Cloud Storage sees the request as coming from that service account. If the bucket’s IAM policy only grants access to the human user, the service account is not listed as a member in any binding with the necessary role, so Cloud Storage returns 403 PERMISSION_DENIED.

To fix this, you must update the bucket’s IAM policy so that one of its bindings grants an appropriate storage role (such as Storage Object Viewer) to the runtime service account for the Cloud Run service. This aligns with the IAM policy structure (role → members binding) and follows least privilege by granting only the required permissions to the correct identity.
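
A minimal sketch of the fix, assuming placeholder service, bucket, and account names:

```bash
# Identify the Cloud Run service's runtime service account
gcloud run services describe my-service --region=europe-west1 \
    --format='value(spec.template.spec.serviceAccountName)'

# Grant that identity read-only access on the bucket, and nothing broader
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
    --member=serviceAccount:my-service-sa@example-project.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer
```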


Question 29

Topic: Deploying and Implementing a Cloud Solution

Which TWO of the following statements about configuring IAM on Cloud Storage buckets with uniform bucket-level access are INCORRECT? (Select TWO.)

Options:

  • A. When uniform bucket-level access is enabled on a bucket, IAM policies are the only mechanism used to control access to objects in that bucket.

  • B. A service account used by an application can be granted the Storage Object Viewer role on a specific bucket so the application can read objects without using end-user credentials.

  • C. To follow least privilege, you should generally grant predefined roles such as Storage Object Viewer or Storage Object Admin at the bucket level to groups, instead of using broad project-wide Owner or Editor roles.

  • D. Granting the Storage Admin role at the project level to a user who only needs read access to a single bucket is an acceptable way to simplify permission management.

  • E. With uniform bucket-level access enabled, you should configure per-object ACLs to give different users access to specific objects without changing the bucket IAM policy.

Correct answers: D and E

Explanation: Uniform bucket-level access in Cloud Storage disables object ACLs and makes IAM the sole control plane for access to both buckets and objects. At the same time, least privilege and proper scoping are critical: you should avoid broad basic roles like Owner or Editor for storage access and instead use the predefined Storage IAM roles (for example, Storage Object Viewer, Storage Object Creator, Storage Object Admin) at the smallest practical scope, usually the bucket. Applications typically use service accounts with these roles, rather than user accounts or ACLs, to access data securely and auditably.

The incorrect statements in this question either recommend over-broad permissions (granting Storage Admin at the project level for a narrow need) or suggest using object ACLs under uniform bucket-level access (which is not possible and contradicts the feature’s purpose). The remaining statements correctly describe how IAM and uniform bucket-level access work and how to apply roles to users, groups, and service accounts in a least-privilege way.
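
As a sketch of the recommended pattern (bucket and group names are placeholders):

```bash
# Enforce uniform bucket-level access so IAM is the only access-control mechanism
gcloud storage buckets update gs://example-bucket --uniform-bucket-level-access

# Grant a predefined role at bucket scope to a group, not a basic project-wide role
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
    --member=group:analysts@example.com \
    --role=roles/storage.objectViewer
```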


Question 30

Topic: Deploying and Implementing a Cloud Solution

Your organization has enabled OS Login for all Compute Engine instances in a project. You are documenting how team members should obtain SSH access. Which statement about OS Login and SSH access is INCORRECT and should be removed from the documentation?

Options:

  • A. If a user’s IAM access (for example, compute.osLogin) is revoked, their ability to SSH using OS Login is removed, because their SSH authorization depends on their IAM roles.

  • B. With OS Login enabled, you generally do not need to manually create local Linux user accounts for each engineer, because OS Login automatically manages Linux users based on their IAM roles.

  • C. To grant a user SSH access with OS Login, you only need to add the user’s public SSH key to the VM’s ~/.ssh/authorized_keys file; IAM roles are not required.

  • D. Granting a user the compute.osLogin or compute.osAdminLogin role on the project allows OS Login to map their Google identity to a Linux account and permit SSH, subject to any other access controls.

Best answer: C

Explanation: OS Login centralizes SSH access control for Compute Engine by tying Linux user accounts and SSH authorization directly to IAM identities and roles. When OS Login is enabled on a VM or project, Google Cloud uses IAM permissions (for example, compute.osLogin or compute.osAdminLogin) to determine who can log in and with what privileges. OS Login then automatically creates and manages matching Linux user accounts on the instance.

Because OS Login sits in front of the instance’s SSH daemon, manually editing ~/.ssh/authorized_keys or managing local Unix users is no longer the primary way to grant or revoke access. Instead, you grant or remove IAM roles on the project, folder, or instance. This improves security, auditability, and consistency across instances.

The incorrect statement is the one that claims you only need to add a user’s SSH key to authorized_keys and that IAM roles are not required. That ignores how OS Login works and undermines its central benefits of IAM-based access control and automatic Linux user mapping.
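
For reference, a minimal OS Login setup sketch (the project ID and user are placeholders):

```bash
# Enable OS Login for all VMs in the project via project metadata
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Grant SSH access through IAM instead of editing ~/.ssh/authorized_keys
gcloud projects add-iam-policy-binding example-project \
    --member=user:engineer@example.com \
    --role=roles/compute.osLogin
```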


Question 31

Topic: Deploying and Implementing a Cloud Solution

You manage a small web application running on a single Compute Engine VM. Developers currently inspect logs only in Cloud Logging. The security team now needs to run ad-hoc SQL queries over the last 6 months of HTTP request logs and join them with existing datasets in Google Cloud, with minimal operational overhead. What is the most appropriate way to improve your logging design?

Options:

  • A. Export logs to Pub/Sub and build a Dataflow pipeline that writes them into a Cloud SQL database for reporting.

  • B. Create a log sink that exports the HTTP request logs to a time-partitioned BigQuery dataset in the same project.

  • C. Create a log sink that exports logs to a Cloud Storage bucket and have analysts download and analyze the files locally.

  • D. Install the Ops Agent on the VM and forward logs to an external SIEM, then run all log reports in the external tool.

Best answer: B

Explanation: This scenario focuses on choosing the right export destination for Cloud Logging logs to support analytics. Cloud Logging stores logs and allows basic querying, but for deeper, SQL-based analytics and correlation with other datasets, you typically export logs.

BigQuery is Google Cloud’s managed data warehouse designed for large-scale analytics using SQL, and it integrates well with other Google Cloud datasets. Creating a log sink from Cloud Logging to a time-partitioned BigQuery dataset allows the security team to run ad-hoc SQL queries across months of HTTP logs, filter by time, and join with other tables, while Google Cloud automatically handles ingestion and scaling.

This approach keeps operations simple: once the sink is configured, data flows automatically without the need to manage extra pipelines or external tools. Other destinations like Cloud Storage, Pub/Sub, and external SIEMs have valid uses, but they either lack native SQL analytics, overcomplicate the architecture, or fail to meet the requirement to work directly with existing Google Cloud datasets.
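
A hedged sketch of the sink (the project, dataset, and filter are placeholders; after creation, the sink's writer identity still needs write access on the dataset):

```bash
# Route HTTP request logs to a BigQuery dataset using partitioned tables
gcloud logging sinks create http-logs-to-bq \
    bigquery.googleapis.com/projects/example-project/datasets/http_logs \
    --use-partitioned-tables \
    --log-filter='resource.type="gce_instance" AND logName:"http_requests"'
```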


Question 32

Topic: Configuring Access and Security

Which TWO of the following statements about Google Cloud service account naming and documentation are INCORRECT or unsafe? (Select TWO.)

Options:

  • A. Because service accounts are non-human identities, you should assign them broad basic roles like “Editor” or “Owner” so they can perform any operation a workload might ever need.

  • B. Service account descriptions are useful metadata fields where you can record the service owner, the application using the account, and a short summary of its responsibilities.

  • C. It is recommended to reuse one highly privileged service account for all production services in a project to simplify access management.

  • D. Including the workload name and environment (for example, “payments-api-prod”) in the service account display name helps others quickly understand its purpose.

  • E. You should prefer workload-specific service accounts instead of sharing a single service account across many unrelated applications.

Correct answers: A and C

Explanation: Service accounts in Google Cloud represent non-human identities used by applications and services. Managing them safely means following least privilege and making their purpose clear with good names and descriptions. You should create separate service accounts per workload or trust boundary, grant only the roles each workload needs, and use display names and descriptions that explain what the account is for and who owns it.

Sharing one powerful service account across many services or granting overly broad basic roles like Owner or Editor are common anti-patterns. These practices increase blast radius if credentials are compromised, make it harder to reason about permissions, and complicate auditing and incident response. In contrast, clear naming and documentation help teams understand and manage service accounts effectively over time.
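
For example, a sketch of a clearly named, documented, workload-specific service account (all names are placeholders):

```bash
gcloud iam service-accounts create payments-api-prod \
    --display-name="payments-api (prod)" \
    --description="Runtime identity for the payments API in production; owned by the payments team"
```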


Question 33

Topic: Ensuring Successful Operation of a Cloud Solution

Which TWO of the following statements about Google Cloud service account naming and documentation are INCORRECT or unsafe? (Select TWO.)

Options:

  • A. It is recommended to reuse one highly privileged service account for all production services in a project to simplify access management.

  • B. Service account descriptions are useful metadata fields where you can record the service owner, the application using the account, and a short summary of its responsibilities.

  • C. Because service accounts are non-human identities, you should assign them broad basic roles like “Editor” or “Owner” so they can perform any operation a workload might ever need.

  • D. You should prefer workload-specific service accounts instead of sharing a single service account across many unrelated applications.

  • E. Including the workload name and environment (for example, “payments-api-prod”) in the service account display name helps others quickly understand its purpose.

Correct answers: A and C

Explanation: Service accounts in Google Cloud represent non-human identities used by applications and services. Managing them safely means following least privilege and making their purpose clear with good names and descriptions. You should create separate service accounts per workload or trust boundary, grant only the roles each workload needs, and use display names and descriptions that explain what the account is for and who owns it.

Sharing one powerful service account across many services or granting overly broad basic roles like Owner or Editor are common anti-patterns. These practices increase blast radius if credentials are compromised, make it harder to reason about permissions, and complicate auditing and incident response. In contrast, clear naming and documentation help teams understand and manage service accounts effectively over time.


Question 34

Topic: Ensuring Successful Operation of a Cloud Solution

An organization updates its deployment standards so that all workloads on Google Cloud must authenticate using their attached service accounts or Workload Identity instead of embedding API keys or JSON key files in code or images. Which principle does this change primarily support?

Options:

  • A. Improving high availability by enabling seamless failover between regions

  • B. Improving security by using least-privilege, managed identities instead of long-lived static credentials

  • C. Optimizing costs by reducing the number of service accounts that are billed

  • D. Increasing scalability by allowing workloads to scale horizontally without configuration changes

Best answer: B

Explanation: The scenario describes replacing embedded API keys or JSON key files with service accounts and Workload Identity. These are Google Cloud mechanisms for managed, short‑lived credentials tied to identities that can have precise IAM roles.

This change reduces the risk of key leakage, simplifies secret rotation, and makes it easier to apply least privilege. Rather than sharing a static key with broad permissions, each workload uses its own service account or federated identity with only the permissions it needs. That is fundamentally a security and least‑privilege identity improvement, not a cost, scalability, or availability optimization.

At ACE level, you should recognize that choosing service accounts, Workload Identity, or Workload Identity Federation instead of long‑lived keys is a direct application of the security best practice: use managed identities and avoid embedded credentials.
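
As one illustration, a deployment that attaches a dedicated service account instead of baking a JSON key into the image (service, image, and account names are placeholders):

```bash
gcloud run deploy payments-api \
    --image=europe-west1-docker.pkg.dev/example-project/apps/payments-api:latest \
    --region=europe-west1 \
    --service-account=payments-api-prod@example-project.iam.gserviceaccount.com
```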


Question 35

Topic: Planning and Configuring a Cloud Solution

Which TWO of the following statements about Google Cloud load balancing are INCORRECT? (Select TWO.)

Options:

  • A. All Google Cloud load balancers are global resources that can automatically send traffic to backends in any region without creating regional load balancers.

  • B. Pass-through TCP/UDP network load balancers operate at Layer 7 and are typically used for content-based routing of HTTP requests.

  • C. Proxy-based load balancers terminate client TCP or HTTP connections at the load balancer and then open new connections from the load balancer to the backends.

  • D. Global external HTTP(S) load balancers operate at Layer 7 and use a single global anycast IP address to route traffic to backends in multiple regions.

  • E. Regional internal HTTP(S) load balancers operate at Layer 7, are scoped to a single region, and use internal IP addresses to serve traffic within a VPC or connected networks.

Correct answers: A and B

Explanation: This question tests understanding of Google Cloud load balancer types, especially global vs regional scope and Layer 4 vs Layer 7 behavior, plus proxy vs pass-through models.

Some Google Cloud load balancers, such as the global external HTTP(S) load balancer, provide a single anycast IP and can route traffic to backends in multiple regions at Layer 7. Others, like internal HTTP(S) and internal TCP/UDP load balancers, are regional resources that serve traffic within a region and connected networks.

Proxy-based load balancers terminate client connections at the load balancer and then connect to backends, while pass-through (network) load balancers operate at Layer 4, forwarding packets to backends without inspecting HTTP content. Content-based routing is a Layer 7 (HTTP) capability, provided by HTTP(S) load balancers, not by pass-through TCP/UDP network load balancers.


Question 36

Topic: Planning and Configuring a Cloud Solution

Which TWO statements accurately describe how Compute Engine managed instance groups (MIGs) and instance templates work together to provide scalable, homogeneous VM fleets? (Select TWO.)

Options:

  • A. Editing an existing instance template automatically reconfigures all VMs that were previously created from that template in any MIGs.

  • B. A managed instance group uses a single instance template so that all VMs in the group are created with the same machine type, boot image, and metadata.

  • C. To roll out configuration changes (such as a new machine type or boot image) to all VMs in a MIG, you typically create a new instance template and then update the MIG to use it.

  • D. Each VM in a managed instance group can select its own instance template, allowing a mix of different configurations within the same group.

  • E. A managed instance group vertically scales by increasing CPU and memory on existing VMs according to the settings in its instance template when load increases.

Correct answers: B and C

Explanation: Managed instance groups (MIGs) and instance templates work together to let you run and manage large fleets of identical VM instances.

An instance template captures the VM configuration you want to standardize, such as machine type, boot disk image, labels, metadata, and service account. Instance templates are immutable definitions; once created, they do not change and do not retroactively alter existing VMs.

A managed instance group references a single instance template and uses it to create and maintain a homogeneous set of VMs. Because all instances are created from the same template, they share the same configuration, which simplifies deployment, scaling, and updates.

When you need to change the configuration (for example, a new image version, a different machine type, or updated metadata), you create a new instance template and update the MIG to use it. The MIG can then roll out new instances based on the new template, often with a rolling update strategy that gradually replaces old instances with new ones while keeping the service available.

MIGs scale horizontally by adjusting the number of VM instances, often using autoscaling policies based on metrics such as CPU utilization or load balancing capacity. They do not perform vertical scaling on existing VMs (changing CPU/RAM in place).
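
A minimal sketch of that rollout flow (template, group, and zone names are placeholders):

```bash
# Create a new immutable template capturing the changed configuration
gcloud compute instance-templates create web-template-v2 \
    --machine-type=e2-standard-4 \
    --image-family=debian-12 --image-project=debian-cloud

# Point the MIG at the new template and roll it out gradually
gcloud compute instance-groups managed rolling-action start-update web-mig \
    --zone=us-central1-a \
    --version=template=web-template-v2 \
    --max-unavailable=1
```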


Question 37

Topic: Ensuring Successful Operation of a Cloud Solution

Your team runs a single Compute Engine VM in a production project. All developers currently have the Owner role on the project. The security lead reminds you of the shared responsibility model: Google secures the underlying infrastructure; your team must control access using IAM. You want to improve security with minimal operational impact. What should you do next?

Options:

  • A. Enable Cloud Logging and Cloud Monitoring on the VM so that Google can automatically block any unauthorized configuration changes to the project.

  • B. Create a single service account with the Owner role, generate a long-lived key for it, and share the key with all developers instead of using individual user accounts.

  • C. Open a Google Cloud Support case requesting that Google centrally restrict which project resources your developers can modify, because the provider is responsible for access control.

  • D. Use Google Groups to organize developers and admins, grant each group only the necessary predefined IAM roles (such as Viewer or Compute Admin) at the project level, and remove the Owner role from most users.

Best answer: D

Explanation: On Google Cloud, the shared responsibility model means Google secures the underlying infrastructure (physical security, hardware, foundational networking, and many managed service internals), while customers are responsible for securing how their own resources are accessed and configured. IAM is the core mechanism customers use to implement that responsibility.

In this scenario, giving all developers the Owner role is a clear violation of least privilege. Improving security in line with the shared responsibility model means tightening IAM so that people only have the access they need, ideally via Google Groups for easier management over time. That directly addresses the customer side of the model without changing the workload architecture.

Using groups with predefined IAM roles such as Viewer, Editor, and Compute Admin is the recommended approach for small teams: you keep simple administration, align with the model, and meaningfully reduce risk by removing broad Owner rights from most users.
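
For reference, a sketch of the tightening steps (project, group, and user names are placeholders):

```bash
# Grant the group only the predefined roles it needs at the project level
gcloud projects add-iam-policy-binding example-project \
    --member=group:gcp-developers@example.com \
    --role=roles/compute.admin

# Remove the broad Owner role from individual users
gcloud projects remove-iam-policy-binding example-project \
    --member=user:dev1@example.com \
    --role=roles/owner
```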


Question 38

Topic: Configuring Access and Security

A security team wants a support group to have read access to objects in a Cloud Storage bucket only during business hours and only for objects whose names start with support/. Which Google Cloud access control feature does this requirement primarily describe?

Options:

  • A. Assigning the Storage Object Viewer basic role at the project level

  • B. Using IAM Conditions to grant context-aware access based on request attributes such as time and resource name

  • C. Creating a VPC Service Controls perimeter around the project containing the bucket

  • D. Configuring Cloud Storage object ACLs for each object in the bucket

Best answer: B

Explanation: The requirement is to allow a specific group access to Cloud Storage objects only under certain conditions: during business hours and only for objects whose names start with a particular prefix. This goes beyond simple role assignment and needs context-aware access control.

IAM Conditions extend IAM policies so that a role binding applies only when certain attributes match, such as the request time, resource name, or other contextual information. This lets you enforce rules like “members of group X can read objects only between 09:00 and 17:00, and only when the object name matches a certain pattern.” This is exactly what the scenario describes.

Other mechanisms like project-level roles, ACLs, or VPC Service Controls control who can access resources and from where, but they do not support fine-grained, attribute-based rules like time windows and resource-name prefixes in a single, maintainable IAM policy. IAM Conditions are the dedicated Google Cloud concept for this kind of context-aware access control.
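
A heavily hedged sketch of such a conditional binding; the bucket, group, time zone, and CEL expression are illustrative and should be checked against the IAM Conditions documentation:

```bash
# Read access only to objects under support/ and only during business hours
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
    --member=group:support@example.com \
    --role=roles/storage.objectViewer \
    --condition='expression=resource.name.startsWith("projects/_/buckets/example-bucket/objects/support/") && request.time.getHours("Europe/Berlin") >= 9 && request.time.getHours("Europe/Berlin") < 17,title=support-business-hours'
```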


Question 39

Topic: Setting Up a Cloud Solution Environment

Your finance team needs detailed Cloud Billing data retained for at least 10 years and regularly pulled into an on-premises accounting system that can import CSV files over SFTP. They do not plan to run interactive SQL analysis in Google Cloud. When configuring Cloud Billing exports, which destination is most appropriate to meet this requirement?

Options:

  • A. Download monthly billing reports as PDFs from the Cloud Console and manually upload them to the on-premises accounting system for storage and processing.

  • B. Configure Cloud Billing export to write CSV or JSON files into a Cloud Storage bucket in the appropriate region, and have the accounting system fetch the objects from there.

  • C. Configure Cloud Billing export to a BigQuery dataset, and have the accounting system connect via a BigQuery JDBC driver to run queries and download results.

  • D. Set up a Cloud Billing export stream directly to a Pub/Sub topic and subscribe to it from the on-premises accounting system to capture events in real time.

Best answer: B

Explanation: Cloud Billing supports exporting detailed billing data either to BigQuery or to Cloud Storage. The key distinction is how you intend to use that data.

BigQuery exports are ideal when you want to analyze billing data directly in Google Cloud using SQL, dashboards, and joins with other datasets. You pay for BigQuery storage and any queries you run. This is powerful for cost analysis but is not primarily an archival mechanism.

Cloud Storage exports write billing data as objects (CSV or JSON files) into a bucket. Cloud Storage is designed for low-cost, durable, long-term storage and makes it easy for external systems to download and process files without needing access to BigQuery. This aligns directly with requirements for long retention and integration with existing file-based workflows in external financial systems.

In the scenario, the finance team wants 10-year retention and an on-premises accounting system that imports CSV files. They explicitly do not need interactive SQL analysis in Google Cloud. Therefore, exporting Cloud Billing data to Cloud Storage is the most appropriate choice because it provides durable, cost-effective archival and straightforward external access via file downloads.


Question 40

Topic: Deploying and Implementing a Cloud Solution

Which description best matches the purpose of Database Center in Google Cloud?

Options:

  • A. A general-purpose monitoring service that collects metrics, logs, and traces for any Google Cloud resource or application.

  • B. A console experience that provides a fleet-level view of health, performance, and recommendations for managed databases (such as Cloud SQL and AlloyDB) across projects.

  • C. A fully managed, serverless data warehouse for running petabyte-scale SQL analytics workloads.

  • D. A per-instance monitoring page that shows metrics and logs only for a single Cloud SQL instance within one project.

Best answer: B

Explanation: Database Center is a specialized part of the Google Cloud console that focuses on managed databases such as Cloud SQL and AlloyDB. Its key value is to give engineers a fleet-level overview: you can see the health, performance, configuration status, and recommendations for many databases at once, often spanning multiple projects.

This is different from looking at a single database instance page or using general-purpose monitoring tools. Database Center aggregates information and surfaces database-specific insights, like which instances are overutilized, underutilized, misconfigured for high availability, or missing backups. This helps an operations team quickly identify and prioritize issues across their database estate, which aligns with the ACE domain of ensuring successful operation of cloud solutions.


Question 41

Topic: Deploying and Implementing a Cloud Solution

You have a Cloud DNS private managed zone named corp.gcp that contains an A record for api.corp.gcp. The zone is associated only with the dev-vpc network. A VM in dev-vpc can resolve api.corp.gcp, but a VM in prod-vpc gets NXDOMAIN. The hostname must remain internal-only, and you want a scalable Google Cloud–based fix.

What should you do?

Options:

  • A. Associate the prod-vpc VPC network with the existing corp.gcp private managed zone.

  • B. Configure Cloud NAT for prod-vpc so VMs can send DNS queries to Cloud DNS over the internet.

  • C. Create a new public managed zone for corp.gcp and copy the api.corp.gcp record into it.

  • D. Add a static entry for api.corp.gcp in /etc/hosts on all VMs in prod-vpc.

Best answer: A

Explanation: Cloud DNS private managed zones provide internal name resolution for VPC networks they are explicitly attached to. When a VM in a VPC queries a name, the Google Cloud metadata server acts as the recursive resolver and consults any private zones associated with that VPC. If no matching private zone is attached, and there is no corresponding public zone, the VM will receive NXDOMAIN.

In this scenario, the private zone corp.gcp is only associated with dev-vpc, so VMs in dev-vpc can resolve api.corp.gcp but VMs in prod-vpc cannot. To fix resolution for prod-vpc while keeping the name internal-only and using a scalable, managed approach, you should attach the prod-vpc VPC network to the existing private zone. This lets both VPCs use the same internal DNS records without exposing the hostname publicly or relying on manual host file edits.
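
A minimal sketch of the fix (the zone's resource name corp-gcp-zone is a placeholder; note that --networks replaces the whole list, so include every VPC that should resolve the zone):

```bash
gcloud dns managed-zones update corp-gcp-zone \
    --networks=dev-vpc,prod-vpc
```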


Question 42

Topic: Planning and Configuring a Cloud Solution

An engineer changes the outbound traffic of a latency-tolerant batch application from Premium Network Service Tier to Standard Network Service Tier to reduce egress costs. Which primary design principle does this change most directly support?

Options:

  • A. Cost optimization by trading some network performance for lower egress charges

  • B. Performance optimization by reducing network latency and jitter

  • C. Security hardening by keeping traffic entirely on private Google-managed links

  • D. High availability and reliability by using more redundant network paths

Best answer: A

Explanation: Google Cloud Network Service Tiers let you choose between Premium and Standard paths for traffic. Premium Tier keeps traffic on Google’s global backbone network for as long as possible, typically providing lower latency, more consistent performance, and often improved reliability, but at a higher price. Standard Tier uses more of the public internet, usually at lower cost but with less predictable performance.

In the scenario, the application is explicitly described as latency-tolerant batch and the engineer moves from Premium to Standard to reduce egress costs. That means they are intentionally accepting potentially lower or less consistent performance in exchange for lower network charges. This is a classic cost optimization decision: aligning the service tier with the workload’s actual performance needs to avoid overpaying for capacity or quality the workload does not require.

The other principles (performance optimization, reliability/high availability, and security hardening) would all typically push toward Premium Tier rather than away from it. Since the change is about spending less on networking for a tolerant workload, it most directly reflects cost optimization.
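
For reference, hedged examples of selecting Standard Tier (resource names are placeholders):

```bash
# Make Standard Tier the project-wide default for new external traffic
gcloud compute project-info update --default-network-tier=STANDARD

# Or choose the tier per resource when creating it
gcloud compute instances create batch-worker \
    --zone=europe-west1-b \
    --network-tier=STANDARD
```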


Question 43

Topic: Deploying and Implementing a Cloud Solution

Your team hosts a public web application on a single Compute Engine VM. After a previous restart, the VM’s public IP changed and broke the DNS A record. You captured the current network details for the VM:

Exhibit:

Name: web-frontend
Zone: us-central1-a
External IP: 203.0.113.10
IP type: Ephemeral

You want the public IP to remain the same across future restarts or even VM recreations. Based on the exhibit, what should you do?

Options:

  • A. Change the VM to use only an internal IP and place it behind an internal TCP/UDP load balancer.

  • B. Reserve a new static external IP address in the same region and assign it to the VM’s network interface.

  • C. Keep the ephemeral IP and configure a startup script to automatically update the DNS record after each reboot.

  • D. Remove the external IP from the VM and configure Cloud NAT so the VM can still be reached from the internet.

Best answer: B

Explanation: The exhibit shows that the VM web-frontend currently uses an external IP with type Ephemeral. Ephemeral external IPs are dynamically assigned and can change whenever the VM is stopped and started, or if the VM is recreated. This behavior caused the DNS record to break when the IP changed after a previous restart.

To ensure a public IP remains stable across restarts and even VM recreations, you should use a static external IP address. Static external IPs are reserved to your project at the regional or global scope (depending on type) and remain associated with your project until you explicitly release them. When you assign a static IP to the VM’s network interface, the VM will keep that same public address across restarts and can even be reattached to a replacement VM.

Therefore, the best solution is to reserve a static external IP in the same region as the VM and attach it to the VM’s NIC, replacing the current ephemeral address. This directly addresses the stated problem of IP changes affecting DNS, by making the IP persistent at the infrastructure layer rather than relying on scripts or workarounds.
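
One way to do this without touching the DNS record is to promote the address already in use to a static reservation; a sketch using the values from the exhibit:

```bash
# Reserve the VM's current ephemeral address as a static regional address
gcloud compute addresses create web-frontend-ip \
    --region=us-central1 \
    --addresses=203.0.113.10
```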


Question 44

Topic: Configuring Access and Security

You are configuring a GKE Autopilot cluster so a workload can call Cloud Storage using Workload Identity. You will map a Kubernetes service account to a Google Cloud service account and then test that the pod can list objects in a bucket. Which TWO actions should you AVOID while setting this up and verifying access? (Select TWO.)

Options:

  • A. Grant the project-level Owner role to the Google Cloud service account so that no permission issues interfere with your access test.

  • B. Grant the principal serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/app-sa] the roles/iam.workloadIdentityUser role on the Google Cloud service account.

  • C. Annotate the Kubernetes service account that the pods use with the email address of the Google Cloud service account to establish the identity mapping.

  • D. Create a long-lived JSON key for the Google Cloud service account, store it in a Kubernetes Secret, and configure the pod to authenticate using that key instead of relying on Workload Identity.

  • E. Create a Google Cloud service account for the workload and grant it only the specific IAM roles needed to call the required APIs.

Correct answers: A and D

Explanation: Workload Identity is the recommended way for GKE workloads to access Google Cloud APIs without using long-lived service account keys. It works by mapping a Kubernetes service account (KSA) to a Google Cloud service account (GSA), then allowing the KSA identity to impersonate the GSA. Pods running under that KSA automatically obtain short-lived credentials for the mapped GSA.

Good configuration steps include creating a dedicated GSA with least-privilege IAM roles for the target APIs, granting the KSA’s Workload Identity principal the roles/iam.workloadIdentityUser role on the GSA, and annotating the KSA with the GSA email. After that, you can run a pod using the KSA and verify that calls to the required APIs succeed.

Anti-patterns include bypassing Workload Identity by using long-lived service account keys and granting overly broad roles such as project Owner to a workload service account. Both increase security risk and violate Google Cloud best practices for IAM and key management.
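
A condensed sketch of the correct setup steps (project, namespace, bucket, and account names are placeholders):

```bash
# 1. Give the least-privilege Google service account (GSA) only the role it needs
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
    --member=serviceAccount:app-gsa@example-project.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer

# 2. Let the Kubernetes service account (KSA) impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@example-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:example-project.svc.id.goog[default/app-sa]"

# 3. Annotate the KSA with the GSA it maps to
kubectl annotate serviceaccount app-sa --namespace default \
    iam.gke.io/gcp-service-account=app-gsa@example-project.iam.gserviceaccount.com
```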


Question 45

Topic: Configuring Access and Security

An organization updates its deployment standards so that all workloads on Google Cloud must authenticate using their attached service accounts or Workload Identity instead of embedding API keys or JSON key files in code or images. Which principle does this change primarily support?

Options:

  • A. Improving high availability by enabling seamless failover between regions

  • B. Increasing scalability by allowing workloads to scale horizontally without configuration changes

  • C. Improving security by using least-privilege, managed identities instead of long-lived static credentials

  • D. Optimizing costs by reducing the number of service accounts that are billed

Best answer: C

Explanation: The scenario describes replacing embedded API keys or JSON key files with service accounts and Workload Identity. These are Google Cloud mechanisms for managed, short‑lived credentials tied to identities that can have precise IAM roles.

This change reduces the risk of key leakage, simplifies secret rotation, and makes it easier to apply least privilege. Rather than sharing a static key with broad permissions, each workload uses its own service account or federated identity with only the permissions it needs. That is fundamentally a security and least‑privilege identity improvement, not a cost, scalability, or availability optimization.

At ACE level, you should recognize that choosing service accounts, Workload Identity, or Workload Identity Federation instead of long‑lived keys is a direct application of the security best practice: use managed identities and avoid embedded credentials.


Question 46

Topic: Ensuring Successful Operation of a Cloud Solution

You are configuring a GKE Autopilot cluster so a workload can call Cloud Storage using Workload Identity. You will map a Kubernetes service account to a Google Cloud service account and then test that the pod can list objects in a bucket. Which TWO actions should you AVOID while setting this up and verifying access? (Select TWO.)

Options:

  • A. Grant the project-level Owner role to the Google Cloud service account so that no permission issues interfere with your access test.

  • B. Create a Google Cloud service account for the workload and grant it only the specific IAM roles needed to call the required APIs.

  • C. Grant the principal serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/app-sa] the roles/iam.workloadIdentityUser role on the Google Cloud service account.

  • D. Create a long-lived JSON key for the Google Cloud service account, store it in a Kubernetes Secret, and configure the pod to authenticate using that key instead of relying on Workload Identity.

  • E. Annotate the Kubernetes service account that the pods use with the email address of the Google Cloud service account to establish the identity mapping.

Correct answers: A and D

Explanation: Workload Identity is the recommended way for GKE workloads to access Google Cloud APIs without using long-lived service account keys. It works by mapping a Kubernetes service account (KSA) to a Google Cloud service account (GSA), then allowing the KSA identity to impersonate the GSA. Pods running under that KSA automatically obtain short-lived credentials for the mapped GSA.

Good configuration steps include creating a dedicated GSA with least-privilege IAM roles for the target APIs, granting the KSA’s Workload Identity principal the roles/iam.workloadIdentityUser role on the GSA, and annotating the KSA with the GSA email. After that, you can run a pod using the KSA and verify that calls to the required APIs succeed.

Anti-patterns include bypassing Workload Identity by using long-lived service account keys and granting overly broad roles such as project Owner to a workload service account. Both increase security risk and violate Google Cloud best practices for IAM and key management.


Question 47

Topic: Ensuring Successful Operation of a Cloud Solution

A security team wants a support group to have read access to objects in a Cloud Storage bucket only during business hours and only for objects whose names start with support/. Which Google Cloud access control feature does this requirement primarily describe?

Options:

  • A. Configuring Cloud Storage object ACLs for each object in the bucket

  • B. Using IAM Conditions to grant context-aware access based on request attributes such as time and resource name

  • C. Assigning the Storage Object Viewer basic role at the project level

  • D. Creating a VPC Service Controls perimeter around the project containing the bucket

Best answer: B

Explanation: The requirement is to allow a specific group access to Cloud Storage objects only under certain conditions: during business hours and only for objects whose names start with a particular prefix. This goes beyond simple role assignment and needs context-aware access control.

IAM Conditions extend IAM policies so that a role binding applies only when certain attributes match, such as the request time, resource name, or other contextual information. This lets you enforce rules like “members of group X can read objects only between 09:00 and 17:00, and only when the object name matches a certain pattern.” This is exactly what the scenario describes.

Other mechanisms like project-level roles, ACLs, or VPC Service Controls control who can access resources and from where, but they do not support fine-grained, attribute-based rules like time windows and resource-name prefixes in a single, maintainable IAM policy. IAM Conditions are the dedicated Google Cloud concept for this kind of context-aware access control.


Question 48

Topic: Deploying and Implementing a Cloud Solution

You are configuring a Cloud Monitoring alerting policy to notify your team when a Compute Engine VM’s CPU utilization stays high. You want an alert when CPU utilization is above 80% for at least 5 minutes, but during a stress test the CPU chart shows over 90% for 10 minutes and no alert is triggered.

Review the alert policy configuration:

Policy name: High CPU on web-servers
Resource type filter: VM Instance (gce_instance)
Metric: CPU utilization
Condition: Trigger if CPU utilization is below 80%
Duration: 5 minutes
Notification channel: Email: oncall@example.com
Policy status: Enabled

Based only on this configuration, which change will most likely make the alert trigger as intended?

Options:

  • A. Lower the threshold from 80% to 70% while still triggering when CPU utilization is below the threshold.

  • B. Change the notification channel from email to SMS so alerts are more visible to the on-call engineer.

  • C. Change the condition to trigger when CPU utilization is above 80% instead of below 80%.

  • D. Shorten the duration from 5 minutes to 1 minute so alerts fire more quickly.

Best answer: C

Explanation: Cloud Monitoring alerting policies evaluate metric time series against a condition, such as a threshold, over a specified duration. Incidents are created only when the condition evaluates to true; then notifications are sent using the configured channels.

In this scenario, the requirement is to alert when CPU utilization is above 80% for at least 5 minutes. The exhibit shows that the policy’s Condition is configured to trigger when CPU utilization is below 80%. That is the opposite of what is needed: during a stress test, CPU goes over 90%, so the metric never satisfies the condition of being below 80%, and therefore no incident is created.

Fixing this requires changing the comparison direction so that the policy evaluates high CPU correctly. Adjusting thresholds, durations, or notification channels without fixing the comparison would not cause the alert to trigger during periods of high CPU, because the core logic would still be inverted.


Question 49

Topic: Deploying and Implementing a Cloud Solution

Your team runs a latency-sensitive Cloud Run service that calls a Cloud SQL database with a 200-connection limit. Traffic is usually low at night but can spike to 800 requests/second during promotions. You are reviewing Cloud Run autoscaling settings. Which TWO configuration changes should you AVOID? (Select TWO.)

Options:

  • A. Configure max instances so that, assuming roughly one database connection per instance, the total possible instances stays below 200.

  • B. Set max instances to 1,000 while keeping concurrency at 1, regardless of the database’s 200-connection limit.

  • C. Use a moderate max-instances value together with connection pooling so that bursts are smoothed without creating thousands of direct database connections.

  • D. Set min instances to a small non-zero value, such as 2, to keep a few containers warm for typical daytime traffic.

  • E. Set min instances to 100 at all times to eliminate cold starts, even though overnight traffic is close to zero.

  • F. Increase request concurrency from 2 to 20 after confirming in load tests that the service is stateless and still meets latency objectives.

Correct answers: B and E

Explanation: Cloud Run autoscaling is controlled mainly by three knobs: min instances, max instances, and request concurrency. Min instances keeps a baseline number of containers warm; this reduces cold starts but also creates a fixed cost because those containers run even when there is little or no traffic. Max instances limits how far the service can scale out, which is important for protecting downstream systems like databases that have connection limits. Request concurrency controls how many simultaneous requests each container can handle; higher concurrency generally reduces the number of containers needed but only works well if the code is stateless and can handle the extra load without violating latency goals.

In this scenario, the service depends on a Cloud SQL database with a 200-connection limit and has low traffic at night. Setting an excessively high min instances value wastes money by keeping many idle containers running. Likewise, setting max instances very high with concurrency of 1 can create far more potential instances than the database can safely handle, risking overload and errors. Reasonable values for min instances, max instances, and concurrency should be chosen based on traffic patterns, latency needs, and backend capacity, typically verified by load testing.
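
For reference, a hedged sketch of reasonable settings for this scenario (the service name, region, and numbers are illustrative and should be validated by load testing):

```bash
gcloud run services update my-service \
    --region=europe-west1 \
    --min-instances=2 \
    --max-instances=150 \
    --concurrency=20
```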


Question 50

Topic: Ensuring Successful Operation of a Cloud Solution

Which statement best describes the primary reason to use Cloud KMS with customer‑managed encryption keys (CMEK) instead of the default Google‑managed encryption keys for a Google Cloud service?

Options:

  • A. To meet regulatory or internal compliance requirements by controlling your own encryption keys and enforcing separation of duties between data owners and key administrators

  • B. To avoid configuring IAM roles for security by relying on key permissions instead of resource‑level access controls

  • C. To improve performance and reduce latency of disk and database encryption compared to Google‑managed encryption keys

  • D. To enable encryption at rest for services that would otherwise store data in plaintext on Google Cloud

Best answer: A

Explanation: Cloud Key Management Service (Cloud KMS) is Google Cloud’s managed service for creating, storing, and controlling cryptographic keys. Many Google Cloud services support customer‑managed encryption keys (CMEK), which lets you use keys you manage in Cloud KMS instead of the default Google‑managed keys.

In most cases, services are already encrypted at rest using Google‑managed keys, so you do not need to do anything to get basic encryption. CMEK becomes important when regulations, contracts, or internal security policies require you to control key lifecycle (creation, rotation, disabling, and destruction) and to separate key administration from data or project administration. This separation of duties allows, for example, a security team to control keys in one project while an application team manages the data in another project.

Using CMEK does not primarily change performance, nor does it replace IAM policies on the underlying resources. It adds an extra layer of control and auditability over the keys used for encryption, which is why it is often required in regulated industries such as finance, healthcare, or government.
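
A minimal sketch of creating a CMEK key and pointing a resource at it; the project, location, and the bucket flag shown are illustrative:

```bash
# Keys managed by a dedicated security team or project
gcloud kms keyrings create app-keyring --location=europe-west1
gcloud kms keys create app-key \
    --keyring=app-keyring \
    --location=europe-west1 \
    --purpose=encryption

# Example: use the key as a Cloud Storage bucket's default encryption key
gcloud storage buckets update gs://example-bucket \
    --default-encryption-key=projects/security-project/locations/europe-west1/keyRings/app-keyring/cryptoKeys/app-key
```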


Continue with full practice

Use the Google Cloud Associate Cloud Engineer Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try Google Cloud Associate Cloud Engineer on Web
View Google Cloud Associate Cloud Engineer Practice Test

Free review resource

Read the Google Cloud Associate Cloud Engineer Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.

Revised on Thursday, May 14, 2026