Try 10 focused Google Cloud Associate Cloud Engineer questions on Deploying and Implementing a Cloud Solution, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | Google Cloud Associate Cloud Engineer |
| Topic area | Deploying and Implementing a Cloud Solution |
| Blueprint weight | 30% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Deploying and Implementing a Cloud Solution for Google Cloud Associate Cloud Engineer. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: this topic carries 30% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so treat these items as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Deploying and Implementing a Cloud Solution
Which statement best describes how Cloud DNS provides internal DNS resolution for Compute Engine VM instances and GKE Pods in a Google Cloud VPC network?
Options:
A. To use Cloud DNS internally, each VM and GKE node must be manually configured to query a custom DNS IP address instead of the default VPC DNS resolver.
B. Cloud DNS internal zones are project-wide, so any VM or GKE cluster in the same project can resolve them, regardless of which VPC network they use.
C. Cloud DNS private managed zones are attached to one or more VPC networks, and workloads in those networks automatically use the VPC’s internal DNS resolver to resolve records from the attached private zones.
D. Cloud DNS only supports public DNS zones, so internal resolution for VMs and GKE clusters must be implemented with custom DNS servers running on Compute Engine.
Best answer: C
Explanation: Cloud DNS integrates tightly with Google Cloud VPC networks through private managed zones. A private managed zone is a DNS zone whose visibility is limited to one or more attached VPC networks.
When a Compute Engine VM or a GKE node in a VPC performs a DNS lookup, it sends the request to the VPC’s internal DNS resolver (automatically configured via DHCP). That resolver evaluates the query against internal Google-provided records (such as internal instance hostnames) and any Cloud DNS private zones attached to the VPC. If the queried name matches a private zone, the resolver returns the corresponding private record; otherwise, it can forward to public DNS as appropriate.
GKE Pods typically use the node’s resolver (through kube-dns or CoreDNS). Because that resolver also points at the VPC’s internal DNS, Pods inherit the same behavior: they can resolve records from any Cloud DNS private zone attached to the cluster’s VPC network.
The key concept is that visibility is controlled by VPC network attachments, not just by project membership, and clients do not need any manual DNS configuration to benefit from Cloud DNS private zones.
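For reference, creating and attaching a private zone can be sketched with the gcloud CLI. This is a minimal sketch: the network name `my-vpc`, zone name `internal-example`, domain, and record values are placeholders, not taken from the question.

```bash
# Create a private zone visible only to the attached VPC network
gcloud dns managed-zones create internal-example \
    --dns-name="internal.example.com." \
    --description="Private zone for internal services" \
    --visibility=private \
    --networks=my-vpc

# Add an A record; VMs and GKE Pods in my-vpc resolve it automatically
gcloud dns record-sets create db.internal.example.com. \
    --zone=internal-example \
    --type=A --ttl=300 \
    --rrdatas=10.0.0.5
```

No per-VM resolver changes are needed: once the zone is attached to the VPC, the VPC's internal resolver serves its records to every workload in that network.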
Topic: Deploying and Implementing a Cloud Solution
Which TWO of the following statements about configuring Pod CPU and memory requests/limits in GKE Autopilot are INCORRECT? (Select TWO.)
Options:
A. If you request more CPU or memory than your workload typically needs, Autopilot will still bill you for the full requested amount while the Pod is running, even if actual usage is lower.
B. In GKE Autopilot, you define CPU and memory requests/limits in the Pod spec, and Autopilot automatically provisions and manages the underlying nodes to satisfy those requests.
C. To change a Pod’s resource request in Autopilot, you must first resize the underlying nodes in the cluster so that their machine types match the new Pod size.
D. Autopilot billing is based only on actual CPU and memory usage sampled over time, so you can safely over-request resources without increasing cost.
E. Setting very low CPU and memory requests can cause Pods to be packed densely on nodes, increasing the risk of CPU throttling or out-of-memory events during traffic spikes.
Correct answers: C and D
Explanation: GKE Autopilot is a fully managed mode where you manage resources at the Pod level through CPU and memory requests and limits. Autopilot then provisions and manages the underlying nodes for you, and your primary cost driver is the amount of CPU and memory you request for your Pods, not the actual utilization.
Because billing is based on requested resources, over-requesting CPU or memory will increase cost even if the workload does not use all of that capacity. Conversely, setting requests too low can hurt performance and reliability because Pods may be packed too tightly, causing CPU throttling or out-of-memory (OOM) events during spikes.
You never manually resize or pick node machine types in Autopilot. To adjust resource consumption, scheduling, or cost, you modify the Pod (or higher-level controller) resource requests/limits, and Autopilot transparently adjusts the node infrastructure to meet those requirements.
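As a concrete illustration, here is a hedged sketch of a Deployment applied with kubectl; the name, image, replica count, and resource sizes are placeholders chosen for the example.

```bash
# Apply a Deployment whose Pod-level requests drive Autopilot billing
# and node provisioning; all names and sizes below are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:          # what Autopilot bills and schedules on
            cpu: "500m"
            memory: "512Mi"
          limits:            # Autopilot generally aligns limits with requests
            cpu: "500m"
            memory: "512Mi"
EOF
```

To change cost or capacity, you edit these values and re-apply; Autopilot adjusts the underlying nodes without any manual node management.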
Topic: Deploying and Implementing a Cloud Solution
Your security team wants to audit who changes IAM policies and which API requests are blocked by organization policies in your Google Cloud projects. They prefer to rely on audit logs that are already enabled by default and avoid increasing log volume and cost. What should you do?
Options:
A. Enable System Event audit logs in each project, because they are disabled by default and required to log IAM changes.
B. Rely on Cloud Audit Logs Admin Activity and Policy Denied logs, which are enabled by default and record configuration changes and policy-denied requests for supported services.
C. Configure VPC Flow Logs and Firewall Rules Logging on all VPC networks, because these network logs include IAM changes and organization policy denials.
D. Enable Data Access audit logs for all services, because only Data Access logs record IAM policy changes and policy denials.
Best answer: B
Explanation: Cloud Audit Logs provides several types of audit logs: Admin Activity, Data Access, System Events, and Policy Denied. For most Google Cloud services, Admin Activity, System Event, and Policy Denied logs are enabled by default and cannot be fully disabled. Data Access logs are more verbose and typically must be explicitly enabled, which can increase log volume and cost.
To track who changes IAM policies or other configuration settings, you use Admin Activity logs, which record administrative operations such as creating, updating, or deleting resources and IAM policies. To track API requests that are blocked by organization policies or VPC Service Controls, you use Policy Denied logs, which record the denied requests and the reason.
Because the scenario explicitly wants to rely on logging that is already enabled by default and to avoid additional log volume and cost, the best approach is to use the default Admin Activity and Policy Denied audit logs, rather than enabling the more verbose Data Access logs or unrelated logging features.
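A quick way to confirm both default log types are flowing is to query them with gcloud. This sketch assumes the standard audit log names; the method filter is one common example, not the only one.

```bash
# Recent Admin Activity entries (e.g., IAM policy changes via SetIamPolicy)
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"SetIamPolicy"' \
  --limit=10

# Recent Policy Denied entries (requests blocked by org policy or VPC-SC)
gcloud logging read 'logName:"cloudaudit.googleapis.com%2Fpolicy"' --limit=10
```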
Topic: Deploying and Implementing a Cloud Solution
You are reviewing a design document that lists cases where your team plans to reserve static external IPv4 addresses in Google Cloud. You must identify any unnecessary or wasteful uses. Which of the following planned uses is INCORRECT and should be avoided?
Options:
A. Assigning static external IPs to short-lived development VMs created by CI pipelines, which have no DNS records and only access the internet to download package updates.
B. Assigning a static external IP to a VPN gateway used for a site-to-site VPN to your on-premises firewall, which expects connections from a fixed peer IP.
C. Assigning a static external IP to a customer-facing web app exposed through an external HTTP(S) load balancer that is referenced by a DNS A record.
D. Assigning a static external IP to a Cloud NAT gateway whose address must be shared with a payment provider so they can allowlist outbound calls from your VMs.
Best answer: A
Explanation: Static external IP addresses in Google Cloud are primarily useful when an endpoint must remain stable over time—for example, when it is referenced in public DNS records or allowlisted by partners, firewalls, or other systems. In those cases, changing the IP would require coordinated updates and could cause outages.
For resources that are short-lived, not exposed externally by name, and not referenced in any allowlists, static IPs provide little benefit. Ephemeral addresses (or no external IP combined with Cloud NAT) are usually sufficient and reduce administrative overhead. Reserving static IPs in those situations is an anti-pattern and does not align with common best practices for managing networking resources.
In this question, the design document mixes valid static-IP use cases (DNS, allowlisting, VPN) with one case that adds static IPs where they are not needed. The task is to recognize that unnecessary use as the incorrect plan.
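For the short-lived development VMs in option A, outbound package downloads can be handled with no external IPs at all. A minimal Cloud NAT sketch, with the router, NAT config, network, and region names all placeholders:

```bash
# Cloud Router that the NAT gateway attaches to
gcloud compute routers create nat-router \
    --network=my-vpc --region=us-central1

# NAT gateway with auto-allocated external IPs for all subnets in the region
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

Only the allowlisting case (option D) would justify swapping `--auto-allocate-nat-external-ips` for a reserved address via `--nat-external-ip-pool`.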
Topic: Deploying and Implementing a Cloud Solution
You are troubleshooting frequent false-positive alerts in Cloud Monitoring and plan to use Gemini Cloud Assist to improve your alerting. Which of the following is NOT an appropriate way to use Gemini Cloud Assist in this situation?
Options:
A. Use Gemini Cloud Assist to suggest more appropriate alert thresholds and conditions, then manually refine and apply them
B. Ask Gemini Cloud Assist to summarize recent metric trends and explain possible reasons for CPU and latency spikes
C. Automatically accept and deploy Gemini Cloud Assist’s suggested alerting changes directly to production without human review or testing
D. Ask Gemini Cloud Assist to identify missing metrics or log-based signals that could reduce false positives in existing alerts
Best answer: C
Explanation: Gemini Cloud Assist for Cloud Monitoring is designed to help engineers interpret observability data, identify patterns or anomalies, and refine alerting policies. It provides insights and recommendations, but it is not a substitute for human judgment or change control.
Using Gemini to summarize metrics, suggest better thresholds, or propose additional signals is appropriate because it accelerates analysis while engineers still review, test, and implement any changes. This fits an advisory, assistant-style role for AI.
However, allowing Gemini’s suggestions to be deployed automatically to production alerting configurations without human review is an unsafe operational practice. Alert policies directly affect incident response and reliability; changes should follow standard review and testing processes to avoid missed incidents or excessive alert noise.
Topic: Deploying and Implementing a Cloud Solution
Your security team cannot find log files older than 30 days in a Cloud Storage bucket that receives exported audit logs. Compliance requires keeping logs for 1 year. You discover a lifecycle rule on the bucket that deletes objects 30 days after creation. What should you do to meet the requirement?
Options:
A. Change the bucket’s storage class to Archive so objects are automatically retained for 1 year.
B. Update the bucket lifecycle rule to delete objects 365 days after creation instead of 30 days.
C. Increase the default log retention in Cloud Logging to 365 days for the exported log buckets.
D. Enable object versioning on the Cloud Storage bucket so previous object versions are preserved beyond 30 days.
Best answer: B
Explanation: The scenario describes exported audit logs being deleted from a Cloud Storage bucket after 30 days due to an existing lifecycle rule. Compliance, however, requires that logs be retained for 1 year. In Cloud Storage, lifecycle rules control automatic actions such as deleting objects after a specified age.
To fix the problem, you must adjust the lifecycle condition so that objects are not deleted before the required retention period. Updating the rule from 30 days to 365 days (or another appropriate 1‑year value) ensures logs remain available long enough while still automatically cleaning up old data to manage storage costs.
Other configuration changes like storage class, Cloud Logging retention, or versioning do not override a delete lifecycle rule and therefore do not solve the underlying cause of the missing objects.
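A hedged sketch of the fix, assuming the bucket is named `audit-log-bucket` (a placeholder):

```bash
# Lifecycle policy: delete objects only after the 1-year retention window
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

gcloud storage buckets update gs://audit-log-bucket \
    --lifecycle-file=lifecycle.json
```

If compliance also requires that nobody can delete logs early, a bucket retention policy (`--retention-period`) is the stronger control, though this question only asks about the lifecycle rule.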
Topic: Deploying and Implementing a Cloud Solution
You manage a single Compute Engine VM in a custom VPC. The VM has no external IP, and SSH from the Cloud Console and Cloud Shell fails with a timeout. You must:
(1) keep the VM private with no public IP, (2) avoid exposing SSH broadly to the internet, and (3) manage SSH access through IAM rather than manually distributed SSH keys.
What should you do to best meet these goals?
Options:
A. Assign an ephemeral external IP to the VM, open a firewall rule for tcp:22 from 0.0.0.0/0, and have engineers upload their SSH keys in the metadata.
B. Enable OS Login and IAP TCP forwarding on the project, grant engineers suitable OS Login and IAP tunnel roles, and add a firewall rule allowing tcp:22 from the IAP IP range to the VM.
C. Set up a site-to-site Cloud VPN from your office network to the VPC, restrict the firewall to office IP ranges, and manage local Linux users directly on the VM.
D. Create a Cloud NAT gateway for the subnet and rely on NAT to allow SSH from the Cloud Console and Cloud Shell without changing any firewall rules.
Best answer: B
Explanation: The scenario is about securely connecting to a Compute Engine VM using SSH when the VM has no external IP. SSH from the Cloud Console and Cloud Shell uses Google-managed mechanisms (either traditional browser/Cloud Shell SSH to an external IP, or Identity-Aware Proxy (IAP) TCP forwarding for VMs without external IPs). The goals explicitly require keeping the VM private (no public IP), avoiding broad SSH exposure to the internet, and managing access via IAM rather than manual SSH keys.
Using OS Login with IAP TCP forwarding is the recommended way to meet these goals. OS Login maps IAM roles to Linux accounts and SSH permissions, while IAP TCP forwarding allows engineers to reach port 22 on an internal-only VM through a Google-managed proxy. A firewall rule must explicitly allow tcp:22 from the IAP proxy IP range (35.235.240.0/20) to the VM, and engineers must have both OS Login roles and the IAP tunnel role.
Other approaches either break the security requirement by exposing a public SSH endpoint, misunderstand what Cloud NAT does, or add unnecessary complexity and ignore the IAM-based access requirement.
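A sketch of the key steps, with the project, network, VM, zone, and user as placeholders:

```bash
# Turn on OS Login project-wide so IAM governs SSH access
gcloud compute project-info add-metadata \
    --metadata enable-oslogin=TRUE

# Allow SSH only from the IAP proxy range, not the whole internet
gcloud compute firewall-rules create allow-ssh-from-iap \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:22 --source-ranges=35.235.240.0/20

# Grant an engineer the needed roles (OS Login + IAP tunnel)
gcloud projects add-iam-policy-binding my-project \
    --member=user:engineer@example.com --role=roles/compute.osLogin
gcloud projects add-iam-policy-binding my-project \
    --member=user:engineer@example.com \
    --role=roles/iap.tunnelResourceAccessor

# Connect to the internal-only VM through the IAP tunnel
gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap
```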
Topic: Deploying and Implementing a Cloud Solution
You deployed a public web app on a single Compute Engine VM and created a DNS A record pointing to the VM’s external IP. After you restart the VM for maintenance, users report that the site is unreachable until you manually update the DNS record to the new IP. What should you do to prevent this issue in the future?
Options:
A. Reserve a static external IP address in the same region and assign it to the VM’s network interface, then keep the DNS A record pointing to that IP.
B. Create a startup script on the VM that calls the DNS provider’s API to update the A record with the VM’s current external IP on each boot.
C. Update the DNS record to use the VM’s internal IP address instead of the external IP, so that it no longer changes on restart.
D. Lower the DNS record’s TTL so that clients refresh the VM’s external IP more frequently after each restart.
Best answer: A
Explanation: When you assign an ephemeral external IP to a Compute Engine VM, that IP can change whenever the VM is stopped and started or recreated. If a public DNS A record points to that ephemeral IP, any change to the IP breaks connectivity until DNS is updated to the new address.
To provide a stable endpoint for DNS records or partner allowlists, you should reserve a static external IP address and attach it to the resource (such as a VM or load balancer). A static external IP remains associated with your project and does not change when the VM is restarted, so the DNS record can safely reference that IP long-term.
In this scenario, the repeated need to update DNS after VM restarts clearly indicates that the root cause is the changing ephemeral external IP. Reserving and binding a static external IP to the VM directly addresses that cause and is the simplest, Google-recommended fix at this scale.
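The fix can be sketched with gcloud; the VM name, address name, region, and zone are placeholders:

```bash
# Reserve a regional static external IP
gcloud compute addresses create web-ip --region=us-central1

# Replace the VM's ephemeral access config with the reserved address
# (--access-config-name may be needed if the VM's access config does
# not use gcloud's default name)
gcloud compute instances delete-access-config web-vm --zone=us-central1-a
gcloud compute instances add-access-config web-vm --zone=us-central1-a \
    --address=$(gcloud compute addresses describe web-ip \
        --region=us-central1 --format='value(address)')
```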
Topic: Deploying and Implementing a Cloud Solution
You manage a Cloud Storage bucket named reports-bucket that currently uses fine-grained object ACLs. You need to: (1) stop using object ACLs and manage access only with IAM, and (2) allow the Google Group finance-analysts@example.com to read objects, while only the service account etl-writer@project-id.iam.gserviceaccount.com can upload new objects. Other project members must not gain access to this bucket. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Grant roles/storage.admin on reports-bucket to the Google Group finance-analysts@example.com so they can manage the bucket and objects themselves.
B. Grant roles/storage.objectViewer on reports-bucket to the Google Group finance-analysts@example.com, and grant roles/storage.objectCreator on reports-bucket to etl-writer@project-id.iam.gserviceaccount.com.
C. Enable uniform bucket-level access on reports-bucket so that access is controlled only by IAM policies and existing object ACLs are ignored.
D. Assign roles/storage.objectAdmin on the entire project to etl-writer@project-id.iam.gserviceaccount.com so it can create and delete objects in any bucket in the project.
E. Keep fine-grained access in place and configure object ACLs that grant READER to finance-analysts@example.com and WRITER to etl-writer@project-id.iam.gserviceaccount.com.
Correct answers: B and C
Explanation: Cloud Storage supports two primary access control models: fine-grained object ACLs and IAM-based control with uniform bucket-level access. When you enable uniform bucket-level access on a bucket, object ACLs are disabled and IAM policies (roles and members) become the single source of truth for authorizing access to that bucket and its objects.
In this scenario, the requirements are to stop using ACLs, manage access centrally with IAM, and grant read-only access to a finance group while allowing a specific service account to upload new objects without granting unnecessary privileges. The correct approach is to enable uniform bucket-level access and then attach least-privilege IAM roles at the bucket level to the appropriate principals.
roles/storage.objectViewer allows listing and reading objects but does not allow modification or deletion. roles/storage.objectCreator allows uploading new objects but not overwriting or deleting existing ones. Applying these roles at the bucket level ensures permissions are limited to reports-bucket and do not unintentionally apply to other buckets in the project.
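A sketch of the two correct actions using the names from the question:

```bash
# Disable object ACLs; IAM becomes the only access control for the bucket
gcloud storage buckets update gs://reports-bucket \
    --uniform-bucket-level-access

# Read-only access for the finance group
gcloud storage buckets add-iam-policy-binding gs://reports-bucket \
    --member=group:finance-analysts@example.com \
    --role=roles/storage.objectViewer

# Upload-only access for the ETL service account
gcloud storage buckets add-iam-policy-binding gs://reports-bucket \
    --member=serviceAccount:etl-writer@project-id.iam.gserviceaccount.com \
    --role=roles/storage.objectCreator
```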
Topic: Deploying and Implementing a Cloud Solution
Your team runs a production Cloud SQL for PostgreSQL instance with automated backups and point-in-time recovery (PITR) enabled. A developer accidentally truncates a single important table. You want to recover the table while minimizing downtime and avoiding additional data loss. Which action is INCORRECT?
Options:
A. Use point-in-time recovery to restore the entire production instance in place to a timestamp just before the truncation, permanently discarding all writes made after that time.
B. If the exact time of the truncation is unknown, use Cloud Logging to approximate when it occurred, choose the closest earlier timestamp within the PITR window for a point-in-time clone, and reconcile any missing rows during import.
C. Restore the most recent automated backup taken before the truncation to a new temporary instance, verify the recovered table’s contents, and then copy the data into the production instance.
D. Create a point-in-time clone of the instance from just before the truncation, then export the affected table from the clone and import it into the production instance.
Best answer: A
Explanation: Cloud SQL automated backups and point-in-time recovery (PITR) are designed to help you recover from accidental data loss, such as a dropped or truncated table. When only a subset of data is affected and production is otherwise healthy, you normally restore to a separate instance (via backups or PITR clones) and then copy back only the needed data.
Overwriting the live production instance with an earlier state should be a last resort, because it discards all changes made after that time. In this scenario, only one table was truncated and the requirement is to minimize downtime and avoid additional data loss. The safest pattern is to create a point-in-time clone or restore a backup to a new instance, verify the data, and then import or copy back the affected table into production.
The choice that performs an in-place restore of the entire production instance to an earlier timestamp violates these goals and is therefore the incorrect response compared with the clone-based and backup-to-new-instance approaches.
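The clone-based recovery flow can be sketched as follows; the instance names, timestamp, database, table, and bucket are placeholders:

```bash
# Clone production to its state just before the truncation (RFC 3339 UTC)
gcloud sql instances clone prod-pg prod-pg-recovery \
    --point-in-time=2024-05-01T10:15:00Z

# Export only the affected table from the clone...
gcloud sql export sql prod-pg-recovery \
    gs://recovery-bucket/important_table.sql \
    --database=appdb --table=important_table

# ...then, after verifying the contents (and dropping or renaming the
# damaged table if the dump recreates it), import into production
gcloud sql import sql prod-pg \
    gs://recovery-bucket/important_table.sql --database=appdb
```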
Use the Google Cloud Associate Cloud Engineer Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the Google Cloud Associate Cloud Engineer Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.