Terraform Associate (004) sample questions, mock-exam practice, and simulator access with detailed explanations in IT Mastery on web, iOS, and Android.
HashiCorp Certified: Terraform Associate (004) focuses on infrastructure-as-code fundamentals, Terraform workflow, configuration, modules, state management, and safe change operations. If you are searching for Terraform Associate (004) sample questions, a practice test, mock exam, or exam simulator, this is the main IT Mastery page to start on web and continue on iOS or Android with the same account.
Start a practice session for HashiCorp Certified: Terraform Associate (004) below, or open the full app in a new tab for the best experience. There you can navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account used on mobile.
Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS or IT Mastery app on Google Play (Android) and then sign in with the same account on web to continue your sessions on desktop.
Foundational companion practice for Terraform Associate (004), covering IaC basics, workflow, HCL configuration, modules, state, and HCP Terraform, aligned with HashiCorp’s hour-long exam.
| Domain | Weight |
|---|---|
| Infrastructure as Code (IaC) with Terraform | 8% |
| Terraform fundamentals | 11% |
| Core Terraform workflow | 19% |
| Terraform configuration | 22% |
| Terraform modules | 11% |
| Terraform state management | 11% |
| Maintain infrastructure with Terraform | 8% |
| HCP Terraform | 10% |
These sample questions are drawn from the current local bank for this exact exam code. Use them to check your readiness here, then continue into the full IT Mastery question bank for broader timed coverage.
Why do teams commonly store Terraform state in a remote backend instead of keeping it only on a local machine?
Options:
Best answer: A
Explanation: Remote state is mainly about collaboration and control. A remote backend provides a shared source of truth for Terraform state, which helps teams coordinate changes more safely and manage access to that state.
Terraform state records the resources Terraform manages and how they map to your configuration. If that state stays only on one engineer’s machine, collaboration becomes fragile because other team members or automation may not use the same current state. A remote backend stores the state in a central location so shared workflows use one source of truth.
This is useful because teams can control access to the state, avoid accidental overwrites more easily, and support more reliable multi-user workflows. Many remote backends also support locking or similar coordination features, which further reduces conflicts during runs. That is different from features that define reusable code, configure providers, or pin provider dependencies.
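As a sketch, a remote backend is declared inside the terraform block. The bucket, key, and table names below are hypothetical, not from this question:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state" # hypothetical bucket name
    key            = "prod/terraform.tfstate"  # hypothetical state path
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"        # hypothetical table enabling state locking
  }
}
```

With a block like this in place, every collaborator who runs terraform init against the same configuration reads and writes the same central state.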
After you change a Terraform backend block to use a different remote backend, which command should you run so Terraform prepares the new backend and can safely migrate or reconnect state?
Options:
- terraform init
- terraform apply
- terraform plan
- terraform validate

Best answer: A
Explanation: Backend changes are handled during initialization, not during planning or applying. Re-running terraform init prepares the new backend and lets Terraform migrate existing state or reconfigure the backend connection safely.
When the backend block changes, Terraform treats that as a backend reinitialization event. The correct command is terraform init, because init prepares the working directory and backend before any plan or apply can use them. During this step, Terraform detects that the backend configuration changed and can prompt you to migrate existing state to the new backend or reconfigure the directory to use the backend without migration, depending on your intent.
You may also use options such as -migrate-state or -reconfigure with terraform init when appropriate.
The key takeaway is that backend changes are resolved during initialization, not during validation, planning, or resource creation.
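As an illustration (organization and workspace names are hypothetical), switching backends and reinitializing might look like this:

```hcl
terraform {
  # previously: backend "s3" { ... }
  backend "remote" {
    organization = "example-org"   # hypothetical organization

    workspaces {
      name = "prod-network"        # hypothetical workspace
    }
  }
}

# After editing the backend block, re-run initialization:
#   terraform init -migrate-state   # copy existing state into the new backend
#   terraform init -reconfigure     # use the new backend without migrating state
```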
- terraform plan reads configuration and state, but it does not initialize or switch backends.
- terraform apply assumes initialization is already complete and focuses on making infrastructure changes.
- terraform validate checks configuration correctness, not backend setup or state migration.

A team has a production bucket that was created manually before Terraform was adopted. They have already written the matching Terraform resource block used in other environments. They must avoid recreating the bucket, bring it under Terraform management, and review any proposed changes safely before updating production. What is the best next step?
Options:
- terraform import
- plan
- data source
- terraform apply to let Terraform adopt the bucket

Best answer: A
Explanation: Use import when the infrastructure object already exists and you want Terraform to start managing it. Import links the real resource to a resource address in state, which lets terraform plan show differences before any changes are applied.
terraform import is the workflow for bringing an existing resource under Terraform management without recreating it. The key idea is that import associates a real infrastructure object with a Terraform resource address in the state file. Once that association exists, Terraform can compare the current configuration to the real object and show proposed changes with terraform plan before anything is modified.
This is the safest approach for manually created production infrastructure when the team wants reuse, consistency, and change review. A data source only reads information about an existing object; it does not make Terraform manage it. terraform apply does not automatically adopt unmanaged resources, and recreating production infrastructure just to get it into state is risky and unnecessary.
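A minimal sketch of the import workflow, assuming a hypothetical bucket name:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-prod-logs" # hypothetical; should match the real bucket's name
}

# Link the existing bucket to this resource address in state:
#   terraform import aws_s3_bucket.logs example-prod-logs
#
# Then review any differences before touching production:
#   terraform plan
```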
- A data source only reads the bucket; it does not place the bucket under Terraform management.
- terraform apply does not automatically claim an existing unmanaged resource for state.

A team uses an external module from the Terraform Registry across several shared configurations.
```hcl
module "network" {
  source  = "acme/network/aws"
  version = ?
}
```
They want predictable upgrades without automatically adopting every future release. Which value best follows that safer default?
Options:
- Set version to ">= 2.4.0, < 3.0.0"
- Omit version so Terraform can use newer module releases
- Set version to ">= 2.4.0"
- Omit version and rely on .terraform.lock.hcl to pin the module

Best answer: A
Explanation: A bounded module version constraint is a safer default for external or shared modules because it limits upgrades to a known range. Unbounded selection can introduce unexpected changes, and .terraform.lock.hcl does not lock module versions.
For reusable modules from an external source, safer version management means pinning to an exact version or using a bounded range. A bounded constraint such as >= 2.4.0, < 3.0.0 allows planned updates within a defined scope while blocking automatic jumps to future major versions that may introduce breaking changes.
A lower-bound-only constraint like >= 2.4.0 is still open-ended, so Terraform may select much newer releases later. Omitting the version argument is even less predictable for registry modules. Also, the dependency lock file records provider selections, not module versions.
The key takeaway is to constrain shared external modules explicitly instead of allowing unbounded upgrades.
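Filled into a module block, the bounded constraint might read:

```hcl
module "network" {
  source  = "acme/network/aws"
  version = ">= 2.4.0, < 3.0.0" # accepts 2.x updates, blocks the next major version
}
```

The pessimistic constraint `~> 2.4` expresses a similar bound (any 2.x release at or above 2.4) more compactly.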
- >= 2.4.0 is unbounded upward, so later major releases can still be selected.
- Omitting version reduces predictability because newer registry releases may be used later.
- .terraform.lock.hcl is a common confusion; the lock file tracks providers, not module versions.

Which statement best describes what terraform init does in a Terraform working directory?
Options:
Best answer: C
Explanation: terraform init is Terraform’s setup step for a working directory. It installs required providers, downloads referenced modules, and initializes any configured backend so later commands like plan and apply can run correctly.
terraform init prepares a Terraform working directory before normal operations begin. It reads the configuration, installs the required provider plugins, fetches any referenced child modules, and initializes the configured backend that Terraform will use for state storage and access. Because of that, init is usually the first command run in a new directory, and it may need to be run again after changing backend settings or dependency requirements.
It does not create infrastructure, fix formatting, or act as the main configuration-checking step. Those tasks belong to other commands:
- terraform apply makes infrastructure changes.
- terraform validate checks whether the configuration is valid.
- terraform fmt normalizes configuration formatting.

The key takeaway is that init prepares dependencies and state access, not infrastructure changes.
- Creating infrastructure is the job of terraform apply, not initialization.
- Checking configuration is the job of terraform validate, which is separate from provider, module, and backend setup.
- Formatting is the job of terraform fmt, which only rewrites configuration style.

Which description best matches resource drift in Terraform?
Options:
- .terraform.lock.hcl after terraform init.

Best answer: C
Explanation: Resource drift occurs when the real infrastructure changes outside Terraform and no longer matches what Terraform state or configuration expects. Terraform usually reveals this during a plan when it compares current remote objects with its known and desired values.
Terraform state stores Terraform’s last known information about managed resources, and the configuration declares the desired state. Resource drift happens when someone or something changes the real infrastructure outside Terraform, such as through a cloud console, API call, or another tool, so the actual environment no longer matches that recorded or desired view.
When Terraform runs a plan, it reads the current remote resource data and compares it with state and configuration. If differences appear that were not introduced through Terraform, that is drift. This is different from changing a backend, updating a provider lock file, or editing configuration, because those are workflow or configuration changes rather than unexpected changes to live infrastructure.
- .terraform.lock.hcl changes provider version selection, not the actual deployed resource values.

A teammate says this plan proves infrastructure drift because the team only reformatted the Terraform files. Based on the exhibit, what is the best interpretation?
```hcl
# main.tf (partial)
variable "environment" {
  type = string
}

resource "aws_instance" "web" {
  tags = {
    Environment = var.environment
  }
}

# terraform.tfvars
environment = "prod"

# plan fragment
~ resource "aws_instance" "web" {
    ~ tags = {
        ~ "Environment" = "dev" -> "prod"
      }
  }
```
Options:
- terraform fmt changing the configuration layout.
- prod.

Best answer: D
Explanation: The tag value comes from var.environment, and the exhibit shows that terraform.tfvars currently sets it to prod. That means the visible reason for the plan is a desired configuration/input difference, not formatting, and the exhibit alone does not prove true drift.
True drift means the real resource was changed outside Terraform so the refreshed state no longer matches what Terraform expects. By contrast, terraform fmt only changes file formatting; it does not change evaluated resource arguments.
In this exhibit, the Environment tag is driven by var.environment, and the current input file sets that variable to prod. The plan shows the current value is dev, so Terraform wants to update the resource to match the desired configuration. That makes an input-driven configuration difference the best-supported explanation for the unexpected action. The snippet does not prove who set the current value to dev; it only shows that Terraform now wants prod.
The key takeaway is that an unexpected plan is not automatically proof of infrastructure drift.
- The plan is evidence that Terraform wants the tag to become prod, but not evidence of an out-of-band edit.
- Blaming terraform fmt fails because formatting changes whitespace and layout, not the evaluated tag value.
- The dev to prod diff is a normal planned change, not a sign to edit state manually.

A Terraform variable must accept a structured value with named attributes and predefined attribute types, such as:
```hcl
{
  name  = "web"
  size  = "small"
  count = 2
}
```
Which Terraform type constraint is the best match?
Options:
- tuple([string, string, number])
- object({ name = string, size = string, count = number })
- set(string)
- list(string)

Best answer: B
Explanation: Use object when a value has named fields and each field has a defined type. The example uses attribute names like name, size, and count, so an object type constraint matches the structure directly.
In Terraform, object is the complex type used for structured data with named attributes. Each attribute can have its own declared type, which makes object({ name = string, size = string, count = number }) a direct match for the example value.
A tuple can also mix types, but it is positional, not named. list and set are collections of similar element types, so they do not model a record with specific attribute names. When a configuration needs a single structured value with predictable fields, object is the most precise choice.
The key distinction is named attributes versus positional or repeated elements.
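A matching variable declaration would look like this sketch (the variable name is hypothetical):

```hcl
variable "instance" {
  type = object({
    name  = string # named attribute with its own type
    size  = string
    count = number
  })
}
```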
- tuple defines values by position, not by attribute names.
- list(string) requires all elements to be strings in an ordered sequence.
- set(string) is an unordered collection of unique strings, not a named structure.

A cloud engineer is using Terraform from a local laptop and needs the CLI to authenticate to HCP Terraform before working with remote workspaces. Which command is the standard way to do this?
Options:
- terraform workspace select
- terraform validate
- terraform init
- terraform login

Best answer: D
Explanation: terraform login is the standard CLI authentication workflow for HCP Terraform from a local development environment. It obtains a user token and stores it locally so later Terraform commands can authenticate to HCP Terraform.
The core concept is CLI authentication to HCP Terraform. From a local workstation, the standard Terraform workflow is to run terraform login, which guides the user through obtaining an HCP Terraform token and saving it in the local CLI credentials file. That token is then used by Terraform when it needs to interact with HCP Terraform.
This is separate from provider authentication. For example, authenticating to AWS, Azure, or GCP is handled through provider-specific credentials, not by terraform login. It is also separate from initialization and validation commands.
The key takeaway is that terraform login authenticates the Terraform CLI itself to HCP Terraform.
- terraform init prepares the working directory, installs providers and modules, and configures backend or cloud settings, but it does not sign in.
- terraform validate checks configuration correctness and does not perform any HCP Terraform authentication.
- terraform workspace select changes the current workspace context and does not create or store CLI credentials.

A user runs terraform apply against a backend that supports state locking and sees this message:
```
Error: Error acquiring the state lock
```
What does this most likely indicate?
Options:
- terraform init before continuing.

Best answer: D
Explanation: State locking exists to prevent concurrent updates to the same Terraform state. A lock wait or lock acquisition failure points to shared-state coordination, not to invalid HCL or other configuration logic errors.
A state lock protects the state file from simultaneous writes. If Terraform waits for or fails to acquire that lock, the usual cause is that another Terraform run is already operating on the same state, or a stale lock was left behind. That makes this a coordination problem around shared state, not evidence that the configuration itself is invalid.
Typical next steps are to wait for the other run to finish, confirm whether another process is active, and investigate a stale lock before using terraform force-unlock carefully. By contrast, invalid syntax, undefined references, or missing initialization produce validation, planning, or initialization errors rather than a state lock message.
The key takeaway is that lock-related errors are about safe state access, not about HCL correctness.
- Invalid configuration shows up during terraform validate, or planning, not as a lock-acquisition message.
- Re-running init is appropriate after certain backend or module source changes, but it does not explain a state lock error.

A team wants one Terraform workflow to manage resources in AWS and Azure without learning separate vendor-specific provisioning languages. Which Terraform concept makes this possible?
Options:
Best answer: D
Explanation: Terraform supports multi-cloud operations through providers. You use the same HCL language and core workflow, while each provider handles the vendor-specific API details for its platform.
Terraform is cloud-agnostic at the workflow level. You write configuration in HCL and use the same core commands such as terraform init, terraform plan, and terraform apply whether you manage one cloud or several. Providers are the integration layer: each provider knows how to translate Terraform resource definitions into that platform’s API calls. That is why Terraform can coordinate AWS, Azure, GCP, and other services in a consistent way without forcing teams to adopt one vendor’s native provisioning model. Backends and state are important for storing and tracking infrastructure, but they do not provide the cross-cloud abstraction themselves.
A Terraform team needs safe collaboration so two engineers cannot run conflicting updates against the same infrastructure state. Which Terraform concept defines where state is stored and can provide state locking when the chosen implementation supports it?
Options:
Best answer: A
Explanation: A backend is the Terraform concept that determines where state is stored and whether features like locking are available. That makes it central to safe team collaboration, especially when multiple users or runs might access the same state.
In Terraform, the backend defines how state is loaded, stored, and accessed. For collaboration, this matters because some backends support state locking, which prevents two users or runs from changing the same state at the same time. That protection is different from the state data itself: the state file records infrastructure mappings, but locking behavior comes from the backend implementation that manages access to that state.
Common examples of collaboration-friendly backends include HCP Terraform and certain remote storage integrations that support locking. By contrast, simply having a state file or shared storage location does not automatically provide safe coordination.
The key distinction is that state is the data, while the backend is the mechanism that manages where that data lives and whether locking is available.
A team stores state in HCP Terraform. They added a minimal bucket resource to a shared module and then imported an existing production bucket:
```hcl
resource "aws_s3_bucket" "logs" {}
```
They need safe plan review and the same module-based pattern across environments. What is the best next action?
Options:
Best answer: B
Explanation: Import associates an existing object with a Terraform resource address in state, but state is not a substitute for configuration. To safely review future changes and keep environments consistent, the team should update the shared module so it describes the imported bucket and then review a plan.
The core concept is that terraform import brings an existing object under Terraform state, but Terraform still uses configuration as the source of desired state. After import, the resource or module configuration must describe the imported object closely enough that a plan reflects intentional changes rather than surprises.
In this scenario, the safest next step is to update the shared module so it matches the bucket’s current settings, then review the plan. That supports reuse across environments and gives the team a reliable change review step before any apply.
A state entry alone does not define how Terraform should manage the object, and read-only workflows like data sources or refresh-only runs do not replace real resource configuration.
A single Terraform root module must manage Azure resources, AWS resources, and some AWS resources in a second region. Which Terraform approach is correct?
Options:
Best answer: D
Explanation: Terraform can use multiple provider types in one configuration, and it can also use multiple configurations of the same provider at the same time. Extra instances of the same provider use alias, and resources can reference the specific provider configuration they need.
In Terraform, provider configuration is separate from resource definitions. A single root module can use more than one provider type, such as AWS and Azure, and it can also define more than one configuration for the same provider. When you need a second instance of the same provider, such as AWS in another region, you add another provider block with an alias.
Resources that should use the non-default configuration reference that provider instance explicitly. This is the standard way to manage resources across platforms or across multiple instances of one provider from the same configuration.
A backend controls where state is stored, and workspaces separate state snapshots; neither one selects provider instances.
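A sketch of that layout, with hypothetical resource and bucket names:

```hcl
provider "azurerm" {
  features {}
}

provider "aws" {
  region = "us-east-1" # default AWS configuration
}

provider "aws" {
  alias  = "west"
  region = "us-west-2" # second AWS instance, selected by alias
}

resource "aws_s3_bucket" "west_logs" {
  provider = aws.west            # use the aliased configuration
  bucket   = "example-west-logs" # hypothetical bucket name
}
```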
During local development, a team wants a quick CI step to catch Terraform configuration errors before running a plan or changing infrastructure. Which command is designed for this purpose?
Options:
- terraform fmt
- terraform apply
- terraform validate
- terraform plan

Best answer: C
Explanation: terraform validate is used to verify that a Terraform configuration is syntactically valid and internally consistent. That makes it the right early check for local development and CI before generating a plan or applying changes.
terraform validate is the command Terraform provides to test whether configuration files are valid. It checks HCL syntax, argument placement, references, and whether the configuration is internally consistent. Because it does not create infrastructure, it fits early workflow stages such as developer prechecks and CI quality gates.
A common sequence is:
1. terraform fmt to normalize formatting
2. terraform validate to catch configuration errors
3. terraform plan to preview proposed changes
4. terraform apply to make changes

The closest distractor is terraform plan, which can expose issues too, but its main job is to calculate changes, not to serve as the lightweight validation step.
- terraform fmt only standardizes file formatting; it does not verify references or configuration correctness.
- terraform plan is mainly for previewing changes and is a later workflow step than a quick validation check.
- terraform apply performs real infrastructure changes, so it is not suitable for an early safety check.

A team uses the same Terraform configuration in Git for dev and prod, but engineers still run terraform plan and terraform apply locally and keep separate local state files. They want to move to HCP Terraform so runs start from VCS changes, teammates can review plans before apply, and each environment’s existing state is preserved safely. What is the best next action?
Options:
- terraform import for all existing resources.

Best answer: B
Explanation: The best migration path is to create separate VCS-linked HCP Terraform workspaces for dev and prod, run plans and applies remotely, and migrate the existing state. That gives managed collaboration and review while keeping Terraform’s record of the current infrastructure intact.
When a team is moving from local Terraform execution into managed HCP collaboration, the standard approach is to create separate workspaces for environments that need isolated state, connect those workspaces to the shared VCS repository, and use remote execution. That satisfies the requirement for VCS-triggered runs and team review of plans before apply. Migrating each environment’s current state into its matching workspace preserves Terraform’s mapping to the already managed infrastructure, so the team does not need to rebuild state from scratch.
Using separate workspaces for dev and prod also keeps state isolated while reusing the same configuration source for consistency. The closest distractor is re-importing everything, but terraform import is mainly for resources missing from state, not for replacing a valid existing state file during a migration.
A team uses HCP Terraform and an existing child module to deploy the same application stack. They now need one configuration to deploy that module in us-east-1 and us-west-2 from a single reviewed plan. They want to keep the module reusable and control provider versions explicitly. What is the best next action?
Options:
Best answer: D
Explanation: This is a provider selection problem, not a code-organization problem. The best solution is a multi-provider configuration: set provider version constraints, define a second aliased provider configuration, and pass the correct provider to each module call so HCP Terraform can review one plan across both regions.
This scenario is about selecting different provider configurations for different module instances while keeping one reusable module and one reviewed run. In Terraform, that is a multi-provider configuration. You declare provider version constraints in required_providers, configure more than one provider instance, and use an alias for the nondefault instance.
Creating more modules changes code organization and reuse, but it does not by itself solve provider selection. Creating separate HCP Terraform workspaces or using CLI workspaces changes state boundaries and review flow, which conflicts with the single-plan requirement.
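A sketch under these assumptions (the module path and version values are illustrative, not from the question):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # explicit, illustrative version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "app_east" {
  source = "./modules/app" # hypothetical reusable module
}

module "app_west" {
  source = "./modules/app"

  providers = {
    aws = aws.west # pass the aliased provider into this module call
  }
}
```

Because both module calls live in one configuration, a single HCP Terraform run produces one reviewable plan covering both regions.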
A cloud engineer has already run terraform init in a working directory and updated the configuration. Before making any real infrastructure changes, they want Terraform to show the proposed actions based on the current state and configuration. Which command should they run?
Options:
- terraform apply
- terraform plan
- terraform init
- terraform destroy

Best answer: B
Explanation: Use terraform plan when you need to review what Terraform intends to change before anything is created, updated, or deleted. It compares configuration with state and shows the planned actions without applying them.
In the Terraform workflow, terraform plan is the review step. It reads the current configuration and state, then generates an execution plan showing what Terraform would add, change, or destroy if applied. This makes it the right choice when a user wants to inspect proposed infrastructure changes safely before modifying real resources. By contrast, terraform apply performs the changes, terraform init prepares the working directory by installing providers and setting up the backend, and terraform destroy is used to remove managed infrastructure. The key distinction is that plan previews, while apply changes.
- terraform apply makes infrastructure changes instead of only previewing them.
- terraform init sets up the workspace and dependencies, not the change review step.
- terraform destroy plans and executes deletion of managed infrastructure.

A team stores one Terraform module in Git and uses it in dev, stage, and prod. Before opening a pull request, they want all .tf files rewritten into Terraform’s standard style so reviews are easier and formatting is consistent across environments, but they do not want to change the intended infrastructure behavior. What is the best next action?
Options:
- Run terraform validate before the pull request.
- Run terraform fmt in the module directory.
- Run terraform plan and review the plan output.

Best answer: C
Explanation: terraform fmt is the command for rewriting Terraform configuration into canonical style. It helps teams keep code readable and consistent across environments without changing what resources Terraform is intended to manage.
The core concept is that terraform fmt changes formatting, not infrastructure logic. It rewrites Terraform configuration files into canonical Terraform style, such as indentation, spacing, and layout, so teams get more consistent code reviews and cleaner shared modules. That makes it a good fit when the goal is collaboration and cross-environment consistency without changing intended behavior.
terraform fmt is commonly used early in the workflow, often before commit or pull request review. It does not replace validation or planning; those serve different purposes. Formatting improves readability, while validation checks configuration correctness and planning previews infrastructure changes. The closest distractor is validation, because it checks the configuration but does not rewrite the files.
- terraform validate is useful for checking syntax and internal consistency, but it does not rewrite configuration into canonical style.
- terraform plan helps review proposed infrastructure changes, but it does not format .tf files.

A team wants to pass a single variable into a module:
```hcl
module "app" {
  source = "./modules/app"

  name       = var.app.name
  port       = var.app.port
  enable_tls = var.app.enable_tls
}
```
Based on this configuration, which variable type is the best fit for var.app?
Options:
- tuple([string, number, bool])
- map(string)
- object({ name = string, port = number, enable_tls = bool })
- list(string)

Best answer: C
Explanation: The configuration reads var.app by named attributes, not numeric indexes. Because those attributes also have different types, the best fit is an object with explicit attribute definitions.
In Terraform, lists and tuples are for positional access, such as var.items[0]. Maps and objects are for named lookups, such as var.app.name. In this snippet, the caller is expected to provide one value with fixed attribute names: name, port, and enable_tls.
An object is the best match because it defines known attribute names and allows each attribute to have its own type. Here that naturally maps to a string, number, and boolean. A map also uses named keys, but all map values must share the same type, so map(string) would not fit this input shape.
The key clue is named attribute access plus mixed value types, which points to an object rather than a list or tuple.
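One plausible declaration matching that access pattern:

```hcl
variable "app" {
  type = object({
    name       = string # accessed as var.app.name
    port       = number # accessed as var.app.port
    enable_tls = bool   # accessed as var.app.enable_tls
  })
}
```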
- A list or tuple would be read positionally as var.app[0], not by attribute name.
- map(string) cannot model a number and a boolean alongside a string.

A small platform team uses one Terraform repository for dev and prod. They want plans and applies to run without depending on an engineer’s laptop, with shared change review and centrally managed state. What is the best next step?
Options:
- terraform plan and terraform apply locally, but require pull requests in Git first.

Best answer: C
Explanation: HCP Terraform workspaces can execute Terraform runs remotely, so plans and applies do not have to depend on each engineer’s local machine. This also supports shared visibility into changes and keeps state managed centrally for safer collaboration.
The core concept is remote execution with HCP Terraform workspaces. A workspace can be linked to a Terraform configuration and run plan and apply remotely, which removes reliance on individual laptops and gives the team a consistent execution environment. That directly supports collaboration, safer change review, and centralized state handling.
In this scenario, the team has two environments and wants shared workflows, so separate workspaces for dev and prod are the appropriate organizational unit. HCP Terraform then becomes the place where runs are queued, reviewed, and applied.
Using Git alone improves code review, but it does not move execution off local machines. Modules improve reuse, but they do not provide remote runs. Terraform Enterprise with Sentinel is far beyond what is needed for this requirement.
A team uses the same Terraform configuration in separate HCP Terraform workspaces for dev and prod. They want each subnet to attach to the VPC created by Terraform in that workspace, avoid manually maintaining different IDs per environment, and keep the normal plan review before apply.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  cidr_block = "10.0.1.0/24"
  vpc_id     = "vpc-12345"
}
What is the best change?
Options:
vpc_id = aws_vpc.app.id
terraform_remote_state
vpc_id into an HCP Terraform workspace variable
terraform import for the subnet before each apply

Best answer: A
Explanation: Use a direct resource attribute reference when one Terraform-managed object needs a value from another. Referencing aws_vpc.app.id keeps the configuration reusable across workspaces and preserves the normal HCP Terraform plan-and-apply workflow.
Terraform connects related managed resources by referencing an attribute from one resource inside another resource block. Here, the subnet needs the ID of the VPC that Terraform is already creating, so vpc_id = aws_vpc.app.id is the simplest and safest choice. Terraform will infer that the subnet depends on the VPC, show that relationship in the plan, and create them in the correct order during apply. Because each HCP Terraform workspace has its own state, the same configuration works across dev and prod without manually supplying different VPC IDs.
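Applied to the snippet above, the fix replaces the hard-coded ID with a direct attribute reference:

```hcl
resource "aws_subnet" "app" {
  cidr_block = "10.0.1.0/24"
  vpc_id     = aws_vpc.app.id  # implicit dependency on the VPC managed in this workspace
}
```

Because the reference resolves against each workspace's own state, dev and prod each attach to their own VPC with no per-environment IDs.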
Reference computed attributes such as .id, .arn, or .name from managed resources. Manual variables or remote-state lookups are better when the value comes from outside the current configuration, not when both objects are managed together.
In Terraform, one managed resource often needs an ID, name, or other value that is computed by another managed resource in the same configuration. Which construct is the normal way to pass that value between them?
Options:
aws_vpc.main.id

Best answer: B
Explanation: Terraform normally connects dependent managed resources with direct references to resource attributes, such as aws_vpc.main.id. This passes the computed value directly and creates an implicit dependency so Terraform can build the graph and order operations correctly.
Cross-resource references are the standard Terraform pattern for wiring managed resources together. When one resource needs a value produced by another, you reference the source resource’s attribute directly, such as vpc_id = aws_vpc.main.id. Terraform uses that reference for both value flow and dependency tracking, so it can plan and apply resources in the correct order.
Input variables are for values entering a module. Output values are for exposing values from a module to its parent or to external consumers. Data sources are mainly for reading existing infrastructure rather than passing a just-computed value between resources Terraform is already managing in the same configuration.
The key takeaway is to connect dependent resources with direct attribute references instead of manually copying values.
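A minimal illustration of the pattern, reusing the question's aws_vpc.main example (the consuming security-group resource is an assumed example, not from the question):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id  # value flows from the VPC; Terraform orders creation accordingly
}
```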
A teammate opened this Terraform file during code review:
variable "instance_type"{
type=string
}

output "chosen_type"{
value=var.instnce_type
}
They want to check the configuration for correctness issues like the bad reference, rather than only normalize spacing and layout. Which Terraform command is the best next step?
Options:
terraform plan
terraform validate
terraform init
terraform fmt

Best answer: B
Explanation: terraform validate is the command for checking whether a Terraform configuration is syntactically valid and internally consistent. In this snippet, it can catch that var.instnce_type does not match the declared variable, while terraform fmt would only rewrite formatting.
Terraform separates code style from configuration correctness. terraform fmt rewrites HCL into Terraform’s canonical style, such as indentation, spacing, and brace layout. terraform validate checks whether the configuration is valid for Terraform to use, including whether references point to declared values.
In the snippet, the declared input variable is instance_type, but the output refers to var.instnce_type. That mismatch is a correctness problem, not a formatting problem, so validation is the right check.
A simple workflow is:
terraform fmt to normalize style
terraform validate to check configuration correctness
terraform plan to preview infrastructure changes

The key takeaway is that formatting improves readability, while validation checks whether Terraform can correctly interpret the configuration.
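The same workflow as CLI commands, a sketch assuming the configuration directory has already been initialized with terraform init:

```shell
terraform fmt       # rewrites spacing and layout only; would not flag the bad reference
terraform validate  # reports that var.instnce_type does not match a declared variable
terraform plan      # previews infrastructure changes once validation passes
```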
terraform fmt is useful for style normalization, but it does not verify that var.instnce_type is a valid reference.
terraform plan can surface configuration issues, but it is a later step used to preview changes rather than the dedicated correctness check.
terraform init prepares the working directory and dependencies, but it does not by itself confirm that the configuration is internally valid.