Try 30 free Terraform Associate (004) questions across the exam domains, with explanations, then continue with full IT Mastery practice.
This free full-length Terraform Associate (004) practice exam includes 30 original IT Mastery questions across the exam domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the Terraform Associate (004) Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Domain | Weight |
|---|---|
| Infrastructure as Code (IaC) with Terraform | 8% |
| Terraform Fundamentals | 11% |
| Core Terraform Workflow | 19% |
| Terraform Configuration | 22% |
| Terraform Modules | 11% |
| Terraform State Management | 11% |
| Maintain Infrastructure with Terraform | 8% |
| HCP Terraform | 10% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Terraform Modules
A team reuses a registry module across several environments:
module "network" {
  source  = "appcorp/network/aws"
  version = "~> 2.4.0"
}
What is the main benefit of including the version argument in this module block?
Options:
A. It stores the module’s resources in a separate state file.
B. It keeps module selection predictable and reduces unexpected breaking changes.
C. It pins the provider plugin versions used by the module.
D. It automatically upgrades the module to any future major release.
Best answer: B
Explanation: Module version constraints tell Terraform which registry module releases are acceptable. That keeps runs more predictable across environments and helps prevent a newer module release from introducing breaking changes unexpectedly. Teams can then upgrade on purpose instead of by surprise.
In a module block, the version argument constrains which releases Terraform can install from a registry source. This matters for reused modules because teams depend on stable inputs, outputs, and behavior across environments. Without a constraint, a fresh working directory or an intentional upgrade can pull a newer module release, including one with breaking changes. By setting an allowed version range, teams make runs more predictable and choose when to test and adopt newer module versions.
Version constraints control acceptable module releases; they do not pin provider plugins or change how Terraform state is stored.
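Constraint syntax gives teams a dial between flexibility and strict pinning. A sketch of common styles on the module block from the question (the commented alternatives are illustrative, not part of the original configuration):

```hcl
module "network" {
  source = "appcorp/network/aws"

  # "~> 2.4.0" accepts 2.4.x patch releases but rejects 2.5.0 and above.
  version = "~> 2.4.0"

  # Other common styles (illustrative):
  # version = ">= 2.4.0, < 3.0.0"  # explicit range
  # version = "2.4.1"              # exact pin
}
```

Looser ranges pick up fixes automatically; exact pins make every environment identical until someone deliberately bumps the version.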
Topic: Terraform Modules
A teammate says the version line in the module "network" block is pinning the AWS provider. Review the configuration.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.31"
    }
  }
}

module "network" {
  source  = "examplecorp/network/aws"
  version = "1.4.2"
}
Which dependency is actually being constrained by version = "1.4.2"?
Options:
A. The backend or state format version
B. The examplecorp/network/aws module release
C. The hashicorp/aws provider plugin version
D. The Terraform CLI version for this root module
Best answer: B
Explanation: The version argument shown inside the module block constrains the module release because the source uses a registry-style module address. Provider versioning is separate and is already handled by the aws entry in required_providers.
Terraform treats modules and providers as different dependency types. In this snippet, source = "examplecorp/network/aws" is a registry-style module source, so version = "1.4.2" selects that module’s release. The AWS provider is constrained elsewhere: under terraform.required_providers, where hashicorp/aws is pinned with ~> 5.31.
If you wanted to constrain the Terraform CLI itself, you would use required_version in the terraform block. Backend and state versions are not controlled by the version argument in a module block.
The key distinction is that module versioning and provider versioning are separate, even when both appear in the same configuration.
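As a sketch, the three distinct version controls live in different places (the provider and module values are from the example above; the required_version value is illustrative):

```hcl
terraform {
  required_version = ">= 1.5.0" # constrains the Terraform CLI itself (illustrative)

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.31" # constrains the provider plugin
    }
  }
}

module "network" {
  source  = "examplecorp/network/aws"
  version = "1.4.2" # constrains the registry module release
}
```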
- Provider versions are pinned in required_providers, not in a module block.
- The Terraform CLI version is constrained by the required_version setting in the terraform block.
- The module version argument does not version or relocate state.

Topic: Terraform Fundamentals
A cloud engineer deleted a resource outside Terraform and recreated it manually. The configuration still uses the same resource address, but the remote object now has a different ID. Instead of using supported Terraform workflows, the engineer wants to open the state file in a text editor and replace the old ID by hand. Why is this risky?
Options:
A. It can map the resource address to the wrong object, causing misleading drift and bad lifecycle actions.
B. Manual edits are safe when the new object has the same type.
C. Terraform ignores manual state edits because plan reads only configuration.
D. State affects outputs only, not create, update, or destroy decisions.
Best answer: A
Explanation: Terraform state is the record Terraform uses to map each resource address to a specific real object. If you hand-edit that mapping carelessly, Terraform can misread drift and act on the wrong object during later plans or applies.
Terraform state is not just a cache. It stores the relationship between a resource address in configuration and the real remote object Terraform believes it manages, plus metadata used during planning. If you manually swap an ID or edit entries carelessly, Terraform may believe the wrong object belongs to that address. That can hide real drift, confuse future plans, and cause broken lifecycle behavior such as unexpected replacement or destruction.
terraform plan compares configuration, state, and provider-read data. Because state directly affects those decisions, manual text editing is risky. Safer approaches use supported workflows such as terraform import or the terraform state commands.
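A minimal sketch of the supported path, using a hypothetical address and ID (config-driven import requires Terraform 1.5 or later):

```hcl
# Tell Terraform to adopt the manually recreated object under the
# existing resource address, instead of hand-editing state.
import {
  to = aws_instance.web              # hypothetical resource address
  id = "i-0a1b2c3d4e5f67890"         # hypothetical new object ID
}
```

The older CLI form is terraform import aws_instance.web &lt;id&gt;; either way, Terraform records the new mapping through a supported workflow and the next plan can verify it.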
The claim that plan reads only configuration fails because Terraform uses state to know what it already manages.

Topic: Core Terraform Workflow
A team already initialized a Terraform working directory. They then change a child module block so its source points to a different registry module. Which command should they run before terraform plan?
Options:
A. terraform apply
B. terraform validate
C. terraform init
D. terraform fmt
Best answer: C
Explanation: terraform init prepares the working directory by installing providers, downloading modules, and configuring backend data. When a module source changes, Terraform must reinitialize so the new module code is available before planning.
The core concept is that terraform init is required not only the first time you use a working directory, but also when provider, backend, or module dependencies change materially. A changed child module source means Terraform may need to download different module code, so re-running init updates the working directory before terraform plan evaluates the configuration.
terraform validate checks whether the configuration is syntactically valid and internally consistent, but it does not install or update dependencies. terraform fmt only rewrites configuration files into standard formatting. terraform apply creates or updates infrastructure, but it should come after proper initialization and planning.
A good rule: if the configuration changes what Terraform must download or configure behind the scenes, run terraform init again.
Topic: HCP Terraform
A team uses a VCS-driven HCP Terraform workspace. During a run, they see this note:
HCP Terraform run summary
Project: production-apps
Workspace: web-prod
Execution mode: Remote
Plan: Finished
Policy checks: 2 passed, 1 failed
Apply status: Blocked
Which interpretation is best?
Options:
A. The project grouping feature requires manual approval before any run can proceed.
B. Drift detection found unmanaged infrastructure changes and automatically canceled the plan.
C. A policy set evaluated the run and blocked apply until the violation is resolved.
D. A private registry module failed publishing, so the workspace cannot continue the run.
Best answer: C
Explanation: The exhibit explicitly shows failed policy checks and a blocked apply. In HCP Terraform, that means governance rules were evaluated during the run and at least one policy failed, so the run cannot continue to apply.
This exhibit points to policy enforcement. In HCP Terraform, policy checks evaluate a run against governance rules, and a failed check can block the apply step.
The other governance-oriented features in the objective do different jobs: projects group and organize workspaces, drift detection observes changes made outside Terraform, and the private registry shares modules. None of them blocks an apply the way a failed policy check does.
The deciding evidence is Policy checks: 2 passed, 1 failed together with Apply status: Blocked. That combination means the run completed planning, then a governance policy stopped it from moving forward. The key takeaway is that policy checks control whether a run is allowed to continue; the other features organize, share, or observe infrastructure.
Topic: Terraform Configuration
A team reviews this Terraform configuration and wants the dynamic logic to stay readable for new engineers.
variable "create_bucket" {
  type    = bool
  default = true
}

locals {
  bucket_count = length(compact([
    var.create_bucket ? "create" : ""
  ]))
}

resource "aws_s3_bucket" "logs" {
  count  = local.bucket_count
  bucket = "example-logs"
}
Which next step best improves readability without changing the intent?
Options:
A. Keep the local because more functions are preferred
B. Move the resource into a child module
C. Replace count with a dynamic block
D. Use count = var.create_bucket ? 1 : 0 directly
Best answer: D
Explanation: The shown local turns a boolean into a temporary list, removes empty values, and then counts the result just to produce 1 or 0. A direct conditional for count is simpler and keeps the resource intent clear.
Terraform supports dynamic expressions and functions, but they should clarify the configuration rather than obscure it. In this example, the real intent is simple: create one bucket when create_bucket is true, otherwise create none. The compact(...) and length(...) chain technically works, but it hides that basic decision behind extra steps.
A clearer version is:
count = var.create_bucket ? 1 : 0
That keeps the primary infrastructure intent visible at the resource itself. Dynamic blocks are for generating nested blocks, not deciding whether a top-level resource exists, and a child module would add indirection without solving the readability problem.
Moving the resource into a child module would relocate, not simplify, the count logic.

Topic: Terraform Configuration
A team runs this configuration and sees intermittent apply failures because module.app starts before firewall rules created by module.network are ready. Provider credentials are valid, and module.app does not need any values from module.network.
module "network" {
  source = "./modules/network"
}

module "app" {
  source       = "./modules/app"
  service_name = "billing"
}
What is the best configuration fix?
Options:
A. Use terraform apply -target=module.network before each full apply
B. Run terraform init again to rebuild dependencies
C. Add depends_on = [module.network] to module "app"
D. Add another provider configuration inside module "app"
Best answer: C
Explanation: Terraform determines apply order from references and explicit dependencies. Because module.app has no reference to module.network, Terraform has no relationship information to enforce order, so an explicit depends_on is the right fix.
Terraform builds a dependency graph from expressions that reference other objects, such as using one module’s output in another module’s input. In this example, the two modules are separate in configuration, so Terraform can treat them as independent and run them without a guaranteed order.
When a real dependency exists but no data value is exchanged, use depends_on to declare that hidden relationship explicitly. That matches the stem: the problem is missing dependency information, not a provider or authentication issue.
Using depends_on is the durable fix because it updates the configuration itself instead of relying on a one-off command.
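The fix from option C can be sketched directly on the module block from the question:

```hcl
module "app" {
  source       = "./modules/app"
  service_name = "billing"

  # Hidden dependency: wait for module.network (firewall rules) even though
  # no values flow between the two modules.
  depends_on = [module.network]
}
```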
Running terraform apply with -target can force a one-time sequence, but it is a workaround rather than the normal way to model dependencies.

Topic: Terraform Configuration
A team uses a VCS-driven HCP Terraform workspace. An engineer commits this Terraform excerpt to the repo:
variable "db_password" {
  type      = string
  sensitive = true
  default   = "P@ssw0rd123!"
}

resource "aws_db_instance" "app" {
  password = var.db_password
}
What is the best next step?
Options:
A. Remove the default and supply the password from HCP Terraform sensitive variables or Vault.
B. Keep the default because sensitive = true prevents the password from reaching state.
C. Keep the code and switch the workspace to a remote backend.
D. Add .tfvars files to .gitignore and keep the current variable default.
Best answer: A
Explanation: This configuration still hardcodes a secret, which is a Terraform anti-pattern. sensitive = true only redacts display output; it does not make a hardcoded default safe in source files, Git history, or state.
Best practice is to keep secrets out of Terraform configuration files entirely. In this excerpt, the password is embedded in the variable default, so anyone with access to the code repository or its history can retrieve it. Marking the variable as sensitive helps hide the value in some Terraform output, but it does not remove the secret from the .tf file and does not guarantee the value stays out of state when a resource uses it.
Safer patterns include removing the hardcoded default, keeping sensitive = true, and supplying the value at run time from an HCP Terraform sensitive workspace variable or a secrets manager such as Vault.
A remote backend improves state storage and collaboration, but it does not fix a secret that is already hardcoded in configuration. The key takeaway is that redaction is not the same as secret management.
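A sketch of the safer shape, using the variable from the question (the delivery mechanisms named in the comment are options, not requirements):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
  # No default: supply the value at run time, e.g. from an HCP Terraform
  # sensitive workspace variable, TF_VAR_db_password, or a Vault-backed source.
}
```

With no default in the file, the repository and its history never contain the secret; only the run environment does.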
Marking a variable sensitive hides some output, but it does not protect a hardcoded default in code or history. Adding .tfvars files to .gitignore does nothing when the secret is embedded directly in the variable block.

Topic: Terraform Modules
Review this configuration:
# root module
variable "project_name" {
  default = "demo"
}

module "app" {
  source = "./modules/app"
}
The child module ./modules/app declares variable "project_name" {} and uses var.project_name. Which statement is correct?
Options:
A. The root module must pass project_name = var.project_name in the module block.
B. terraform init copies root variables into child modules.
C. The child module automatically receives demo from the root variable.
D. Terraform state exposes root variables to every child module.
Best answer: A
Explanation: Terraform modules have separate variable scopes. A child module does not automatically inherit arbitrary root-module values, even if the variable names match, so the root must pass the value through the module block.
In Terraform, each module has its own input variable scope. A variable declared in the root module is available to the root module, but a child module can use that value only if the root passes it explicitly as a module input. Matching variable names alone do not share values across modules, and Terraform commands or state do not change that boundary.
module "app" {
  source       = "./modules/app"
  project_name = var.project_name
}
That explicit input assignment is what makes the root value available inside the child module.
The init misconception fails because terraform init installs providers and modules and prepares the working directory; it does not pass variable values.

Topic: HCP Terraform
A platform team manages production changes in HCP Terraform. They want runs executed centrally, a person to approve production applies, and shared guardrails across teams.
Exhibit:
Workspace: app-prod
Execution mode: Remote
Run trigger: VCS-driven
Apply method: Manual apply
Policy checks: Enabled
Which interpretation is most accurate?
Options:
A. Reusable modules alone enforce approval gates across team workspaces.
B. Local CLI workflows provide the same centralized approvals and governance controls.
C. HCP Terraform provides remote runs, policy checks, and manual approval gates.
D. A remote backend alone provides approvals and organization-wide policy checks.
Best answer: C
Explanation: The exhibit describes HCP Terraform collaboration and governance features, not just core local Terraform CLI behavior. Remote execution, VCS-driven runs, policy checks, and manual apply together support centralized control, shared guardrails, and human approval before infrastructure changes are applied.
HCP Terraform adds team-oriented workflow controls that the local CLI does not provide by itself. In the exhibit, Execution mode: Remote means runs occur in HCP Terraform, Run trigger: VCS-driven ties runs to version-controlled changes, Policy checks: Enabled indicates governance rules can evaluate the run, and Apply method: Manual apply requires a person to approve the apply step. Together, these features address collaboration, approvals, and shared standards in a centralized workflow.
Local commands such as terraform plan, terraform apply, terraform validate, and terraform fmt are still useful, but they do not by themselves create an HCP-managed approval gate or organization-wide policy enforcement. A remote backend or reusable module can help with state or reuse, but neither replaces HCP Terraform governance features.
Topic: Maintain Infrastructure with Terraform
A teammate enabled verbose logging to troubleshoot remote state initialization and wants to paste the full log into an incident ticket that many contractors can view. Based on this excerpt, what is the best next step?
$ TF_LOG=TRACE terraform init
2026-04-07T10:14:32Z [TRACE] Meta.Backend: built configuration for "http"
2026-04-07T10:14:32Z [DEBUG] GET https://state.example.internal/prod/terraform.tfstate
2026-04-07T10:14:32Z [DEBUG] Authorization: Bearer eyJhbGciOi...
2026-04-07T10:14:32Z [TRACE] state path: prod/networking/terraform.tfstate
Options:
A. Share only a redacted excerpt through a restricted channel.
B. Post the full trace because Terraform masks verbose logs.
C. Commit the trace with the configuration for team review.
D. Replace the token and share the remaining trace broadly.
Best answer: A
Explanation: The excerpt already shows a bearer token and a state path, so the log must be treated as sensitive. Verbose Terraform logs can reveal credentials, backend details, and other secret-bearing data, so the safest action is to redact and limit who can see it.
TF_LOG=TRACE is useful for troubleshooting, but detailed Terraform logs can contain backend URLs, authentication headers, state object locations, request details, and other sensitive operational data. In this excerpt, both a bearer token and a state path are visible, so pasting the raw log into a widely visible ticket creates unnecessary exposure.
A safer approach is to redact sensitive values such as the bearer token and state path, share only the excerpt needed for troubleshooting, and restrict access to the ticket or channel where it is posted.
Marking variables or outputs as sensitive helps reduce normal CLI display, but it does not guarantee every verbose log line is sanitized. The key takeaway is to handle detailed Terraform logs like any other secret-bearing artifact.
Topic: Infrastructure as Code (IaC) with Terraform
A team stores Terraform for production in Git, and the repository requires pull request approval before merging to main. Their HCP Terraform workspace is configured as shown:
Workspace: prod-network
VCS repository: acme/networking
Tracked branch: main
Pull requests: speculative plans before merge
Run history: retained in workspace
Which governance benefit is best supported by this setup?
Options:
A. Changes are reviewed before apply and remain traceable later.
B. State files are no longer needed for this workspace.
C. Drift is automatically prevented and corrected in all environments.
D. Failed infrastructure changes are automatically rolled back.
Best answer: A
Explanation: This setup improves governance by putting Terraform changes in version control, requiring peer review through pull requests, and preserving workspace run history. That creates approval checkpoints before production changes and a clear record of what changed, when, and from which code revision.
Terraform governance improves when infrastructure changes follow the same controls as application code. Here, the configuration is stored in Git, pull requests are reviewed before merge, HCP Terraform generates speculative plans for those pull requests, and the workspace keeps run history. Together, those features make changes both reviewable and traceable: reviewers can inspect the code diff and the proposed plan before merge, and teams can later audit which commit led to a given run or apply.
This strengthens governance and accountability, but it does not by itself guarantee zero drift or automatic rollback.
Topic: Core Terraform Workflow
Which sequence correctly describes the core Terraform workflow for a new working directory?
Options:
A. Write configuration, run terraform init, run terraform plan, review changes, then run terraform apply
B. Write configuration, run terraform validate, inspect state, then run terraform apply
C. Write configuration, run terraform init, run terraform apply, then review the resulting changes
D. Write configuration, run terraform plan, review changes, run terraform init, then run terraform apply
Best answer: A
Explanation: The core Terraform workflow is to write configuration, initialize the directory, generate a plan, review that plan, and then apply it. This order ensures Terraform is ready to work with providers and backends before proposing and making infrastructure changes.
Terraform’s core workflow is designed to make infrastructure changes predictable. After writing configuration, terraform init prepares the working directory by setting up required providers, modules, and backend settings. Then terraform plan compares the configuration with the current state and real infrastructure to show the proposed actions. You review that plan to confirm the intended creates, updates, or destroys before making any changes. Finally, terraform apply performs the approved actions.
Helpful commands like terraform fmt and terraform validate can improve configuration quality, but they do not replace initialization, planning, review, and apply in the main workflow.
- Reviewing only after terraform apply is too late, because changes have already been made.
- Running terraform plan before terraform init fails because the directory is not initialized yet.
- terraform validate and state inspection are useful checks, but they do not replace generating and reviewing a plan.

Topic: Core Terraform Workflow
An engineer generated the following plan and wants to skip review and run terraform apply -auto-approve during a short maintenance window.
# aws_instance.web must be replaced
-/+ resource "aws_instance" "web" {
      ~ ami = "ami-0abc" -> "ami-0def" # forces replacement
        id  = "i-0123456789"
    }

Plan: 1 to add, 0 to change, 1 to destroy.
Based on this output, what is the best next step?
Options:
A. Run terraform validate because syntax checks confirm a safe apply
B. Apply now because the plan shows zero net resource growth
C. Review why the instance will be replaced before applying
D. Re-run terraform init because replacement needs reinitialization
Best answer: C
Explanation: This plan shows a replacement, not a harmless update. In Terraform output, -/+ means destroy and create, so applying without reviewing the plan can cause unwanted changes even if the total resource count stays the same.
A successful plan does not mean the changes are safe or desired. In this plan, the ami change is marked # forces replacement, and the -/+ prefix confirms that Terraform will replace aws_instance.web rather than update it in place. If the engineer runs terraform apply -auto-approve without understanding that plan, Terraform will destroy the existing instance and create a new one.
Before applying, the right action is to review why replacement is planned and confirm that the disruption is intentional. This is exactly why reading the plan matters before approval or apply, especially when using -auto-approve. A net-zero resource count can still hide a destructive change.
- terraform init prepares the working directory, providers, modules, and backend; it is not required just because a resource is being replaced.
- terraform validate checks configuration correctness, not whether the planned infrastructure changes are acceptable.

Topic: Maintain Infrastructure with Terraform
A team refactored resources into modules and wants to confirm the exact resource addresses Terraform is currently tracking in state, without changing infrastructure. Which CLI command should they run?
Options:
A. terraform import
B. terraform state show
C. terraform state list
D. terraform validate
Best answer: C
Explanation: terraform state list is the inspection command for seeing which resource addresses are currently recorded in state. It helps confirm current state mappings after refactoring, without changing infrastructure.
Terraform’s state subcommands are used to inspect and work with the resources Terraform already tracks in state. When the goal is to see the current resource mappings by address, terraform state list is the appropriate command because it outputs the addresses stored in state, including module paths when applicable.
If you instead need details for one tracked object, use terraform state show with a specific address. Commands like import and validate serve different purposes: one brings existing infrastructure under state management, and the other checks configuration syntax and internal consistency. The key distinction is that state list answers “what addresses are in state right now?”
- terraform state show is close, but that command expects a single resource address and returns its stored attributes.
- terraform import changes state rather than inspecting current mappings.
- terraform validate evaluates configuration files, not stored state entries.

Topic: Terraform State Management
A user runs terraform apply and receives an error that Terraform cannot acquire the state lock because another operation is already holding it. Which Terraform concept best explains this situation?
Options:
A. The configuration failed Terraform validation.
B. Drift has been detected between state and infrastructure.
C. The dependency lock file has invalid provider selections.
D. State locking coordinates shared access to state.
Best answer: D
Explanation: A state lock error means Terraform is preventing concurrent operations from changing the same state at the same time. That is a shared-state coordination issue, not evidence that the HCL configuration itself is invalid.
Terraform state is a shared record of managed infrastructure, so concurrent writes can corrupt or desynchronize it. State locking exists to prevent two users or runs from updating the same state file at once. When Terraform waits for a lock or fails to acquire one, the problem is usually that another operation is already using that state, or a previous lock was not released cleanly.
This is different from configuration problems, which are identified by commands like terraform validate or by plan/apply errors about arguments, references, or provider behavior. It is also different from drift, which means the real infrastructure no longer matches the recorded state. The key takeaway is that a lock-related message points first to state coordination and backend locking behavior.
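As an illustrative sketch of where locking comes from (bucket and table names are hypothetical), a backend configured for locking might look like:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state"  # hypothetical state bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"  # enables state locking for this backend
  }
}
```

If a stale lock remains after a crashed run, terraform force-unlock &lt;LOCK_ID&gt; can release it, but only after confirming no other operation is actually in progress.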
Topic: Terraform Fundamentals
Which Terraform block is used to configure provider-specific settings such as region, API endpoint, or authentication-related inputs?
Options:
A. backend block
B. terraform block
C. resource block
D. provider block
Best answer: D
Explanation: Terraform uses the provider block to define how it should connect to and interact with a specific platform or service. Settings like region, custom endpoints, and authentication-related inputs belong there, not in blocks for state, version requirements, or managed objects.
Terraform separates provider selection from provider configuration. The terraform block can declare required providers and version constraints, but the provider block is where you set the provider-specific values Terraform needs in order to talk to that platform, such as region, endpoint, or authentication inputs. Resources then rely on that provider configuration when Terraform plans and applies changes.
provider "aws" {
  region = "us-east-1"
}
A useful memory aid is: the provider block tells Terraform how to connect, while a resource block tells Terraform what to manage.
- The terraform block is for settings such as required_providers and backends, not provider runtime configuration.
- The backend block controls where state is stored and locked, not how Terraform connects to a cloud or service API.
- The resource block defines an infrastructure object to manage, not shared provider-wide settings.

Topic: Terraform Configuration
Which statement correctly describes a standard way Terraform receives an input variable value?
Options:
A. Terraform stores input variable values in .terraform.lock.hcl.
B. An environment variable named TF_VAR_region can set variable "region" {}.
C. Values in outputs.tf are automatically reused as input variables.
D. Any environment variable starting with TF_ can set an input variable.
Best answer: B
Explanation: Terraform supports several common input variable sources, including defaults, variable definition files, command-line flags, and environment variables. For environment variables, the recognized format is TF_VAR_<variable_name>, so TF_VAR_region is valid for a variable named region.
Input variables in Terraform can be supplied in several standard ways: a default in the variable block, a variable definition file such as terraform.tfvars or .auto.tfvars, a command-line flag like -var, or an environment variable. When using environment variables, Terraform requires a strict naming pattern: TF_VAR_ followed by the exact input variable name.
That is why TF_VAR_region correctly sets variable "region" {}. Other TF_ names are not automatically treated as input variables, and files like .terraform.lock.hcl or outputs.tf serve different purposes.
The key distinction is between actual variable input mechanisms and other Terraform configuration or metadata files.
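A minimal sketch of the environment-variable mechanism, using the variable from the question (the value is illustrative):

```hcl
# variables.tf
variable "region" {
  type = string
}

# Before running Terraform, set the value in the shell:
#   export TF_VAR_region="us-east-1"
# Terraform maps TF_VAR_region to var.region by exact name match;
# other TF_-prefixed variables are not treated as input values.
```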
- A TF_ prefix alone is not enough; Terraform only maps TF_VAR_&lt;name&gt; to input variables.
- .terraform.lock.hcl tracks provider dependency selections, not variable values.
- outputs.tf defines output values from a configuration, not input values for future runs.

Topic: Terraform Configuration
A module variable must accept a value with fixed, named attributes: name as a string, instance_count as a number, and enable_monitoring as a bool. The attribute names are known in advance, and each attribute can have a different type. Which Terraform type constraint is most appropriate?
Options:
A. object({ name = string, instance_count = number, enable_monitoring = bool })
B. tuple([string, number, bool])
C. map(string)
D. set(any)
Best answer: A
Explanation: Use an object when a value has known attribute names and those attributes can have different types. This matches a structured input like name, instance_count, and enable_monitoring better than positional or same-type collections.
Terraform complex types are chosen by data shape. Use object when the input is a structured value with specific attribute names, and those attributes may have different types. Here, the value has three known fields: a string, a number, and a boolean.
- map(...) is for string keys whose values are all the same type.
- tuple([...]) allows mixed types, but elements are identified by position rather than by name.
- set(...) is an unordered collection of unique elements, not a record with fixed fields.

The deciding factor is named attributes with mixed value types, which is exactly what object is for.
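The winning constraint from option A, sketched as a full variable declaration using the attribute names from the question:

```hcl
variable "app" {
  type = object({
    name              = string
    instance_count    = number
    enable_monitoring = bool
  })
}
```

A caller would then pass app = { name = "web", instance_count = 2, enable_monitoring = true }, and Terraform checks each named attribute against its declared type.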
map(string) fails because all map values must be strings, but the input includes a number and a boolean.
tuple([string, number, bool]) supports mixed types, but it uses positional elements instead of named attributes.
set(any) represents an unordered collection of unique elements, not a structured value with predefined fields.

Topic: Terraform Fundamentals
A Terraform root module must manage AWS resources in two regions and Azure resources in the same configuration. Which approach is correct?
Options:
A. Use workspaces to combine multiple provider contexts in one configuration.
B. Use required_providers entries only; Terraform creates the needed instances automatically.
C. Use separate backends so each platform gets its own provider selection.
D. Use separate provider blocks, add alias for extra instances, and reference them explicitly.
Best answer: D
Explanation: Terraform uses provider blocks to configure actual provider instances. When the same provider is needed more than once, such as AWS in multiple regions, you add an alias and point resources or modules to the right instance. This is different from required_providers, backends, or workspaces.
Terraform separates provider requirements from provider configuration. The required_providers block tells Terraform which provider plugins and versions are needed, but provider blocks create the usable provider instances. If you need multiple platforms, declare each provider. If you need multiple instances of the same provider, declare another provider block with an alias, then select it from resources with the provider meta-argument or pass it into a module.
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

provider "azurerm" {
  features {}
}
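A resource (or module) then selects the non-default instance with the provider meta-argument; the bucket and resource names here are hypothetical:

```hcl
resource "aws_s3_bucket" "west_logs" {
  bucket   = "example-logs-west" # hypothetical name
  provider = aws.west            # selects the aliased instance
}
```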
A backend manages state storage, and workspaces separate state, but neither one chooses provider instances.
required_providers installs plugins and sets version constraints, but it does not configure usable provider instances.

Topic: HCP Terraform
A platform team is designing its HCP Terraform organization.
Project: retail-app
Workspaces: retail-dev, retail-stage, retail-prod
Project: shared-platform
Workspaces: network-prod, dns-prod
Goal: group by application/team ownership
Constraint: keep each workspace state separate
Based on this note, which interpretation is best?
Options:
A. Put every workspace in one project because projects should not reflect ownership boundaries.
B. Use variable sets instead of projects to separate applications and teams.
C. Use one workspace per project so all environments share one state file.
D. Group related workspaces into projects, but keep separate state per workspace.
Best answer: D
Explanation: In HCP Terraform, projects are an organizational layer above workspaces. They help group related workspaces by application, environment set, or ownership boundary without merging separate state into one workspace.
A project in HCP Terraform is used to organize related workspaces, not to replace them. In the exhibit, retail-dev, retail-stage, and retail-prod belong together because they support the same application, while network-prod and dns-prod belong to a different ownership boundary. Keeping them as separate workspaces preserves separate Terraform state for each independently managed environment or stack.
Typical use is to group related workspaces by application or ownership boundary and manage access and settings at the project level, while each workspace still runs and stores state independently.
The key takeaway is that projects provide structure across workspaces, while workspaces remain the unit that holds state and runs Terraform.
Topic: Terraform State Management
A team uses one shared Terraform state for a production environment. Multiple engineers or automated runs might start terraform apply against that same state around the same time. Why is state locking especially important in this workflow?
Options:
A. It stores provider plugins in the backend for consistent installs.
B. It guarantees the real infrastructure always matches the configuration.
C. It ensures only one operation can update the shared state at a time.
D. It automatically splits the state into separate files per resource.
Best answer: C
Explanation: State locking protects Terraform state when multiple users or runs target the same deployment. It prevents simultaneous state updates, which helps avoid conflicting writes, failed runs, and corrupted shared state.
State locking is a safeguard for shared Terraform state during operations that may change it. In a collaborative workflow, two people or an automated run and a human operator could both start work from the same current state and then try to write back changes. Without locking, those concurrent updates can conflict and leave the state inaccurate or inconsistent. A backend that supports locking lets Terraform allow one state-changing operation at a time, preserving the integrity of the shared source of truth. Locking does not manage plugins, reorganize state files, or guarantee that infrastructure never drifts; it specifically protects state during concurrent access.
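One common setup is the s3 backend with a DynamoDB lock table; a minimal sketch in which the bucket, key, and table names are hypothetical:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state" # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-state-lock" # enables state locking
  }
}
```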
Topic: Infrastructure as Code (IaC) with Terraform
A platform team supports dev, stage, and prod with nearly identical Terraform code copied into separate directories. They want faster delivery, consistent infrastructure across environments, and an auditable review step before changes are applied by multiple engineers.
What is the best IaC pattern to adopt?
Options:
A. Keep separate root configurations and rely on terraform fmt before each local apply
B. Share one local state file across all environments to keep resources synchronized
C. Replace repeated resources with data sources so each environment reads existing infrastructure
D. Create a reusable module, pin its version, and use VCS-driven HCP Terraform workspaces per environment
Best answer: D
Explanation: The best pattern is to move the repeated configuration into a reusable module and consume that module from separate environment workspaces. Pairing that with VCS-driven HCP Terraform runs adds an auditable review process, shared remote state handling, and more predictable changes across teams.
This scenario is asking for an IaC pattern that improves reuse, consistency, collaboration, and auditability at the same time. A reusable Terraform module lets the team define the common infrastructure once and use it across dev, stage, and prod, which reduces copy-paste drift. Pinning the module version makes rollouts more predictable because each environment can intentionally adopt a known version.
Using VCS-driven HCP Terraform workspaces adds the review and collaboration workflow: changes are proposed and reviewed in pull requests, plans run automatically against proposed changes, and applies go through a shared, auditable pipeline instead of individual local runs.
The closest distractors either improve formatting only, misuse data sources, or reduce state safety instead of improving it.
terraform fmt improves style, but it does not create reusable configuration, shared review, or safer collaboration.

Topic: Terraform Configuration
A team uses the same Terraform module in dev and prod HCP Terraform workspaces. They need each resource name to be <app>-<workspace> in lowercase, tags to combine shared and environment-specific values, and all changes to go through the normal HCP Terraform plan review before apply. A developer says this requires new provider behavior because the cloud API must build the final values. What is the best next action?
Options:
A. Enter the final names and full tag maps separately in each workspace as manual variables.
B. Open a provider feature request so the API can compose names and tags during create and update calls.
C. Create a Sentinel policy set that generates the names and tags during each run.
D. Add locals that use format(), lower(), and merge(), reference them in resource arguments, and review the HCP Terraform plan.
Best answer: D
Explanation: This is a Terraform language problem, not a provider capability problem. Use HCL expressions and functions in reusable configuration to derive the final values, then let HCP Terraform show the exact result in the plan before apply.
Terraform language features such as expressions, locals, and functions are used to construct values inside configuration. Terraform evaluates those values during planning and then passes the final strings, lists, or maps to the provider. In this scenario, lowercase names and merged tags are value-construction requirements, so the right solution is to define derived values in HCL and reuse them across environments.
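A minimal sketch of that pattern; the app name, tag values, and variable names are hypothetical:

```hcl
variable "app" {
  type    = string
  default = "Retail"
}

variable "env_tags" {
  type    = map(string)
  default = { env = "dev" }
}

locals {
  shared_tags = { team = "platform" }

  # <app>-<workspace> in lowercase, e.g. "retail-dev"
  name = lower(format("%s-%s", var.app, terraform.workspace))

  # Environment-specific values override shared ones on key collisions.
  tags = merge(local.shared_tags, var.env_tags)
}
```

Because locals are evaluated at plan time, the exact final names and tags appear in the HCP Terraform plan output for review before apply.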
Build the lowercase <app>-<workspace> name with format() and lower(). Combine the shared and environment-specific tag maps with merge(). The key takeaway is that providers handle API interactions, while HCL expressions and functions handle value construction.
Topic: Core Terraform Workflow
Which situation is the most appropriate use of terraform destroy?
Options:
A. Removing a temporary development environment that is no longer needed
B. Repairing state after a resource was removed from state by mistake
C. Reconciling drift after a manual change to a managed resource
D. Re-downloading providers after changing a provider version
Best answer: A
Explanation: Use terraform destroy when the goal is to intentionally remove Terraform-managed infrastructure, such as a disposable sandbox or test environment. It is a teardown command, not a tool for drift correction, state repair, or provider installation.
The key concept is intent. terraform destroy creates and applies a plan to delete the Terraform-managed resources in the current configuration scope, so it fits environments that are meant to be temporary. Common examples include short-lived development, test, or demo stacks that should be torn down when work is finished.
If resources have drifted because someone changed them manually, the usual workflow is to run terraform plan and then terraform apply to bring real infrastructure back in line with configuration. If the state is wrong or incomplete, use safer state workflows such as import, moved blocks, or careful terraform state commands. Provider installation or upgrades are handled by terraform init.
A good rule is: destroy is for intentional deletion, not correction or repair.
Re-downloading providers after a version change is handled by terraform init, not by removing resources.

Topic: Core Terraform Workflow
A team stores Terraform code in Git and uses pull requests for review. An engineer updated a root module and several local child modules, and the code should be normalized to a consistent style across all directories before review. The team will check configuration correctness separately. What is the best next action?
Options:
A. Enable HCP Terraform speculative plans
B. Run terraform plan
C. Run terraform validate
D. Run terraform fmt -recursive
Best answer: D
Explanation: The goal is code style normalization, not correctness checking. terraform fmt -recursive standardizes Terraform file formatting across the root module and child module directories, which supports consistent pull request review.
Terraform separates formatting from validation. terraform fmt is used to rewrite configuration files into Terraform’s canonical style, such as spacing and indentation. Adding -recursive applies that formatting through the directory tree, which is useful when a change touched multiple local modules.
terraform validate checks whether a configuration is syntactically valid and internally consistent, but it does not normalize style. terraform plan compares configuration with state and proposed infrastructure changes, which is a later workflow step. HCP Terraform speculative plans help with collaborative review of proposed changes, but they do not reformat source files. The key distinction is that formatting makes code consistent, while validation checks correctness.
terraform validate checks correctness, not code formatting.
terraform plan is for reviewing proposed infrastructure changes, not normalizing HCL style.

Topic: Core Terraform Workflow
After the working directory has already been initialized, which Terraform command is intended to catch configuration errors early in local development or CI before you create an execution plan?
Options:
A. terraform init
B. terraform validate
C. terraform fmt
D. terraform plan
Best answer: B
Explanation: terraform validate is the dedicated early check for Terraform configuration correctness. It verifies syntax and internal consistency in an initialized working directory, making it a common step in local development and CI before later workflow commands.
The key concept is command purpose. terraform validate is used to verify that Terraform configuration files are syntactically valid and internally consistent, such as correct block structure, valid references, and acceptable argument placement. It is designed as a fast quality check before planning or applying changes, so it fits naturally into local development and CI pipelines.
In a typical workflow:
terraform init prepares the directory
terraform validate checks configuration correctness
terraform plan previews proposed changes
The closest distractor is terraform plan, because it can reveal problems, but its main job is to generate an execution plan rather than serve as the primary early validation step.
terraform fmt is for code formatting, not for checking whether the configuration is logically valid.
terraform plan can surface issues, but it is primarily for previewing infrastructure changes.
terraform init prepares the working directory by installing providers and modules; it does not validate configuration correctness.

Topic: Infrastructure as Code (IaC) with Terraform
A platform team wants to hide provider-specific resource details behind a reusable building block so the same Terraform workflow can be used for AWS, Azure, or on-premises deployments. Which Terraform concept is designed for that purpose?
Options:
A. Backend
B. Provider
C. Workspace
D. Module
Best answer: D
Explanation: A module is Terraform’s reuse mechanism for infrastructure code. It lets teams package a pattern once and call it repeatedly with different inputs, which supports service-agnostic automation across clouds and hybrid environments.
Modules are Terraform’s primary way to create reusable infrastructure automation patterns. A module groups resources, variables, and outputs into a single building block that can be called multiple times. In a multi-cloud or hybrid scenario, that matters because the goal is to reuse the Terraform pattern even when provider-specific resources differ underneath. A root module can call child modules and pass different values or provider settings for different targets.
Providers define how Terraform interacts with a platform, backends determine where state is stored, and workspaces separate state instances. Those are important workflow concepts, but they do not package reusable infrastructure logic. The key takeaway is that reusable automation patterns belong in modules.
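Calling such a building block might look like this; the module source path and input names are hypothetical:

```hcl
module "app_network" {
  source = "./modules/network" # hypothetical reusable module

  cidr_block  = "10.10.0.0/16"
  environment = "dev"
}
```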
Topic: Terraform Configuration
Terraform usually infers resource order from expression references. If one resource must be created only after another, but there is no attribute reference between them, which Terraform mechanism should you add to the configuration?
Options:
A. Run terraform apply -target for the first resource.
B. Add a depends_on meta-argument to the dependent resource.
C. Add a lifecycle block to the dependent resource.
D. Add an output block for the first resource.
Best answer: B
Explanation: Terraform automatically builds most dependencies from references in configuration. When no reference exists but ordering still matters, depends_on is the explicit way to tell Terraform about that relationship.
Terraform normally creates an execution graph by following references such as one resource using another resource’s attribute. That is called an implicit dependency. If no such reference exists, Terraform cannot safely assume an ordering relationship on its own.
In that case, use the depends_on meta-argument on the resource or module that must wait. This adds an explicit dependency to the graph so Terraform plans and applies in the required order.
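A minimal sketch, with hypothetical resource names and arguments elided:

```hcl
resource "aws_iam_role_policy" "app" {
  # ... hypothetical policy arguments ...
}

resource "aws_instance" "app" {
  # No attribute reference ties this instance to the policy,
  # so the ordering must be declared explicitly.
  depends_on = [aws_iam_role_policy.app]
}
```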
A lifecycle block changes behavior such as replacement handling, not dependency inference. An output block exposes values after apply, and -target is a special-purpose CLI option, not the normal way to model dependencies in configuration.
lifecycle confusion fails because lifecycle settings control resource behavior, not explicit ordering between unrelated blocks.
output confusion fails because outputs publish values; they do not create ordering unless another block actually references them.
-target misuse fails because targeted apply is a manual exception and not the correct way to define ongoing dependencies.

Topic: Terraform State Management
A team updates its Terraform configuration to replace a local backend with a remote backend in the backend block. Before planning any infrastructure changes, which command should they run so Terraform can initialize the new backend and migrate or reconnect state safely?
Options:
A. terraform apply
B. terraform validate
C. terraform init
D. terraform plan
Best answer: C
Explanation: When backend configuration changes, Terraform must reinitialize the working directory before other workflow steps. The initialization step prepares the new backend and can prompt to migrate existing state or reconnect to the correct state location safely.
The core concept is backend reinitialization. A Terraform backend determines where state is stored, so changing the backend block is not just a syntax change; Terraform must set up the new backend and confirm how state should be handled.
terraform init is the command that initializes or reinitializes the working directory. When the backend configuration has changed, init detects that change and prompts for the appropriate safe action, such as migrating existing state to the new backend or reconfiguring the workspace to use the new state location.
By contrast, planning and applying are for infrastructure changes after initialization is complete, and validation only checks configuration correctness. The key takeaway is that backend changes require terraform init first.
terraform plan depends on an initialized backend and does not perform backend migration.
terraform apply makes infrastructure changes, not backend setup or state relocation.
terraform validate checks configuration structure, not backend initialization or state handling.

Use the Terraform Associate (004) Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try Terraform Associate (004) on Web View Terraform Associate (004) Practice Test
Read the Terraform Associate (004) Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.