Terraform Associate (004): Terraform Configuration

Try 10 focused Terraform Associate (004) questions on Terraform Configuration, with explanations, then continue with IT Mastery.


Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try Terraform Associate (004) on Web

View full Terraform Associate (004) practice page

Topic snapshot

  • Exam route: Terraform Associate (004)
  • Topic area: Terraform Configuration
  • Blueprint weight: 22%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Terraform Configuration for Terraform Associate (004). Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 22% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Terraform Configuration

Which Terraform lifecycle setting tells Terraform to attempt creating a replacement resource before destroying the existing one, helping reduce replacement risk or downtime during a change?

Options:

  • A. create_before_destroy

  • B. depends_on

  • C. ignore_changes

  • D. prevent_destroy

Best answer: A

Explanation: create_before_destroy is a Terraform lifecycle rule used when a resource must be replaced but you want to reduce disruption. It tells Terraform to prefer creating the new instance first, then destroying the old one if the platform allows that order.

Terraform lifecycle settings adjust how Terraform handles resource operations. The specific rule for lowering downtime during a required replacement is create_before_destroy, which reverses the usual destroy-then-create order and asks Terraform to create the new resource first. This is useful when a configuration change forces replacement and you want to reduce service interruption or replacement risk.

It does not guarantee zero downtime in every case, because provider or platform constraints can still prevent both old and new resources from existing at the same time. The key idea is that this rule changes replacement behavior, not dependency wiring or change detection.

  • prevent_destroy blocks Terraform from destroying a resource, which protects it from deletion but does not help with safe replacement ordering.
  • ignore_changes tells Terraform to disregard selected attribute differences, which can reduce updates but does not control replacement sequencing.
  • depends_on creates an explicit dependency between resources, affecting order across resources rather than create-versus-destroy behavior for one resource.
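As a minimal sketch of where this rule lives (the resource type and argument values are illustrative, not from the question), the lifecycle block attaches inside the resource it controls:

```hcl
# Illustrative resource; any resource type that supports replacement works.
resource "aws_instance" "web" {
  ami           = "ami-0abc1234" # changing an argument that forces replacement triggers the rule
  instance_type = "t3.micro"

  lifecycle {
    # Ask Terraform to create the replacement first, then destroy the old
    # instance, instead of the default destroy-then-create order.
    create_before_destroy = true
  }
}
```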

Question 2

Topic: Terraform Configuration

A Terraform variable must accept a structured value with named attributes and predefined attribute types, such as:

{
  name  = "web"
  size  = "small"
  count = 2
}

Which Terraform type constraint is the best match?

Options:

  • A. object({ name = string, size = string, count = number })

  • B. list(string)

  • C. tuple([string, string, number])

  • D. set(string)

Best answer: A

Explanation: Use object when a value has named fields and each field has a defined type. The example uses attribute names like name, size, and count, so an object type constraint matches the structure directly.

In Terraform, object is the complex type used for structured data with named attributes. Each attribute can have its own declared type, which makes object({ name = string, size = string, count = number }) a direct match for the example value.

A tuple can also mix types, but it is positional, not named. list and set are collections of similar element types, so they do not model a record with specific attribute names. When a configuration needs a single structured value with predictable fields, object is the most precise choice.

The key distinction is named attributes versus positional or repeated elements.

  • Tuple confusion fails because tuple defines values by position, not by attribute names.
  • List confusion fails because list(string) requires all elements to be strings in an ordered sequence.
  • Set confusion fails because set(string) is an unordered collection of unique strings, not a named structure.
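A sketch of the matching variable declaration (the variable name `service` is illustrative) shows how the type constraint mirrors the example value:

```hcl
# Illustrative declaration: each named attribute gets its own type.
variable "service" {
  type = object({
    name  = string
    size  = string
    count = number
  })

  default = {
    name  = "web"
    size  = "small"
    count = 2
  }
}
```

Attributes are then referenced by name, for example `var.service.name`, rather than by position as with a tuple.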

Question 3

Topic: Terraform Configuration

A team adds custom conditions to a resource:

data "aws_ami" "selected" {
  owners      = ["self"]
  most_recent = true
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.selected.id
  instance_type = "t3.micro"

  lifecycle {
    precondition {
      condition     = data.aws_ami.selected.architecture == "x86_64"
      error_message = "AMI must be x86_64."
    }

    postcondition {
      condition     = self.public_ip != ""
      error_message = "Instance must have a public IP."
    }
  }
}

What is the best interpretation of this configuration?

Options:

  • A. The precondition verifies an assumption before the resource is created, and the postcondition verifies an expected result after creation.

  • B. These are check assertions, so a failure would only produce a warning during plan or apply.

  • C. Both blocks are evaluated only after apply because Terraform must create the instance before either test can run.

  • D. Both blocks act like variable validation and only test whether user input has the right format.

Best answer: A

Explanation: This snippet uses two different custom conditions for a resource. The precondition checks an assumption Terraform must trust before creating the instance, and the postcondition checks that the finished resource has the expected outcome.

Terraform supports several custom condition types, and they serve different purposes. A validation block checks input values in a variable block. A precondition checks an assumption before Terraform completes work on a resource, data source, or output. A postcondition checks an expected result after that object has been evaluated or created.

In this example, Terraform first verifies that the selected AMI uses the x86_64 architecture. After the instance is created, Terraform verifies that self.public_ip is not empty. If either condition fails, Terraform raises an error instead of just reporting a warning. By contrast, a top-level check block is used for broader assertions and does not behave like a resource precondition or postcondition.

The key distinction is assumption before action versus expected outcome after action.

  • Not input validation: variable validation belongs in variable blocks and checks provided input values, not resource behavior.
  • Not a check block: check uses top-level check and assert blocks, and it is a different feature from resource preconditions and postconditions.
  • Not both after apply: the architecture test is a precondition, so Terraform evaluates it before proceeding with the resource.
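For contrast with the resource conditions above, a variable validation block (sketched here with an illustrative variable) checks supplied input values, not resource behavior:

```hcl
# Illustrative validation: runs against the input value, before any resource work.
variable "instance_type" {
  type = string

  validation {
    condition     = can(regex("^t3\\.", var.instance_type))
    error_message = "instance_type must be in the t3 family."
  }
}
```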

Question 4

Topic: Terraform Configuration

A team keeps one Terraform configuration in Git and uses separate HCP Terraform workspaces for dev and prod with VCS-driven runs. Only the instance size and instance count differ by environment. They want reusable code, consistent reviewable plans, and no hardcoded environment values. What is the best next step?

Options:

  • A. Define input variables and set workspace values in HCP Terraform.

  • B. Create local values for each environment inside the configuration.

  • C. Expose the environment settings as output values.

  • D. Pass -var flags from each engineer’s local machine.

Best answer: A

Explanation: Input variables are Terraform’s standard way to accept environment-specific or user-supplied values. Here, the team can keep one reusable configuration and let each HCP Terraform workspace provide its own values while preserving shared, reviewable runs.

Input variables make Terraform configurations reusable by separating the code from the values that change between environments. You define the variable once, reference it in resource arguments, and then supply different values for different runs. In this scenario, that means dev and prod can use the same VCS-connected configuration while each HCP Terraform workspace provides its own instance size and instance count.

  • Define variable blocks for the changing settings.
  • Reference those variables in the configuration.
  • Set different values in each workspace.
  • Let HCP Terraform generate and review the plan normally.

The key idea is that variables accept input; they are the mechanism for avoiding hardcoded environment settings while keeping the configuration consistent.

  • Local values still keep the environment-specific settings inside the configuration instead of accepting them from outside.
  • Outputs are for exposing values after evaluation, not for supplying values into the configuration.
  • Local -var flags are a poor fit for shared VCS-driven HCP Terraform runs because they rely on individual operators instead of workspace-managed values.
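The steps above can be sketched as follows; the variable names and the `ami_id` variable are hypothetical, and the per-environment values would be set in each HCP Terraform workspace rather than in code:

```hcl
# Declared once; values are supplied per workspace in HCP Terraform.
variable "instance_size" {
  type        = string
  description = "Instance type, set per workspace (e.g. dev vs prod)."
}

variable "instance_count" {
  type        = number
  description = "Number of instances, set per workspace."
}

resource "aws_instance" "app" {
  count         = var.instance_count
  ami           = var.ami_id # assumes an ami_id variable is declared elsewhere
  instance_type = var.instance_size
}
```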

Question 5

Topic: Terraform Configuration

A team runs the same Terraform configuration in separate HCP Terraform workspaces for dev and prod. The configuration creates the network resources in each workspace, and reviewers want plans to show the app server using the correct subnet automatically if that subnet is recreated or gets a different ID in another environment.

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.app_subnet_cidr
}

resource "aws_instance" "web" {
  ami       = var.ami_id
  subnet_id = "subnet-0ab12345"
}

What is the best next change?

Options:

  • A. Set subnet_id = var.app_subnet_id

  • B. Keep the literal and add depends_on

  • C. Use an HCP Terraform variable set for the subnet ID

  • D. Set subnet_id = aws_subnet.app.id

Best answer: D

Explanation: Use a resource attribute reference when one Terraform-managed resource needs a value from another Terraform-managed resource. Referencing aws_subnet.app.id keeps the configuration reusable across environments and lets Terraform infer the dependency for accurate plans and applies.

When a resource in Terraform needs an ID or other value produced by another resource in the same configuration, the safest choice is an attribute reference such as aws_subnet.app.id. That makes the configuration reusable because each workspace or environment uses the subnet actually created there, not a copied literal from somewhere else. It also creates an implicit dependency, so Terraform can build the graph correctly and show meaningful plan output before apply.

A hardcoded subnet ID is brittle across environments and after resource recreation. An input variable or HCP Terraform variable set is better for values that come from outside the configuration, not for values Terraform already knows because it created the resource itself. Adding depends_on only affects ordering; it does not replace the bad literal or make the value environment-safe.

  • Input variable treats the subnet ID as external input, even though the subnet is already created in the same configuration.
  • Only depends_on may affect ordering, but it still leaves the hardcoded subnet ID in place.
  • Variable set is useful for shared external inputs, not for per-workspace resource IDs Terraform can reference directly.
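Applied to the snippet in the question, the corrected resource would look like this; the attribute reference both removes the literal and gives Terraform the implicit dependency:

```hcl
resource "aws_instance" "web" {
  ami       = var.ami_id
  subnet_id = aws_subnet.app.id # attribute reference replaces the hardcoded ID
}
```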

Question 6

Topic: Terraform Configuration

Review this Terraform configuration:

module "network" {
  source = "./modules/network"
}

module "app" {
  source    = "./modules/app"
  subnet_id = module.network.subnet_id
}

module "checks" {
  source     = "./modules/checks"
  depends_on = [module.app]
}

Which statement best describes the dependencies Terraform will use?

Options:

  • A. module.checks uses depends_on to read outputs from module.app.

  • B. module.app and module.checks both depend implicitly on earlier blocks by file order.

  • C. module.app depends implicitly on module.network; module.checks depends explicitly on module.app.

  • D. module.app needs depends_on to depend on module.network.

Best answer: C

Explanation: Terraform builds a dependency graph from references and from explicit depends_on settings. Here, the reference to module.network.subnet_id creates an implicit dependency for module.app, while module.checks uses an explicit dependency on module.app.

Terraform determines operation order from its dependency graph, not from the order blocks appear in a file. A reference like module.network.subnet_id inside module.app automatically creates an implicit dependency, so Terraform knows module.app must wait for module.network. By contrast, depends_on = [module.app] in module.checks is an explicit dependency: it tells Terraform to wait for module.app even though module.checks does not consume any value from it.

Use references when one object needs another object’s data. Use depends_on when the relationship is about ordering only and there is no direct reference to create that dependency automatically. The key takeaway is that depends_on affects execution order, not value passing.

  • File order misconception fails because Terraform does not use top-to-bottom block order to decide dependencies.
  • Extra depends_on on module.app fails because the subnet_id reference already creates the dependency on module.network.
  • Data flow confusion fails because depends_on controls ordering only; inputs and outputs handle value passing.
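The implicit dependency in the example works because the child module exposes the value as an output. A sketch of what the network module would need to declare (file path and resource name assumed, not shown in the question):

```hcl
# ./modules/network/outputs.tf (illustrative)
output "subnet_id" {
  value = aws_subnet.app.id # assumes the module defines a subnet named "app"
}
```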

Question 7

Topic: Terraform Configuration

A team manages an app server with Terraform in a VCS-driven HCP Terraform workspace. A configuration change will force the server to be replaced on the next run. They must keep the change in code, allow normal plan review, and reduce downtime; the provider allows the old and new servers to exist at the same time. What is the best next action?

Options:

  • A. Run a local terraform apply -replace=....

  • B. Add lifecycle { create_before_destroy = true } to the resource.

  • C. Remove the resource from state before the next run.

  • D. Add depends_on so Terraform orders the replacement safely.

Best answer: B

Explanation: Use the lifecycle meta-argument create_before_destroy when a resource must be replaced but you want to lower downtime risk. It keeps the behavior in version-controlled configuration and still fits the normal HCP Terraform plan-and-apply review process.

create_before_destroy is the Terraform lifecycle setting used when a change forces resource replacement and you want the new object created before the old one is removed. In this scenario, the team already has plan review through its VCS-driven HCP Terraform workflow, so the missing requirement is safer replacement behavior in code. Because the provider allows both old and new objects to exist at the same time, Terraform can usually create the replacement first and then destroy the original, which reduces outage risk.

  • Add the lifecycle rule on the resource that is being replaced.
  • Keep the change in HCL so it can be reviewed and reused across environments.
  • Then run the normal reviewed plan and apply flow.

By contrast, dependency settings or manual replacement commands do not change replacement order in the same safe, reusable way.

  • The depends_on idea is tempting, but it manages dependencies between objects rather than the replacement order of one resource.
  • A local -replace apply can bypass the existing HCP Terraform review workflow and still does not ensure create-before-destroy behavior.
  • Removing a resource from state is unsafe here because it stops Terraform from tracking the object instead of reducing replacement downtime.

Question 8

Topic: Terraform Configuration

A Terraform configuration needs to look up an existing network created outside the current configuration and use its ID in new resources, but it must not create or manage that network. Which block type should be used?

Options:

  • A. output block

  • B. resource block

  • C. data block

  • D. module block

Best answer: C

Explanation: Use a data block when Terraform needs to read information about something that already exists. It lets the configuration query provider-managed attributes, such as an ID, without creating or taking ownership of that object.

In Terraform, a data block represents a data source: a read-only lookup of existing information from a provider. This is the correct choice when your configuration needs values such as an existing network ID, image ID, or DNS zone, but should not create or manage that object. A resource block is different because it tells Terraform to manage the lifecycle of infrastructure. A module groups reusable configuration, and an output exposes values after Terraform evaluates the configuration.

If the object already exists and Terraform only needs to reference it, use a data block.

  • resource confusion fails because resource blocks manage lifecycle, not read-only lookups.
  • module confusion fails because a module organizes reusable configuration rather than querying an existing object.
  • output confusion fails because outputs expose values from a configuration; they do not fetch external infrastructure data.
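A minimal sketch of the pattern (the data source type, tag value, and CIDR are illustrative): Terraform reads the existing network and references its ID, but never manages it.

```hcl
# Read-only lookup of an existing VPC; Terraform will not create or destroy it.
data "aws_vpc" "existing" {
  tags = {
    Name = "shared-network"
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.existing.id # reference the looked-up ID
  cidr_block = "10.0.1.0/24"
}
```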

Question 9

Topic: Terraform Configuration

An engineer sets a resource argument like this:

name = upper(format("%s-%s", var.env, var.app))

The provider accepts any string for name, but the final value is not what the engineer expected. What should they review first?

Options:

  • A. The backend configuration rewriting name in state

  • B. The HCL expression and functions constructing name

  • C. terraform init evaluating and caching name

  • D. The provider behavior constructing name for the API

Best answer: B

Explanation: This is a value-construction issue, so the first place to look is the HCL expression itself. Terraform evaluates built-in functions and expressions to produce the argument value before sending it to the provider.

In Terraform, expressions and built-in functions are responsible for constructing values such as strings, lists, and maps. In this example, format() combines var.env and var.app, and upper() transforms the result. Terraform computes that final string during evaluation, and the provider then receives that already-built value.

A backend only controls where state is stored and how it is managed; it does not modify resource argument values. Likewise, terraform init prepares the working directory by installing providers, configuring the backend, and downloading modules, but it does not decide how name is assembled. The key takeaway is that when the problem is how a value is built, look at the HCL expression and functions first, not provider behavior.

  • Provider confusion fails because the stem says the provider accepts any string, so the bad result points to Terraform’s value construction.
  • Backend confusion fails because a backend manages state storage and locking, not argument transformation.
  • Init confusion fails because terraform init prepares the workspace; it does not build resource argument values.
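The evaluation can be traced step by step with illustrative input values; terraform console is a convenient place to test such expressions interactively:

```hcl
# With var.env = "prod" and var.app = "web" (illustrative values):
#   format("%s-%s", var.env, var.app)  evaluates to "prod-web"
#   upper("prod-web")                  evaluates to "PROD-WEB"
name = upper(format("%s-%s", var.env, var.app))
```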

Question 10

Topic: Terraform Configuration

A teammate runs terraform plan from the same directory shown below. Assume no other variable files or workspace variables exist. What value will Terraform assign to var.region?

# variables.tf
variable "region" {
  type    = string
  default = "us-west-1"
}

# terraform.tfvars
region = "us-east-1"

# shell
export TF_VAR_region=eu-central-1
terraform plan -var="region=ap-south-1"

Options:

  • A. ap-south-1

  • B. eu-central-1

  • C. us-east-1

  • D. us-west-1

Best answer: A

Explanation: Terraform uses the highest-precedence value when the same input variable is set in multiple places. Here, the explicit -var setting overrides the value in terraform.tfvars, the TF_VAR_region environment variable, and the default in the variable block.

Terraform input variables can be supplied from several common sources, and conflicts are resolved by precedence. In this scenario, region is defined four ways: a default in the variable block, an environment variable, a terraform.tfvars entry, and a command-line -var argument. The default is only a fallback. TF_VAR_region overrides the default, terraform.tfvars overrides the environment variable, and the explicit -var value overrides them all.

  • Lowest shown precedence: default value
  • Then: TF_VAR_region
  • Then: terraform.tfvars
  • Highest shown precedence: -var

A common mistake is assuming the automatically loaded terraform.tfvars value wins, but explicit CLI input has higher priority.

  • Default fallback is tempting, but defaults apply only when no higher-precedence source provides a value.
  • terraform.tfvars is loaded automatically, but an explicit -var argument overrides it.
  • TF_VAR_ environment input can set variables non-interactively, but it does not beat terraform.tfvars or -var here.

Continue with full practice

Use the Terraform Associate (004) Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try Terraform Associate (004) on Web

View Terraform Associate (004) Practice Test

Free review resource

Read the Terraform Associate (004) Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026