Beginner · TA-003 · 3–4 weeks prep · 8 min read

Terraform Associate (003) — Study Guide

Study guide for the HashiCorp Certified: Terraform Associate (003) exam. Covers HCL syntax, state management, modules, workspaces, providers, and Terraform Cloud — for anyone working with IaC.

Tags: terraform, ta-003, hashicorp, iac, beginner, infrastructure-as-code

Domains: 10
Key concepts: 12
Study time: 3–4 weeks

Exam Overview

| Detail | Info |
|---|---|
| Exam code | TA-003 |
| Duration | 60 minutes |
| Questions | 57 (multiple choice, true/false, fill-in) |
| Passing score | 70% |
| Cost | $70.50 USD |
| Validity | 2 years |

Domain Weightings

| Domain | Weight |
|---|---|
| Understand Infrastructure as Code | 7% |
| Understand Terraform's purpose vs other IaC | 7% |
| Understand Terraform basics | 37% |
| Use Terraform outside of core workflow | 18% |
| Interact with Terraform modules | 12% |
| Navigate Terraform workflow | 13% |
| Implement and maintain state | 6% |

Domain 1–2: IaC and Terraform Basics

Why Terraform over other IaC tools?

| Tool | Approach | Cloud |
|---|---|---|
| Terraform | Declarative, multi-cloud, HCL | Any (AWS, Azure, GCP, K8s) |
| CloudFormation | Declarative, JSON/YAML | AWS only |
| ARM/Bicep | Declarative, JSON/Bicep | Azure only |
| Ansible | Procedural (also config mgmt) | Any, but imperative |
| Pulumi | Declarative, real languages (Python/TS) | Any |

Terraform advantages: provider ecosystem (3,000+ providers), multi-cloud from one tool, state management, plan preview.


Domain 3: Terraform Basics (37%)

HCL syntax

# Providers
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  required_version = ">= 1.6"
}

provider "aws" {
  region = var.aws_region
}

# Resources
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.instance_type

  tags = {
    Name        = "web-server"
    Environment = var.environment
  }
}

# Data sources — read existing resources
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Variables
variable "instance_type" {
  type        = string
  default     = "t3.micro"
  description = "EC2 instance type"
}

variable "environment" {
  type    = string
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

# Outputs
output "instance_public_ip" {
  value       = aws_instance.web.public_ip
  description = "Public IP of the web server"
}

# Local values
locals {
  common_tags = {
    Project     = "MyApp"
    ManagedBy   = "Terraform"
    Environment = var.environment
  }
}

Variable types

| Type | Example |
|---|---|
| string | "t3.micro" |
| number | 3 |
| bool | true |
| list(string) | ["us-east-1a", "us-east-1b"] |
| map(string) | {Name = "web", Env = "prod"} |
| object({...}) | Complex structured type |
| set(string) | Unordered unique strings |
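Compound types can be nested. A hedged sketch of a structured object variable (the field names are illustrative, not from any particular provider):

```hcl
# Illustrative object variable — field names are examples only
variable "node_group" {
  type = object({
    name          = string
    instance_type = string
    desired_size  = number
    labels        = map(string)
  })
  default = {
    name          = "default"
    instance_type = "t3.micro"
    desired_size  = 2
    labels        = { team = "platform" }
  }
}
```

Structured types let Terraform validate the shape of the input at plan time rather than failing mid-apply.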

Variable precedence (highest to lowest)

  1. -var and -var-file command line flags
  2. *.auto.tfvars files (processed in alphabetical order)
  3. terraform.tfvars.json
  4. terraform.tfvars
  5. Environment variables (TF_VAR_name)
  6. Default values in variable blocks
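A concrete illustration of the precedence order (values are made up):

```hcl
# variables.tf — the default is the lowest precedence
variable "instance_type" {
  type    = string
  default = "t3.micro"   # used only if no other source sets it
}

# An environment variable overrides the default:
#   export TF_VAR_instance_type=t3.small

# terraform.tfvars overrides the environment variable:
#   instance_type = "t3.medium"

# A CLI flag has the highest precedence and wins over everything:
#   terraform apply -var="instance_type=m5.large"
```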

Meta-arguments

# count — create multiple similar resources
resource "aws_instance" "server" {
  count         = 3
  instance_type = "t3.micro"
  ami           = "ami-0c94855ba95c71c99"
  tags = {
    Name = "server-${count.index}"
  }
}

# for_each — create resources from a map or set
resource "aws_s3_bucket" "buckets" {
  for_each = toset(["logs", "data", "backups"])
  bucket   = "${var.prefix}-${each.key}"
}

# depends_on — explicit dependency
resource "aws_instance" "app" {
  depends_on = [aws_db_instance.database]
}

# lifecycle
resource "aws_instance" "web" {
  lifecycle {
    create_before_destroy = true
    prevent_destroy       = true   # error if terraform destroy is run
    ignore_changes        = [tags]
  }
}
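for_each also accepts a map, which exposes each.key and each.value separately (bucket names here are illustrative):

```hcl
# for_each over a map — keys become the resource index
resource "aws_s3_bucket" "env_buckets" {
  for_each = {
    logs = "log-archive"
    data = "data-lake"
  }
  # each.key is the map key ("logs"), each.value is its value ("log-archive")
  bucket = "${var.prefix}-${each.value}"
}
```

Because map keys index the resources, removing one entry later destroys only that bucket, unlike count where removing an item shifts every index after it.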

Expressions and functions

# String interpolation
name = "server-${var.environment}"

# Conditional
instance_type = var.environment == "prod" ? "m5.large" : "t3.micro"

# For expression
subnet_ids = [for s in aws_subnet.public : s.id]

# Built-in functions
length(var.subnets)          # count items
upper("hello")               # "HELLO"
file("scripts/init.sh")      # read file contents
base64encode("hello")
jsonencode({key = "value"})
toset(["a", "b", "a"])       # ["a", "b"]
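Functions compose inside expressions. A small sketch using merge and lookup in locals (names are illustrative):

```hcl
locals {
  default_tags = { ManagedBy = "Terraform" }

  # merge() combines maps; later arguments win on key conflicts
  all_tags = merge(local.default_tags, { Environment = var.environment })

  # lookup(map, key, default) falls back when the key is absent
  instance_type = lookup({ prod = "m5.large" }, var.environment, "t3.micro")
}
```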

Domain 4: Outside Core Workflow (18%)

Provisioners (use sparingly)

resource "aws_instance" "web" {
  # ...
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}

Warning

Prefer cloud-init / user_data over provisioners. Provisioners run only at creation time, have no destroy equivalent, and add complexity. Last resort only.

Import existing resources

# Import an existing AWS VPC
terraform import aws_vpc.main vpc-0a1b2c3d4e5f

# From Terraform 1.5+: generate import block
import {
  to = aws_vpc.main
  id = "vpc-0a1b2c3d4e5f"
}
# Then: terraform plan -generate-config-out=generated.tf

Terraform Cloud / Terraform Enterprise

  • Remote state — store state in HCP Terraform (formerly Terraform Cloud) instead of local.
  • Remote execution — plans and applies run in HCP Terraform, not your laptop.
  • Variable sets — reusable variable sets across workspaces.
  • Sentinel — policy-as-code to enforce compliance before apply.
  • Run triggers — trigger downstream workspace runs when upstream finishes.
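Connecting a configuration to HCP Terraform uses the cloud block inside the terraform block (the organization and workspace names below are placeholders):

```hcl
terraform {
  cloud {
    organization = "my-org"        # placeholder organization name

    workspaces {
      name = "networking-prod"     # placeholder workspace name
    }
  }
}

# Then authenticate and initialise:
#   terraform login
#   terraform init
```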

Domain 5: Modules (12%)

Module structure

modules/
  vpc/
    main.tf
    variables.tf
    outputs.tf
    README.md

# Root module calls it:
module "vpc" {
  source  = "./modules/vpc"
  
  # Or from registry:
  # source  = "terraform-aws-modules/vpc/aws"
  # version = "5.0.0"
  
  vpc_cidr    = "10.0.0.0/16"
  environment = var.environment
}

# Access module outputs
resource "aws_instance" "app" {
  subnet_id = module.vpc.private_subnet_ids[0]
}
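For module.vpc.private_subnet_ids to be usable in the root module, the child module must declare it in its outputs.tf. A sketch, assuming the module creates aws_subnet.private with count:

```hcl
# modules/vpc/outputs.tf — illustrative
output "private_subnet_ids" {
  value       = aws_subnet.private[*].id   # splat: list of all subnet IDs
  description = "IDs of the private subnets"
}
```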

Public registry modules

# AWS VPC module from Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
}

Domain 6: Core Workflow (13%)

Essential commands

terraform init       # Download providers and modules; initialise backend
terraform validate   # Check syntax and logical errors
terraform fmt        # Format HCL to canonical style
terraform plan       # Preview changes (+/-/~); creates execution plan
terraform apply      # Apply the plan (prompts for confirmation)
terraform apply -auto-approve   # Skip confirmation (CI/CD)
terraform destroy    # Destroy all managed resources
terraform output     # Show output values
terraform show       # Show current state or plan
terraform state list # List all resources in state
terraform state show aws_instance.web  # Inspect a specific resource
terraform taint aws_instance.web  # Mark resource for recreation (deprecated → use -replace)
terraform apply -replace=aws_instance.web

Refresh and state commands

terraform refresh    # Update state to match real infrastructure (deprecated → use plan -refresh-only)
terraform state mv   # Rename/move resource in state
terraform state rm   # Remove resource from state (doesn't destroy it)
terraform state pull # Get current state file
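From Terraform 1.1+, a resource rename can also be recorded declaratively with a moved block instead of running terraform state mv:

```hcl
# Records a rename in configuration; the next plan/apply updates
# the state address without destroying and recreating the resource.
moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}
```

The moved block is reviewable in version control, which makes it safer for teams than a one-off CLI command.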

Domain 7: State (6%)

Why state matters

Terraform state maps configuration to real-world resources. Without it, Terraform can't determine what already exists.

  • State is stored in terraform.tfstate (local) or a remote backend.
  • Never edit state manually — use terraform state commands.
  • State contains sensitive data (resource IDs, sometimes passwords) — protect it.
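Outputs that carry secrets should be marked sensitive so they are redacted from CLI output. Note this only affects display; the value is still stored in plaintext in the state file (the attribute below is illustrative):

```hcl
output "db_password" {
  value     = aws_db_instance.database.password   # illustrative attribute
  sensitive = true   # redacted in plan/apply output, still present in state
}
```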

Remote backends

# S3 backend (common for AWS teams)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"  # prevent concurrent applies
  }
}

# Azure storage backend
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "tfstate12345"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

Workspaces

terraform workspace new dev
terraform workspace new prod
terraform workspace select prod
terraform workspace list

# Use workspace name in config
resource "aws_instance" "web" {
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"
}

Workspaces vs directories

Workspaces share the same configuration but separate state. For significantly different environments (different accounts, different VPCs), use separate directories/repos rather than workspaces.
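A common per-environment directory layout (structure illustrative):

```text
environments/
  dev/
    main.tf
    backend.tf      # backend key: dev/terraform.tfstate
  prod/
    main.tf
    backend.tf      # backend key: prod/terraform.tfstate
modules/
  vpc/              # shared module called by both environments
```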


Study Plan (3–4 Weeks)

| Week | Focus |
|---|---|
| 1 | Core HCL — resources, variables, outputs, data sources. Run local examples |
| 2 | State, backends, workspaces, import. Meta-arguments (count, for_each, lifecycle) |
| 3 | Modules — create one, use public registry modules. Terraform Cloud basics |
| 4 | Practice exams + fill gaps. Write Terraform for an actual AWS/Azure deployment |

Key Resources

| Resource | Notes |
|---|---|
| HashiCorp Learn | Free official tutorials at developer.hashicorp.com/terraform |
| Zeal Vora (Udemy) | Popular TA-003 specific course |
| Bryan Krausen (Udemy) | Exam-focused course with practice tests |
| Terraform Registry | Browse providers and modules at registry.terraform.io |
| Andrew Brown (freeCodeCamp) | Free 7-hour Terraform course on YouTube |

Common Exam Traps

  • terraform init must be run first — before any other command; required after adding new providers.
  • count vs for_each — use for_each for maps/sets (stable addressing: removing one item doesn't shift the others). Use count only for truly identical resources.
  • depends_on for non-obvious dependencies — Terraform automatically infers dependencies from references. Use depends_on only when there's an implicit dependency it can't detect.
  • Provisioners are a last resort — exam tests that you know they're unreliable and should be avoided.
  • terraform.tfvars is NOT encrypted — never put secrets there. Use env vars (TF_VAR_name) or a secrets manager.
  • Workspaces ≠ separate environments — they share the same backend but have separate state files. Not a full environment isolation strategy.
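For the secrets trap above, a hedged shell sketch (the variable name and value are examples only, not a real credential):

```shell
# Supply a secret via environment variable rather than terraform.tfvars.
# TF_VAR_db_password maps to  variable "db_password"  in configuration.
export TF_VAR_db_password='example-only-not-a-real-secret'
```

Environment variables keep the value out of files that might be committed; pair this with a secrets manager in CI/CD.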