DevOps: Terraform Modules, Workspaces, and Advanced Practices

Beginner Terraform writes resources directly in a single file. Production-grade Terraform organizes code into reusable modules, manages multiple environments cleanly with workspaces, handles sensitive state securely, and integrates into CI/CD pipelines with policy-as-code checks. This article covers the advanced practices that separate a reliable IaC system from a collection of scattered configuration files.

Building Reusable Terraform Modules

A Terraform module is a self-contained package of resources that can be called from other configurations with different input values. Modules prevent copy-paste infrastructure code and enforce consistent patterns across an organization.

Module Structure

modules/
└── aws-rds-postgres/
    ├── main.tf          # Resource definitions
    ├── variables.tf     # Input variables
    ├── outputs.tf       # Output values
    └── README.md        # Usage documentation

modules/aws-rds-postgres/variables.tf

variable "identifier" {
  description = "Unique identifier for the RDS instance"
  type        = string
}

variable "instance_class" {
  description = "RDS instance type"
  type        = string
  default     = "db.t3.micro"
}

variable "allocated_storage" {
  description = "Storage in GB"
  type        = number
  default     = 20
}

variable "database_name" {
  description = "Name of the initial database to create"
  type        = string
}

variable "username" {
  description = "Master database username"
  type        = string
}

variable "password" {
  description = "Master database password"
  type        = string
  sensitive   = true
}

variable "subnet_ids" {
  description = "List of subnet IDs for the DB subnet group"
  type        = list(string)
}

variable "vpc_security_group_ids" {
  description = "Security group IDs for the RDS instance"
  type        = list(string)
}

variable "environment" {
  description = "Environment name for tagging"
  type        = string
}

variable "backup_retention_days" {
  description = "Days to retain automated backups"
  type        = number
  default     = 7
}
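
Input validation catches bad values at plan time instead of failing later at the provider API. As an optional hardening step (a sketch, not part of the module files above), the backup_retention_days variable could carry a validation block enforcing the RDS limit of 0 to 35 days:

variable "backup_retention_days" {
  description = "Days to retain automated backups"
  type        = number
  default     = 7

  validation {
    condition     = var.backup_retention_days >= 0 && var.backup_retention_days <= 35
    error_message = "RDS supports a backup retention period between 0 and 35 days."
  }
}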

modules/aws-rds-postgres/main.tf

resource "aws_db_subnet_group" "this" {
  name       = "${var.identifier}-subnet-group"
  subnet_ids = var.subnet_ids

  tags = {
    Name        = "${var.identifier}-subnet-group"
    Environment = var.environment
  }
}

resource "aws_db_instance" "this" {
  identifier        = var.identifier
  engine            = "postgres"
  engine_version    = "15.4"
  instance_class    = var.instance_class
  allocated_storage = var.allocated_storage
  storage_encrypted = true

  db_name  = var.database_name
  username = var.username
  password = var.password

  db_subnet_group_name   = aws_db_subnet_group.this.name
  vpc_security_group_ids = var.vpc_security_group_ids

  backup_retention_period = var.backup_retention_days
  deletion_protection     = var.environment == "production"
  skip_final_snapshot     = var.environment != "production"

  tags = {
    Name        = var.identifier
    Environment = var.environment
  }
}

modules/aws-rds-postgres/outputs.tf

output "endpoint" {
  description = "Connection endpoint for the RDS instance"
  value       = aws_db_instance.this.endpoint
}

output "port" {
  description = "Database port"
  value       = aws_db_instance.this.port
}

output "instance_id" {
  description = "RDS instance identifier"
  value       = aws_db_instance.this.id
}

Calling the Module for Two Environments

# environments/staging/main.tf
module "app_database" {
  source = "../../modules/aws-rds-postgres"

  identifier     = "webapp-staging-db"
  instance_class = "db.t3.micro"
  database_name  = "appdb"
  username       = "appuser"
  password       = var.db_password
  environment    = "staging"

  subnet_ids             = module.vpc.private_subnet_ids
  vpc_security_group_ids = [aws_security_group.db_sg.id]
  backup_retention_days  = 1
}

# environments/production/main.tf
module "app_database" {
  source = "../../modules/aws-rds-postgres"

  identifier     = "webapp-prod-db"
  instance_class = "db.r6g.large"
  database_name  = "appdb"
  username       = "appuser"
  password       = var.db_password
  environment    = "production"

  subnet_ids             = module.vpc.private_subnet_ids
  vpc_security_group_ids = [aws_security_group.db_sg.id]
  backup_retention_days  = 30
}
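
Both environment configurations reference var.db_password, which each environment must declare itself. A minimal declaration (sketched here; the name matches the usage above) marks the variable sensitive and leaves the value to be supplied at runtime:

# environments/production/variables.tf (staging declares the same)
variable "db_password" {
  description = "Master password for the application database"
  type        = string
  sensitive   = true # redacted from plan and apply output
}

Setting TF_VAR_db_password in the shell environment, or pulling the value from a secrets manager, keeps the password out of version control.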

Terraform Workspaces

Workspaces allow a single Terraform configuration to manage multiple separate state files — useful for creating lightweight environment isolation without duplicating the entire directory structure.

# Create and switch to a new workspace
terraform workspace new staging
terraform workspace new production

# List workspaces
terraform workspace list
# * default
#   staging
#   production

# Switch to staging
terraform workspace select staging

# Use workspace name inside configuration
resource "aws_instance" "app" {
  instance_type = terraform.workspace == "production" ? "t3.medium" : "t3.micro"

  tags = {
    Name        = "webapp-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
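
Chained conditionals on terraform.workspace become unwieldy beyond two environments. A common alternative, sketched here with the same instance sizes as above, is a lookup map keyed by workspace name:

locals {
  instance_type_by_workspace = {
    default    = "t3.micro"
    staging    = "t3.micro"
    production = "t3.medium"
  }
}

resource "aws_instance" "app" {
  # Fall back to t3.micro for any workspace not present in the map
  instance_type = lookup(local.instance_type_by_workspace, terraform.workspace, "t3.micro")

  tags = {
    Name        = "webapp-${terraform.workspace}"
    Environment = terraform.workspace
  }
}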

Note: For complex production setups, separate environment directories (with their own state backends) are generally more maintainable than workspaces. Workspaces work well for simpler use cases.

Advanced State Management

Importing Existing Resources

When bringing manually created infrastructure under Terraform control, use terraform import:

# Import an existing EC2 instance
terraform import aws_instance.web_server i-0a1b2c3d4e5f67890

# Import an existing S3 bucket
terraform import aws_s3_bucket.assets my-company-assets-bucket

After importing, run terraform plan to see the drift between the imported resource and the Terraform configuration. Update the configuration to match.
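
Since Terraform 1.5, imports can also be declared in configuration rather than run imperatively. An import block, shown here for the same EC2 instance as the CLI example, makes the operation reviewable in a pull request and repeatable:

import {
  to = aws_instance.web_server
  id = "i-0a1b2c3d4e5f67890"
}

Running terraform plan -generate-config-out=generated.tf can then scaffold matching resource configuration for the imported object.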

Terraform State Commands

# List all resources in state
terraform state list

# Show details of a specific resource
terraform state show aws_instance.web_server

# Remove a resource from state (without destroying it)
terraform state rm aws_instance.old_server

# Move a resource to a new address (after refactoring)
terraform state mv aws_instance.web aws_instance.web_server

# Pull remote state to local file for inspection
terraform state pull > current-state.json
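
Since Terraform 1.1, refactors have a declarative alternative to terraform state mv: a moved block committed alongside the code, so every collaborator's state is migrated on their next plan or apply (the addresses mirror the CLI example above):

moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}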

State Locking

Remote state backends use locking to prevent two engineers from running apply simultaneously. If a lock gets stuck (for example, after a network failure mid-apply), release it manually:

# Force-unlock a stuck state lock
terraform force-unlock LOCK_ID
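
How locking works depends on the backend. With the widely used S3 backend, for example, a DynamoDB table holds the lock; a sketch follows, with placeholder bucket and table names:

terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks" # table needs a string partition key named LockID
  }
}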

Terraform Policy as Code with Sentinel and OPA

Policy as Code enforces governance rules on Terraform plans before they apply. Examples: prevent public S3 buckets, enforce encryption on all RDS instances, require specific tags on all resources, block instance types above a cost threshold. Common tools include HashiCorp Sentinel (built into Terraform Cloud/Enterprise), Open Policy Agent (OPA), and Checkov.

Checkov – Open Source IaC Policy Scanner

# Scan Terraform code for policy violations
checkov -d ./terraform/

# Example output:
# PASSED checks: 45
# FAILED checks: 3
#
# Check: CKV_AWS_20: "Ensure S3 bucket has ACL defined to not be public"
# FAILED for resource: aws_s3_bucket.logs
# File: /terraform/storage.tf:15
#
# Check: CKV_AWS_23: "Ensure RDS instances have Multi-AZ support enabled"
# FAILED for resource: aws_db_instance.main
# File: /terraform/database.tf:8
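
When a finding is an accepted risk, Checkov supports inline suppression via a skip comment inside the offending resource block (the reason text is free-form and appears in reports; the resource below is illustrative):

resource "aws_db_instance" "main" {
  # checkov:skip=CKV_AWS_23:Single-AZ is acceptable for this non-production database
  identifier = "dev-db"
  engine     = "postgres"
  # ...
}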

Custom OPA Policy Example

# policy/require-tags.rego
package terraform.analysis

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_instance"
  not resource.change.after.tags.Environment
  msg := sprintf("EC2 instance '%v' is missing the required 'Environment' tag", [resource.address])
}

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_instance"
  not resource.change.after.tags.Owner
  msg := sprintf("EC2 instance '%v' is missing the required 'Owner' tag", [resource.address])
}
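
To evaluate these rules against a real plan, export the plan as JSON (the format that provides input.resource_changes) and feed it to opa eval; the file names here are illustrative:

# Produce a machine-readable plan
terraform plan -out=plan.bin
terraform show -json plan.bin > plan.json

# Evaluate the deny rules against the plan JSON
opa eval --data policy/require-tags.rego --input plan.json "data.terraform.analysis.deny"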

Terraform CI/CD Pipeline with Atlantis

Atlantis is an open-source tool that automates terraform plan and terraform apply through pull request comments. Every PR that changes Terraform code automatically gets a plan posted as a comment for review.

Atlantis Workflow

  1. Engineer opens a PR modifying terraform/production/main.tf.
  2. Atlantis detects the change, runs terraform plan, and posts the output as a PR comment.
  3. Team reviews the plan — checking exactly what resources will change.
  4. An authorized reviewer comments atlantis apply on the PR.
  5. Atlantis runs terraform apply and posts the result.
  6. With automerge enabled in the Atlantis configuration, the PR merges automatically after a successful apply.

# atlantis.yaml in repository root
version: 3
projects:
  - name: production-webapp
    dir: terraform/environments/production
    workspace: default
    autoplan:
      when_modified: ["*.tf", "../modules/**/*.tf"]
      enabled: true
    apply_requirements: [approved, mergeable]

Managing Multiple Environments at Scale

Recommended Directory Structure for Large Teams

infrastructure/
├── modules/
│   ├── vpc/
│   ├── eks-cluster/
│   ├── rds-postgres/
│   └── s3-bucket/
├── environments/
│   ├── dev/
│   │   ├── backend.tf      # dev state backend config
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   │   ├── backend.tf
│   │   ├── main.tf
│   │   └── terraform.tfvars
│   └── production/
│       ├── backend.tf
│       ├── main.tf
│       └── terraform.tfvars
└── global/
    ├── iam/
    ├── dns/
    └── monitoring/

Summary

  • Terraform modules encapsulate reusable infrastructure patterns — build once, use across environments with different inputs.
  • Sensitive variables are marked with sensitive = true to prevent them from appearing in logs and plans.
  • State management commands let teams import existing resources, refactor configurations, and resolve stuck locks.
  • Policy as code with Checkov, Sentinel, or OPA enforces governance rules before infrastructure changes apply.
  • Atlantis automates the plan-review-apply workflow through pull requests — bringing the same peer review rigor to infrastructure that application code receives.
  • Separate environment directories with independent state backends provide the cleanest isolation for production-grade Terraform at scale.
