GCP Infrastructure as Code with Terraform

Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp. It allows defining GCP infrastructure — VMs, networks, databases, IAM policies — in human-readable configuration files, and then creating, updating, or destroying that infrastructure by running commands. Instead of clicking through the Cloud Console or running individual gcloud commands, the entire infrastructure is written as code, stored in version control, and applied repeatably.

Think of Terraform like a blueprint for a building. An architect (developer) writes the blueprint (Terraform config). The construction team (Terraform) reads the blueprint and builds exactly what it describes. If the blueprint changes, the team updates only what changed — without tearing everything down. Multiple identical buildings (environments) can be built from the same blueprint.

Why Infrastructure as Code?

Manual Console/CLI                                  | Terraform (IaC)
Hard to reproduce (steps get forgotten)             | Fully reproducible (run the same code any number of times)
No version history                                  | Infrastructure changes tracked in Git
Drift: console changes diverge from documentation   | terraform plan detects drift; apply reconciles it
Staging ≠ Production (manual changes crept in)      | Staging = Production (same code, different variable values)
Destroying all resources is tedious                 | terraform destroy removes everything cleanly

Terraform Core Concepts

Provider

A provider is the plugin that connects Terraform to a cloud platform's API. The GCP provider (hashicorp/google) knows how to create and manage all GCP resources.

Resource

A resource is a single infrastructure component — a VM, a bucket, a firewall rule. Terraform creates and manages resources declared in .tf files.
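A minimal resource declaration looks like this (the bucket name below is a placeholder; bucket names must be globally unique):

```hcl
# One resource: a Cloud Storage bucket.
# "example" is the local name used to reference it elsewhere in the config.
resource "google_storage_bucket" "example" {
  name     = "my-unique-bucket-name"  # placeholder — must be globally unique
  location = "US"
}
```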

State

Terraform maintains a state file (terraform.tfstate) that records what infrastructure it has created. It compares the current state with the desired configuration to determine what changes to make.
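The state can be inspected directly with the CLI (assuming the resources from this article have already been applied):

```shell
# List every resource recorded in the state file
terraform state list

# Show the attributes Terraform has recorded for one resource
terraform state show google_compute_instance.web_server
```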

Plan → Apply Workflow

Write .tf config files
        │
        ▼
terraform init    → Download provider plugins
        │
        ▼
terraform plan    → Show what WILL change (dry run — no actual changes)
        │
        ▼
terraform apply   → Actually create/update/delete resources
        │
        ▼
terraform destroy → Remove all managed resources (cleanup)

Project Structure

my-gcp-infra/
├── provider.tf      ← Terraform and GCP provider configuration
├── main.tf          ← Main resource definitions
├── variables.tf     ← Input variable declarations
├── outputs.tf       ← Output value definitions
├── terraform.tfvars ← Variable values (not committed to Git)
└── backend.tf       ← Remote state configuration (Cloud Storage)

Writing Terraform Configuration

provider.tf – Configure GCP Provider

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
  required_version = ">= 1.5"
}

provider "google" {
  project = var.project_id
  region  = var.region
}

variables.tf – Define Input Variables

variable "project_id" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}

variable "zone" {
  description = "GCP zone"
  type        = string
  default     = "us-central1-a"
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}
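A variable declaration can also enforce allowed values. A validation block (Terraform 0.13+) fails fast at plan time instead of creating misnamed resources; note that the list must match the team's actual naming (the terraform.tfvars example later in this article uses "production", which this particular rule would reject):

```hcl
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```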

terraform.tfvars – Variable Values

project_id  = "my-gcp-project-123"
region      = "asia-south1"
zone        = "asia-south1-a"
environment = "production"
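A common pattern is to keep one .tfvars file per environment and select it at apply time (the file names here are an assumed layout, not something Terraform requires):

```shell
# Same code, different variable values per environment
terraform apply -var-file="dev.tfvars"
terraform apply -var-file="production.tfvars"
```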

main.tf – Create GCP Resources

# VPC Network
resource "google_compute_network" "app_vpc" {
  name                    = "${var.environment}-app-vpc"
  auto_create_subnetworks = false
}

# Subnet
resource "google_compute_subnetwork" "app_subnet" {
  name          = "${var.environment}-app-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = var.region
  network       = google_compute_network.app_vpc.id
}

# Firewall Rule — Allow HTTP/HTTPS
resource "google_compute_firewall" "allow_web" {
  name    = "${var.environment}-allow-web"
  network = google_compute_network.app_vpc.name

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["web-server"]
}

# Compute Engine VM
resource "google_compute_instance" "web_server" {
  name         = "${var.environment}-web-server"
  machine_type = "e2-medium"
  zone         = var.zone

  tags = ["web-server"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
      size  = 20  # GB
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.app_subnet.id
    access_config {}  # Assigns external IP
  }

  metadata_startup_script = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y apache2
    systemctl start apache2
  EOF
}

# Cloud Storage Bucket
resource "google_storage_bucket" "app_bucket" {
  name          = "${var.project_id}-${var.environment}-assets"
  location      = var.region
  force_destroy = true

  uniform_bucket_level_access = true

  lifecycle_rule {
    condition {
      age = 90  # days since object creation
    }
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
  }
}

# Cloud SQL Instance
resource "google_sql_database_instance" "app_db" {
  name             = "${var.environment}-app-db"
  database_version = "MYSQL_8_0"
  region           = var.region

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = false
      # A private IP requires a Private Services Access connection
      # (google_service_networking_connection) on this VPC
      private_network = google_compute_network.app_vpc.id
    }

    backup_configuration {
      enabled = true
    }
  }

  deletion_protection = false  # Set to true in production
}
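Because the Cloud SQL instance above uses a private IP (ipv4_enabled = false), the VPC needs a Private Services Access connection before the instance can be created. A sketch of the two extra resources this requires (resource names are illustrative):

```hcl
# Reserve an internal IP range for Google-managed services on this VPC
resource "google_compute_global_address" "private_ip_range" {
  name          = "private-service-range"  # illustrative name
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.app_vpc.id
}

# Peer the VPC with the service networking API that Cloud SQL uses
resource "google_service_networking_connection" "private_vpc" {
  network                 = google_compute_network.app_vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_range.name]
}
```

The google_sql_database_instance should then declare depends_on = [google_service_networking_connection.private_vpc] so Terraform creates the peering before the database.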

outputs.tf – Export Values

output "vm_external_ip" {
  value       = google_compute_instance.web_server.network_interface[0].access_config[0].nat_ip
  description = "External IP of the web server"
}

output "db_connection_name" {
  value       = google_sql_database_instance.app_db.connection_name
  description = "Cloud SQL connection name for the auth proxy"
}

output "bucket_url" {
  value       = google_storage_bucket.app_bucket.url
  description = "Cloud Storage bucket URL"
}
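Outputs can also be read back any time after apply, which is handy in scripts and CI:

```shell
# Print one output value without quotes (Terraform 0.15+)
terraform output -raw vm_external_ip

# All outputs as machine-readable JSON
terraform output -json
```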

Running Terraform

# Initialize — download providers
terraform init

# Preview changes — no resources created yet
terraform plan

# Apply changes — creates all resources
terraform apply

# After apply, outputs are displayed:
# vm_external_ip = "34.68.100.25"
# db_connection_name = "my-project:asia-south1:production-app-db"
# bucket_url = "gs://my-gcp-project-123-production-assets"

# Destroy all resources (cleanup dev environment)
terraform destroy
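In CI pipelines, the plan is usually saved to a file so that apply executes exactly what was reviewed, with no window for the configuration to change in between:

```shell
# Save the plan, review it, then apply that exact plan
terraform plan -out=tfplan
terraform apply tfplan
```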

Remote State with Cloud Storage

By default, Terraform stores state locally in terraform.tfstate. For teams, state should be stored remotely so all members work against the same state; the Cloud Storage (gcs) backend also locks the state during operations, preventing two people from running apply at the same time.

# backend.tf — store state in Cloud Storage
terraform {
  backend "gcs" {
    bucket = "my-project-terraform-state"
    prefix = "infra/production"
  }
}
# Create the state bucket first (manually, once)
gsutil mb -l us-central1 gs://my-project-terraform-state
gsutil versioning set on gs://my-project-terraform-state

# Then initialize Terraform with the remote backend
terraform init
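If local state already exists from earlier runs, Terraform offers to copy it into the bucket during init; on Terraform 1.1+ this can be done explicitly:

```shell
# Move existing local state into the newly configured gcs backend
terraform init -migrate-state
```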

Key Takeaways

  • Terraform defines GCP infrastructure as code in .tf files using HashiCorp Configuration Language (HCL).
  • The workflow is: terraform init → terraform plan → terraform apply.
  • Variables separate environment-specific values (project ID, region) from resource definitions.
  • Terraform state tracks what has been created — store it remotely in Cloud Storage for team collaboration.
  • Resources reference each other using resource identifiers (google_compute_network.app_vpc.id).
  • The same Terraform code creates dev, staging, and production environments by changing variable values.
