Using Terraform (HCL) to design, deploy, and manage repeatable multi-region AWS infrastructure.

Automating a setup that takes an hour to complete manually, because repetition is a problem best solved once.

I’ve been experimenting with Terraform for some time now. Once you understand what HCL can and cannot do, it becomes a genuinely enjoyable and worthwhile exercise.

For this project, I chose to store the Terraform state remotely in an S3 bucket, with S3-based state locking enabled. This follows Terraform best practices and keeps the state safe from concurrent modification.

Before getting started, there are a few prerequisites:

  • Update the list of regions in variables.tf to add or remove deployment targets.
    • The first entry is treated as the primary (master) region, from which files are replicated.
  • Set the domain name and domain slug.
    • The domain must already exist as a hosted zone in Route 53.
  • Provide your AWS account_id.
  • Create and configure your own backend.tf.
    • In my case, I use an S3 bucket with state locking enabled, as recommended for Terraform state management; a minimal example follows this list.
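
For reference, here is a minimal backend.tf along the lines of what I use. The bucket name, key, and region are placeholders (backend blocks cannot reference variables, so they have to be literal values), and S3-native locking via use_lockfile assumes Terraform 1.10 or newer; on older releases, a DynamoDB lock table does the same job.

terraform {
  backend "s3" {
    bucket       = "your-terraform-state-bucket"  # placeholder: an existing bucket you own
    key          = "makebuilds/terraform.tfstate" # placeholder: path of the state object
    region       = "eu-west-1"                    # placeholder: region of the state bucket
    encrypt      = true                           # encrypt the state object at rest
    use_lockfile = true                           # S3-native state locking (Terraform >= 1.10)
  }
}

With the backend in place, variables.tf defines the deployment targets:
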
variable "buckets" {
  type = list(object({
    region = string
    is_main = bool
    priority = optional(number)
  }))
  default = [
    {
      region   = "ap-northeast-1"
      is_main  = true
      priority = null
    },
    {
      region   = "eu-north-1"
      is_main  = false
      priority = 0
    },
    {
      region   = "eu-west-1"
      is_main  = false
      priority = 1
    },
    {
      region   = "us-west-2"
      is_main  = false
      priority = 2
    },
    {
      region   = "us-east-1"
      is_main  = false
      priority = 3
    },
    {
      region   = "ap-south-1"
      is_main  = false
      priority = 4
    }
  ]
}

variable "environment_domain" {
  type    = string
  default = "makebuilds.com"
}

variable "environment_domain_slug" {
  type    = string
  default = "makebuilds-com"
}

variable "account_id" {
  type    = string
  default = "111111111111"
}

As you may have noticed, the regions defined in variables.tf differ from those used in this website’s current hosting setup. The intent was to experiment with S3 directory buckets in pursuit of even lower latency. Unfortunately, from what I’ve been able to determine, directory buckets are neither compatible with multi-region access points nor available in every AWS region, so classic S3 buckets will have to do.

At this point, I’ll let the HCL do the talking:

locals {
  all_buckets = { for bucket in var.buckets : bucket.region => bucket }
}

resource "aws_s3_bucket" "s3_buckets" {
  for_each = local.all_buckets
  bucket = "${var.environment_domain_slug}-${each.value.region}"
  region = each.value.region

}

# Enable versioning for the buckets 
resource "aws_s3_bucket_versioning" "s3_versioning" {
    for_each = local.all_buckets
    depends_on = [aws_s3_bucket.s3_buckets]
    bucket = "${var.environment_domain_slug}-${each.value.region}"
    region = each.value.region 
  versioning_configuration {
    status = "Enabled" # Valid values: "Enabled", "Suspended"
  }
}

# IAM Role for Replication
resource "aws_iam_role" "replication_role" {
  name = "${var.environment_domain_slug}-s3-replication-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# IAM Policy for Replication
resource "aws_iam_policy" "replication_policy" { 
  for_each = { for s3_buckets in var.buckets : s3_buckets.region => s3_buckets if !s3_buckets.is_main }
  name        = "${var.environment_domain_slug}-${each.value.region}-s3-replication-policy"
  description = "Allows S3 replication between main and secondary buckets"
    depends_on = [aws_iam_role.replication_role]
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:*"]
        Resource = "*"
      }
    ]
  })
}

# Attach Policy to Role
resource "aws_iam_role_policy_attachment" "replication_policy_attach" {
  for_each = { for s3_buckets in var.buckets : s3_buckets.region => s3_buckets if !s3_buckets.is_main }
  depends_on = [aws_s3_bucket_versioning.s3_versioning]
  role       = "${var.environment_domain_slug}-s3-replication-role"
  policy_arn = "arn:aws:iam::${var.account_id}:policy/${var.environment_domain_slug}-${each.value.region}-s3-replication-policy"
 
}

# S3 Replication Configuration
resource "aws_s3_bucket_replication_configuration" "replication" {
  bucket   = "${var.environment_domain_slug}-${lookup(var.buckets[0], "region")}"
  role     = "arn:aws:iam::${var.account_id}:role/${var.environment_domain_slug}-s3-replication-role"
  region = "${lookup(var.buckets[0], "region")}"
  depends_on = [aws_s3_bucket_versioning.s3_versioning]
dynamic "rule" {
  for_each = { for s3_buckets in var.buckets : s3_buckets.region => s3_buckets if !s3_buckets.is_main}
  content {
  id     = "${var.environment_domain_slug}-${rule.value.region}"
  status = "Enabled"
  priority = "${rule.value.priority}"

    filter {
      prefix = "" # Replicate all objects
    }

    destination {
      bucket        = "arn:aws:s3:::${var.environment_domain_slug}-${rule.value.region}"
      storage_class = "STANDARD"
      replication_time {
        status = "Enabled"
        time {
          minutes = 15
        }
      }
      metrics {
  event_threshold {
    minutes = 15
  }
  status = "Enabled"
}
    }
    delete_marker_replication {
      status = "Enabled"
    }
  }
 }
}

# S3 SSE-S3 encryption 

resource "aws_s3_bucket_server_side_encryption_configuration" "encrypt_all_buckets" {
  for_each = local.all_buckets
  bucket = "${var.environment_domain_slug}-${each.value.region}"
  region = each.value.region
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # This specifies SSE-S3
    }
    bucket_key_enabled = true
  }
}


# Create the Multi-Region Access Point (MRAP)
resource "aws_s3control_multi_region_access_point" "mrap" {
  # MRAPs are global resources and do not require a specific region provider
  details {
  name = "${var.environment_domain_slug}-mrap"
    
  # Configure the regions associated with the MRAP
  dynamic "region" {
    for_each = local.all_buckets
    content {
    bucket = "${var.environment_domain_slug}-${region.value.region}"
    }
  }
 }
}

resource "aws_s3control_multi_region_access_point_policy" "mrap" {
  details {
    name = "${var.environment_domain_slug}-mrap"
    policy = jsonencode({
      "Version" : "2012-10-17",
      "Statement" : [
        {
          "Sid" : "AllowMultiRegionAccessPointRead",
          "Effect" : "Allow",
      
          "Principal" : {
            "AWS" : "${var.account_id}"
            "Service": ["cloudfront.amazonaws.com"]
          },
          "Action" : ["s3:GetObject", "S3:ListBucket"],
          "Resource" : ["arn:aws:s3::${var.account_id}:accesspoint/${aws_s3control_multi_region_access_point.mrap.alias}/object/*",
                       "arn:aws:s3::${var.account_id}:accesspoint/${aws_s3control_multi_region_access_point.mrap.alias}"],
        }
      ],
    })
  }
}

# SSL Certificate and validation

resource "aws_acm_certificate" "cert" {
  region            = "us-east-1"
  domain_name       = "*.${var.environment_domain}"
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}
data "aws_route53_zone" "dns_zone" {
  name         = "${var.environment_domain}"
  private_zone = false
}

resource "aws_route53_record" "dns_records" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.dns_zone.zone_id
}
resource "aws_acm_certificate_validation" "validated_cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.dns_records : record.fqdn]
}

# CloudFront


resource "aws_cloudfront_distribution" "mrap_distribution" {
  origin {
    custom_origin_config {
        http_port              = "80"
        https_port             = "443"
        origin_protocol_policy = "https-only"
        origin_read_timeout    = "30"
        origin_ssl_protocols   = ["TLSv1.2", "TLSv1.1", "TLSv1"]
      }
    domain_name              = aws_s3control_multi_region_access_point.mrap.domain_name
    origin_id                = "mrap_origin_id"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  aliases = ["www.${var.environment_domain}"]

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "mrap_origin_id"
    lambda_function_association {
      event_type   = "origin-request"
      lambda_arn   = "${aws_lambda_function.edge_lambda.arn}:${aws_lambda_function.edge_lambda.version}"
    }
    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "mrap_origin_id"
    lambda_function_association {
      event_type   = "origin-request"
      lambda_arn   = "${aws_lambda_function.edge_lambda.arn}:${aws_lambda_function.edge_lambda.version}"
    }
    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.cert.arn
    ssl_support_method  = "sni-only"
  }
}

# Create Route53 records for the CloudFront distribution aliases
data "aws_route53_zone" "site_dns_zone" {
  name = var.environment_domain
}

resource "aws_route53_record" "cloudfront" {
  for_each = aws_cloudfront_distribution.mrap_distribution.aliases
  zone_id  = data.aws_route53_zone.site_dns_zone.zone_id
  name     = each.value
  type     = "A"

  alias {
    name                   = aws_cloudfront_distribution.mrap_distribution.domain_name
    zone_id                = aws_cloudfront_distribution.mrap_distribution.hosted_zone_id
    evaluate_target_health = false
  }
}

# Lambda@Edge settings


resource "aws_lambda_function" "edge_lambda" {
  provider = aws # deploy to us-east-1
  region = "us-east-1"
  function_name = "${var.environment_domain_slug}-lambda"

  filename = "./mrap-to-cdn-proxy-us-east-1.zip"
  #source_code_hash = data.archive_file.lambda_zip.output_base64sha256

  handler = "index.handler"
  runtime = "nodejs24.x"
  role    = aws_iam_role.lambda_exec.arn
  publish = true # Must publish
}
resource "aws_iam_role" "lambda_exec" {
  name = "${var.environment_domain_slug}-lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "edgelambda.amazonaws.com"
        }
      }
    ]
  })
}
resource "aws_lambda_permission" "allow_cloudfront" {
  provider = aws
  statement_id  = "AllowExecutionFromCloudFront"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.edge_lambda.arn
  principal     = "edgelambda.amazonaws.com"
  source_arn    = "arn:aws:cloudfront::${var.account_id}:distribution/${aws_cloudfront_distribution.mrap_distribution.id}"
}
# IAM Policy for Lambda
resource "aws_iam_policy" "lambda_policy" { 
  name        = "${var.environment_domain_slug}-lambda"
  description = "Allows @ edge execution"

  policy = jsonencode({
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
				"iam:PassRole",
				"lambda:DisableReplication",
				"lambda:GetFunction",
				"cloudfront:ListDistributionsByLambdaFunction",
				"lambda:DeleteFunction"
			],
			"Resource": "*"
		},
		{
			"Action": [
				"logs:CreateLogGroup",
				"logs:CreateLogStream",
				"logs:PutLogEvents"
			],
			"Effect": "Allow",
			"Resource": "arn:aws:logs:*:*:*"
		},
		{
			"Action": [
				"s3:GetObject",
				"s3:ListBucket"
			],
			"Effect": "Allow",
			"Resource": [
				"*"
			]
		}
	]
})
}

# Attach Policy to lambda Role
resource "aws_iam_role_policy_attachment" "lambda_policy_attach" {
  role       = "${var.environment_domain_slug}-lambda"
  depends_on = [ aws_iam_role.lambda_exec ]
  policy_arn = "arn:aws:iam::${var.account_id}:policy/${var.environment_domain_slug}-lambda"
 
}

There are a few areas I’d still like to improve. Chief among them are the IAM permissions used for S3 replication, which are currently more permissive than I’m comfortable with. I’ll need to spend some time digging further into the documentation to tighten these down to the minimum required scope.
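
As a sketch of the direction I have in mind, the replication_policy resource above could be scoped to the actions the S3 replication documentation lists for replication roles, and to the specific buckets involved. This is untested and kept deliberately close to the existing resource:

resource "aws_iam_policy" "replication_policy" {
  for_each    = { for s3_buckets in var.buckets : s3_buckets.region => s3_buckets if !s3_buckets.is_main }
  name        = "${var.environment_domain_slug}-${each.value.region}-s3-replication-policy"
  description = "Allows S3 replication between main and secondary buckets"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Read the replication configuration of, and list, the main bucket
        Effect   = "Allow"
        Action   = ["s3:GetReplicationConfiguration", "s3:ListBucket"]
        Resource = "arn:aws:s3:::${var.environment_domain_slug}-${var.buckets[0].region}"
      },
      {
        # Read object versions, ACLs and tags from the main bucket
        Effect = "Allow"
        Action = [
          "s3:GetObjectVersionForReplication",
          "s3:GetObjectVersionAcl",
          "s3:GetObjectVersionTagging"
        ]
        Resource = "arn:aws:s3:::${var.environment_domain_slug}-${var.buckets[0].region}/*"
      },
      {
        # Write replicated objects, delete markers and tags to this destination bucket
        Effect   = "Allow"
        Action   = ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"]
        Resource = "arn:aws:s3:::${var.environment_domain_slug}-${each.value.region}/*"
      }
    ]
  })
}

Since the buckets use SSE-S3 rather than SSE-KMS, no additional kms:Decrypt or kms:Encrypt statements should be needed on top of this.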

That said, the process itself was genuinely enjoyable. Figuring things out as I went reinforced just how powerful Terraform can be, especially once you realize that resource blocks can contain dynamic blocks, which effectively lets you nest for_each expressions. The replication configuration above is the clearest example: the resource’s own arguments come from the primary bucket, while the dynamic rule block runs its own for_each over the secondary buckets. At that point, the configuration becomes significantly less verbose and far less repetitive.

Source on GitLab for the curious.