Automating S3 Bucket Deployment with Terraform 🚀

In this blog, we'll explore how to automate AWS S3 bucket creation and file uploads using Terraform in just a few steps.

Prerequisites

  1. Terraform Installed: Ensure Terraform is installed (run terraform -version to verify).

  2. AWS Credentials: Configure access with aws configure.

  3. Basic Knowledge of Terraform syntax.
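
If you prefer to be explicit rather than rely on the default credential chain, the AWS provider can also point at a credentials file and profile directly. A minimal sketch (the profile name default and the file path are the standard locations created by aws configure):

```hcl
provider "aws" {
  region                   = "eu-north-1"
  profile                  = "default"                    # profile written by `aws configure`
  shared_credentials_files = ["~/.aws/credentials"]       # default credentials file location
}
```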

Code Explanation

Here's a simple Terraform script to:

  1. Create an S3 bucket.

  2. Upload a file (local myfile.txt) to the bucket.

  3. Manage backend state with S3 remote backend.

  4. Generate a unique bucket name with the Random provider.

Step 1: Terraform Setup

  1. Providers: Specify which cloud provider (AWS) and its version to use.

  2. Backend: Use S3 to store Terraform's state file remotely for better management. Note that the backend bucket must already exist before you run terraform init — Terraform cannot store its state in a bucket it has not yet created.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # AWS Provider
      version = "5.54.1"          # Version to use
    }
  }
  backend "s3" {                 # Store Terraform state in S3
    bucket = "demo-bucket-amit12345"
    key    = "backend.tfstate"   # File name for state
    region = "eu-north-1"        # AWS region
  }
}

Step 2: AWS Provider Configuration

Define which AWS region to deploy the resources to.

provider "aws" {
  region = "eu-north-1"   # AWS Region
}

Step 3: Create S3 Bucket

Use aws_s3_bucket resource to create an S3 bucket.

resource "aws_s3_bucket" "demo-bucket" {
  bucket = "demo-bucket-amit12345"  # Unique bucket name
}

Step 4: Upload File to S3

Use aws_s3_object to upload a local file (myfile.txt) to the created S3 bucket.

resource "aws_s3_object" "bucket-data" {
  bucket = aws_s3_bucket.demo-bucket.bucket  # Reference the bucket
  source = "./myfile.txt"      # Path to your local file
  key    = "mydata.txt"        # Object name in S3
}
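
One caveat: aws_s3_object re-uploads only when its arguments change, not when the file's contents change. A common pattern (a sketch, not required for this demo) is to add an etag derived from the file's MD5 hash so that editing myfile.txt triggers a re-upload on the next apply:

```hcl
resource "aws_s3_object" "bucket-data" {
  bucket = aws_s3_bucket.demo-bucket.bucket
  source = "./myfile.txt"
  key    = "mydata.txt"
  etag   = filemd5("./myfile.txt")  # changes whenever the file content changes
}
```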

Step 5: Steps to Execute

  1. Save this code in a file called main.tf.

  2. Open your terminal and run:

    • Initialize Terraform:

        terraform init
      
    • Apply the Configuration:

        terraform apply -auto-approve
      
  3. Verify the S3 bucket and uploaded file (mydata.txt) in the AWS Management Console.

After applying, open the S3 console and check:

  1. You will see the bucket demo-bucket-amit12345.

  2. Inside it is the backend.tfstate file, at the key you specified.

  3. Click Open to view it.

  4. Here's your backend.tfstate — the Terraform state for this deployment.

File Content

myfile.txt

Hello World from Amit's S3!!


Key Concepts

  1. Backend: S3 stores Terraform's state file for remote management.

  2. S3 Bucket: A simple storage solution on AWS.

  3. S3 Object: Stores and manages files in the bucket.

Extra Points to Note About the Backend

In Terraform, a backend is responsible for storing and managing the state file. The state file (terraform.tfstate) is essential because it keeps track of your infrastructure and its configuration.

Why Use a Backend?

  1. State Storage: Backends store the state file securely, ensuring infrastructure changes are consistent and reliable.

  2. Collaboration: For teams, remote backends allow multiple users to collaborate on the same infrastructure.

  3. Locking: Prevents conflicts by locking the state file when one person is making changes.

  4. Disaster Recovery: Remote storage of the state file ensures it is not lost even if your local system fails.
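
State locking with an S3 backend is classically handled with a DynamoDB table. A sketch of the backend block with locking enabled (the table name terraform-locks is an assumption; the table must already exist with a string partition key named LockID):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-backend"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # pre-created table, partition key "LockID" (string)
  }
}
```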

terraform {
  backend "s3" {
    bucket = "my-terraform-backend"  # S3 bucket name
    key    = "state/terraform.tfstate"  # Path to store the state file
    region = "us-east-1"  # AWS Region
  }
}

Explanation for backend:

  • bucket: Name of the S3 bucket where the state file will be stored.

  • key: Path within the bucket where the state file will be saved.

  • region: AWS region where the S3 bucket exists.

Benefits of Remote Backend (e.g., S3)

  • Centralized State: All Terraform operations use the same state file.

  • Team Collaboration: Multiple team members can use the same infrastructure code without conflict.

  • Reliability: If your local machine crashes, the state file is still safe in the cloud.

  • Versioning: Services like S3 support state file versioning, providing a history of changes.
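
If the backend bucket is itself managed by Terraform (in a separate, bootstrap configuration), versioning can be enabled with the aws_s3_bucket_versioning resource. A sketch, assuming the bucket name my-terraform-backend from the example above:

```hcl
resource "aws_s3_bucket_versioning" "backend_versioning" {
  bucket = "my-terraform-backend"  # assumed backend bucket, managed elsewhere
  versioning_configuration {
    status = "Enabled"
  }
}
```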

Don't Forget to Add the Random Provider

With a few changes to the code above, you can generate a random bucket name. Since AWS S3 bucket names must be globally unique, this approach is more reliable than hard-coding one.

Here is the adjusted configuration in full:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # AWS Provider
      version = "5.54.1"          # Version to use
    }
    random = {
      source  = "hashicorp/random"
      version = "3.6.2"
    }
  }
  backend "s3" {                 # Store Terraform state in S3
    bucket = "demo-bucket-amit12345"
    key    = "backend.tfstate"   # File name for state
    region = "eu-north-1"        # AWS region
  }
}

resource "random_id" "rand_id" {
  byte_length = 8
}

provider "aws" {
  region = "eu-north-1"   # AWS Region
}

resource "aws_s3_bucket" "demo-bucket" {
  bucket = "demo-bucket-${random_id.rand_id.hex}"  # Unique bucket name
}

resource "aws_s3_object" "bucket-data" {
  bucket = aws_s3_bucket.demo-bucket.bucket  # Reference the bucket
  source = "./myfile.txt"      # Path to your local file
  key    = "mydata.txt"        # Object name in S3
}

output "name" {
  value = random_id.rand_id.id
}

Now check AWS S3 and you will see the randomly named bucket:

The random_id resource is used to generate a unique, random identifier for the infrastructure. Here’s the extracted random section and its explanation:

resource "random_id" "rand_id" {
  byte_length = 8
}

Explanation

  1. Resource "random_id":
    The random_id resource comes from the hashicorp/random provider. It is used to generate a random string or identifier, which can be utilized to ensure uniqueness in your infrastructure.

  2. byte_length = 8:
    The byte_length specifies the length of the random string in bytes. A byte length of 8 means the generated random string will be 8 bytes long, which is equivalent to a hexadecimal string of 16 characters.
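
random_id also supports a keepers map, which controls when a new ID is generated: the ID stays stable until one of the keeper values changes. A sketch (the environment key and its value are illustrative):

```hcl
resource "random_id" "rand_id" {
  byte_length = 8
  keepers = {
    # A new random ID is generated whenever this value changes;
    # otherwise the same ID is reused across applies.
    environment = "dev"
  }
}
```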

Usage in Your Code

  • The random_id.rand_id is referenced in the S3 bucket name to make it unique:

      bucket = "demo-bucket-${random_id.rand_id.hex}"
    
    • ${random_id.rand_id.hex} converts the generated random bytes into a hexadecimal string.

    • This ensures the bucket name is unique (as S3 bucket names must be globally unique).

  • The output of this random ID is also provided as:

      output "name" {
        value = random_id.rand_id.id
      }
    
    • This outputs the generated random ID for easy reference (note that .id is the base64-encoded form, while the bucket name uses the .hex form).

Why Use a Random ID?

  • Avoid Naming Conflicts: Ensures that the bucket name (demo-bucket-<random-id>) is unique, avoiding errors when creating S3 buckets, as bucket names must be globally unique.

  • Dynamic Infrastructure: Helps in creating non-colliding names dynamically without manual intervention.
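
If you prefer human-readable names over hex strings, the same hashicorp/random provider also offers random_pet, which could replace the bucket resource above. A sketch:

```hcl
resource "random_pet" "bucket_suffix" {
  length = 2   # number of words in the generated name
}

resource "aws_s3_bucket" "demo-bucket" {
  bucket = "demo-bucket-${random_pet.bucket_suffix.id}"  # e.g. "demo-bucket-<adjective>-<animal>"
}
```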

Summary

  1. Terraform Setup: Define providers and backend.

  2. Create an S3 Bucket: Store objects.

  3. Upload a File: Use aws_s3_object to upload myfile.txt.

  4. Run Commands: Use terraform init and terraform apply.

That’s it! You've successfully automated S3 bucket creation and file upload. 🎉
