• Rahul Agrawal

An Explanatory Guide to Terraform

Updated: Sep 19



This blog explores HashiCorp Terraform, an Infrastructure as Code (IaC) tool that empowers DevOps and engineering teams in the world of cloud computing. Like AWS CloudFormation, Terraform lets you use code to define the desired state of your infrastructure and deploy those changes to AWS accounts, Google Cloud projects, and many other platforms (details will be described later). Let's take a closer look at the main use cases and concepts related to IaC and Terraform with AWS.


What is IaC?

The administration and deployment of infrastructure using code rather than human procedures is known as Infrastructure as Code (IaC).

IaC generates a configuration file that contains information about the infrastructure and makes it easier to change and distribute settings. Additionally, it guarantees that you always deploy to the same environment. IaC provides configuration management and helps prevent ad hoc, undocumented configuration modifications by coding and documenting configuration specifications. Configuration files should be subject to source control, just like any other software source code file, as version control is a crucial component of IaC.

You can also divide your infrastructure into modular components when you deploy it as code, and combine those components with automation in a variety of ways.


Why Do We Require IaC?

  • IaC is a crucial component in implementing continuous integration/continuous delivery (CI/CD) and DevOps processes. IaC relieves developers of the majority of provisioning effort so that they can just run a script to get their infrastructure ready.

  • In this way, infrastructure installations are not delayed and sysadmins are not required to handle laborious manual procedures.

  • Through the entire application lifetime, from integration and testing to delivery and deployment, CI/CD relies on constant automation and continuous monitoring.

  • Thanks to IaC, developers no longer have to manually deploy and manage servers, operating systems, storage, and other infrastructure components each time they build or release an application. When you code your infrastructure, you obtain a reusable template for deployment. Examples of infrastructure-as-code tools include AWS CloudFormation, Red Hat Ansible, Chef, Puppet, SaltStack, and HashiCorp Terraform. Some tools rely on a domain-specific language (DSL), while others use a standard template format such as YAML or JSON.


Declarative vs. imperative approaches in IaC

In a Declarative approach, you define the desired state of the system, including required resources and required properties, and the IaC tool configures it. The declarative approach also maintains a list of the current state of system objects, making infrastructure shutdown easier to manage.

Instead, the Imperative approach requires defining the specific commands required to achieve the desired configuration, and executing these commands in the correct order.
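To make the contrast concrete, here is a sketch: the declarative HCL below states *what* should exist, while the imperative commands (shown as comments) state *how* to create it step by step. The AMI ID and names are illustrative placeholders.

```hcl
# Declarative: describe the desired end state; the tool works out the steps.
resource "aws_instance" "web" {
  ami           = "ami-0fb653ca2d3203ac1" # placeholder AMI ID
  instance_type = "t2.micro"
}

# Imperative (for contrast): you spell out each step yourself, in order, e.g.
#   aws ec2 run-instances --image-id ami-0fb653ca2d3203ac1 --instance-type t2.micro
```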



What is Terraform?

With the IaC concept understood, we can define Terraform in short as a tool for writing templates that create infrastructure in the cloud according to the template's specifications.

Terraform defines infrastructure using a specialised configuration language called HashiCorp Configuration Language (HCL). HCL code typically lives in files with the .tf extension. Any directory that contains .tf files and has been initialized with the init command, which creates Terraform caches and a default local state, is considered a Terraform project.


The state mechanism is how Terraform tracks the resources that are actually deployed in the cloud. For redundancy and reliability, state is kept in backends (locally on disk, remotely in a cloud file storage service, or in purpose-built state management software). Using project workspaces, you can attach different states to the same configuration in the same backend, letting you stand up multiple independent instances of the same infrastructure. Every project has a default workspace that is used unless you explicitly create or switch to another one.
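For example, workspaces are managed from the CLI. The commands below are standard Terraform; the workspace name "staging" is an assumption for illustration.

```shell
terraform workspace new staging      # create and switch to a "staging" workspace
terraform workspace list             # show all workspaces; * marks the current one
terraform workspace select default   # switch back to the default workspace
```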


Install Terraform

This guide uses Terraform v1.2.8; newer versions may be available by the time you read this.


For Linux distributions


For Debian-based distributions


# Updating system

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

# Add HashiCorp GPG key.

wget -O- https://apt.releases.hashicorp.com/gpg | \
    gpg --dearmor | \
    sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
    
#  Verify the key's fingerprint
    
gpg --no-default-keyring \
    --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
    --fingerprint
    
#  Add the official HashiCorp repository to your system
    
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
    https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
    sudo tee /etc/apt/sources.list.d/hashicorp.list
    
# Update and install
    sudo apt update
    sudo apt-get install terraform


For Red Hat-based distributions



# Updating system
    sudo yum install -y yum-utils

#  Add the official HashiCorp repository to your system
    sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

# install
    sudo yum -y install terraform


For Windows


Using the Chocolatey package manager


choco install terraform

#  View version
terraform -version


For Mac OS (using homebrew)


#  install the HashiCorp tap, a repository of all our Homebrew packages
    brew tap hashicorp/tap

# install Terraform
    brew install hashicorp/tap/terraform

#  update brew
    brew update

#  upgrade Terraform
    brew upgrade hashicorp/tap/terraform

For more information, see the official Terraform installation documentation.

Beginner Guide to Terraform


Understanding Terraform File Structure


Following is the directory structure of a Terraform project named terraproject:


terraproject
├── .terraform
│   └── providers
│       └── registry.terraform.io
│           └── hashicorp
│               └── aws
│                   └── 3.74.1
│                       └── windows_amd64
├── .terraform.lock.hcl         # provider dependency lock file
├── terraform.tfstate           # current state of deployed resources
├── terraform.tfstate.backup    # backup of the previous state
├── terraform.tfvars            # variable values
└── terraform_example.tf        # actual template file


Terraform Commands


The following are the most commonly used commands in Terraform


1. Initialize Terraform project


terraform init

Prepare your working directory for other commands



2. Validate Terraform template


terraform validate

Check whether the configuration is valid



3. See what changes are needed to comply with the template


terraform plan

Show changes required by the current configuration


4. Apply the changes


terraform apply

Create or update infrastructure



5. View the current state of changes


terraform show

Show the current state or a saved plan



6. Remove any changes done by terraform


terraform destroy

Destroy previously-created infrastructure



7. Install or update Modules in terraform


terraform get
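For context, a module is pulled into a configuration with a module block; terraform get (or terraform init) then downloads or updates the referenced modules. A minimal sketch, in which the local path ./modules/network and the vpc_cidr input are hypothetical:

```hcl
module "network" {
  source = "./modules/network"   # hypothetical local module directory

  vpc_cidr = "10.0.0.0/16"       # value for the module's (assumed) input variable
}
```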



Setting Up first Terraform project with AWS


Here we are creating our first Terraform project, with which we can set up our infrastructure in the cloud within just a few minutes and without any human intervention.

The first step of this process is to create a separate directory where all the files related to terraform could be stored.


mkdir Terraform-Project 
cd Terraform-Project


Then we need to initialize Terraform in that directory so that it can perform operations and keep track of modules. For that, first create a file with the .tf extension so that Terraform can initialize modules and libraries according to the configuration in that file.


touch mytemplate.tf


After creating the file, open it in any text editor of your choice. Here we are using VS Code, but you can use nano or vim as you prefer.

The first thing to add in a Terraform file is the providers block; here we are using AWS:


terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
  

Also as per the use case, we can add more than one provider to that list.


terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    azure = {
      source  = "hashicorp/azuread"
      version = "~> 2.15.0"
    }
  }
}
  

After setting up providers, we can initialize Terraform to fetch all the required files that will be used during the next phases:


terraform init


The next task is to set up access keys for the cloud provider so that Terraform can access your account:


provider "aws" {
  region = "us-east-2" # ohio region  
 
  access_key = "***ASKCKEKHHLAW*****"          # Do not Share with Anyone 
  secret_key = "****ZJ8v****/W7YnN2****4VIVvrTvyroq*****"
}
  
Do not share credentials with anyone, and do not push code to a public Git repository if it contains any credentials.
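As a safer alternative, you can omit the keys from the code entirely: the AWS provider will read credentials from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables or the shared credentials file, assuming they are configured on your machine.

```hcl
provider "aws" {
  region = "us-east-2"
  # No keys in code: the provider falls back to the AWS_ACCESS_KEY_ID /
  # AWS_SECRET_ACCESS_KEY environment variables, or to ~/.aws/credentials.
}
```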


After setting up the provider configuration, we can set up the resource template.


Here we are writing a simple configuration that creates an EC2 instance:


resource "aws_instance" "MyInstance" {    # resource type , resource name 
  ami           = "ami-0fb653ca2d3203ac1" # AMI Id from AWS , here for Ubuntu 20.04 LTS
  instance_type = "t2.micro"              # Instance Type
  key_name = "my-aws-keys"                # first generate then only can be used
  associate_public_ip_address = true      # By default , Public_Ip = false

# Run commands immediately after initialization
   user_data = <<-EOF
    #!/bin/bash
    sudo apt-get update
   EOF

  tags = {
    Name = "SomeThing"                   # name for the instance
  }

}


After setting up resources, first validate the code:


terraform validate


If valid, view the resource plan to get an overview of what Terraform is going to perform:


terraform plan



If you are satisfied with the plan, apply the changes:


terraform apply
#OR  
terraform apply -auto-approve


With the -auto-approve flag, we give Terraform permission to create or update resources without explicitly asking for confirmation before the actual apply.
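Another common pattern, sketched below, is to save the reviewed plan to a file and then apply exactly that plan, so that what you reviewed is what gets applied:

```shell
terraform plan -out=tfplan   # save the reviewed plan to a file
terraform apply tfplan       # apply exactly the saved plan, without a prompt
```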



Outputs

Terraform provides a way to get outputs from the resources it creates. Using this, we can fetch any property of a resource created by Terraform and print it after a successful apply, so it can be referred to for other purposes.

E.g. we can print the instance's public IP so that we can use it to connect to the instance over SSH via its public IP or domain name.


# output "NAME-TO-DISPLAY" {
#   description = "Some description"
#   value       = RESOURCE-TYPE.RESOURCE-NAME.PROPERTY
# }

# E.g.
output "Public-ip" {
  description = "Public IP address of the EC2 instance"
  value       = aws_instance.MyInstance.public_ip
}
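After a successful apply, output values can also be read back from the CLI:

```shell
terraform output             # list all outputs from the current state
terraform output Public-ip   # print a single output value
terraform output -json       # machine-readable form, useful in scripts
```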
  

Variables

Terraform provides a way to define variables in the code so that we can reuse them and change them according to the use case. This increases the reusability of the code and makes it more dynamic.


A variable can be used multiple times in the code, which reduces the overhead of editing the code each time a value changes.


#   Syntax

# variable "VAR-NAME" {
#   default     = "VAR-VALUE"        # optional
#   type        = VALUE-TYPE         # optional, e.g. string, number, bool, any (default)
#   description = "Some description" # optional
# }

# E.g.

variable "myvar" {
  default = "InstanceA"
  type = string
  description = "value of myvar"
}

# list of variables E.g. 
variable "mylist" {
  default = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
  type = list(string)
  description = "value of lists"
}

# variables can also be typed as map, object, set, tuple, etc.

# Tuple e.g.
variable "mytuple" {
  default = ["10.0.0.0/24", true, 24]
  type    = tuple([string, bool, number])
}
  

A tuple can hold multiple values of different types.
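As a sketch of the map type mentioned above (the variable name and AMI values are illustrative):

```hcl
variable "instance_amis" {
  type        = map(string)
  description = "AMI ID per region (illustrative values)"
  default = {
    us-east-1 = "ami-11111111"
    us-east-2 = "ami-22222222"
  }
}

# accessed as var.instance_amis["us-east-2"]
```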


We can reference a variable in the code as var.VARIABLE-NAME (or as ${var.VARIABLE-NAME} inside strings).


# E.g.
resource "aws_vpc" "Myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    "Name" = var.myvar   # gets values as "InstanceA"
  }

}

# Accessing list variables (a separate example; resource names must be unique within a project)
resource "aws_vpc" "Myvpc2" {
  cidr_block = var.mylist[1]  # gets the value "10.0.1.0/24"
}
  

Variables can be set in multiple ways.

  • using interactive mode at command line

If no default value is set, Terraform will interactively prompt for the variable's value at apply time.

  • using command line argument

We can pass the value of the variable at apply time using a command-line argument:


terraform apply -var 'myvar=InstanceB'

# terraform apply -var 'VAR-NAME=VAR-VALUE'

# For multiple variables

terraform apply -var 'myvar=InstanceB' -var 'mylist=["10.0.0.0/24","10.0.1.0/24"]'


  • as environment variable

We can also set the value of a variable as an environment variable, using the TF_VAR_ prefix:


export TF_VAR_myvar="InstanceC"
export TF_VAR_mylist='["10.0.0.0/24","10.0.1.0/24"]'

# export TF_VAR_VAR-NAME="VAR-VALUE"

# then run the apply command

terraform apply


  • using terraform.tfvars file

We can create a file named terraform.tfvars and set variable values in it.

The file name can be anything with a .tfvars extension, but Terraform automatically reads only the file named terraform.tfvars; otherwise we have to specify the file name with the -var-file argument.



# terraform.tfvars

myvar = "InstanceD"


# if file name is not terraform.tfvars

terraform apply -var-file="myvars.tfvars"

# Otherwise

terraform apply


A best practice is to define variables in a separate file and use them in the main file; during apply, Terraform builds the configuration from every file with a .tf extension in the directory.

Sometimes there may be multiple values supplied for a variable, so Terraform uses a precedence order to decide which value to use.


Following is the precedence order, from highest to lowest:

  1. -var and -var-file command-line arguments

  2. *.auto.tfvars or *.auto.tfvars.json files (processed in lexical order of their filenames)

  3. the terraform.tfvars.json file

  4. the terraform.tfvars file

  5. TF_VAR_* environment variables


Sample Templates To Create Resource in AWS


VPC


resource "aws_vpc" "customvpc" {   # this name is referenced by the resources below
  cidr_block = "10.0.0.0/16"    # a short prefix like /16 leaves room for many subnets and IPs

  tags = {
    "Name" = "my-custom-vpc-name"
  }

}


Subnet


resource "aws_subnet" "my-custom-subnet" {
  vpc_id     = aws_vpc.customvpc.id      # reference to the VPC created above (vpc_id is required)

  cidr_block = "10.0.1.0/24"   # should be within the VPC's CIDR range

  availability_zone = "us-east-2a"       # availability zone of the subnet

  tags = {
    "Name" = "my-custom-subnet-name"
  }

}


Internet Gateway


resource "aws_internet_gateway" "my-custom-ig" {
  vpc_id = aws_vpc.customvpc.id 

  # reference to the VPC created above
  # if vpc_id is omitted, the gateway is created detached and can be attached to a VPC later

  tags = {
    "Name" = "my-custom-ig-name"
  }

}


Route Table


resource "aws_route_table" "my-custom-rt" {
  vpc_id = aws_vpc.customvpc.id

  route {
    cidr_block = "0.0.0.0/0"     # route all internet-bound traffic via the gateway
    gateway_id = aws_internet_gateway.my-custom-ig.id     # reference to the internet gateway
  }



  tags = {
    Name = "my-custom-rt-name"
  }
}


# Associate the route table with the subnet

resource "aws_route_table_association" "my-custom-rt-association" {
  subnet_id      = aws_subnet.my-custom-subnet.id
  route_table_id = aws_route_table.my-custom-rt.id
}
  

Security Group


resource "aws_security_group" "sg-custom" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.customvpc.id

# inbound traffic, from the internet to the instance

# for a port range
  ingress {
    description = "TCP ports 0-1024 from anywhere"
    from_port   = 0
    to_port     = 1024
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# for all ports
  ingress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1" # -1 means all protocols
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

# for a specific port
  ingress {
    description = "Allow HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# outbound traffic, from the instance to the internet
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1" # -1 means all protocols
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_all_traffic"
  }
}
  

EC2 Instance


resource "aws_instance" "MyInstance" {      
  ami           = "ami-0fb653ca2d3203ac1"    
  instance_type = "t2.micro"                               
  key_name = "rahulkeyaws"   # first generate then only can be used

# we can attach multiple security groups
  vpc_security_group_ids = [ aws_security_group.sg-custom.id ]

# assigning subnet to the instance
subnet_id  = aws_subnet.my-custom-subnet.id

# assigning a public IP to the instance; set to false if you plan to attach an Elastic IP
  associate_public_ip_address = true       # defaults to the subnet's setting (usually false)

# Run commands immediately after initialization
   user_data = <<-EOF
    #!/bin/bash
    sudo apt-get update
   EOF

# set an IAM role (instance profile) on the instance
  iam_instance_profile = aws_iam_instance_profile.my-custom-iam-profile.name  # or reference an existing profile by name



  tags = {
    Name = var.myvar
  }

}


EBS Volume


resource "aws_ebs_volume" "data-vol" {
 availability_zone = "us-east-2a"
 size = 10      # size in GiB
 tags = {
        Name = "data-volume"
 }

}

# Attaching EBS Volume to EC2 Instance

resource "aws_volume_attachment" "vattach" {
 device_name = "/dev/sdc"
 volume_id   = aws_ebs_volume.data-vol.id
 instance_id = aws_instance.MyInstance.id
}
 

Elastic IP


resource "aws_eip" "lb" {
  vpc      = true
  instance = aws_instance.MyInstance.id
   associate_with_private_ip = aws_instance.MyInstance.private_ip
  depends_on     = [aws_internet_gateway.my-custom-ig]
}
  

S3 Bucket


 # Creating private S3 Bucket

 resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket"
  acl    = "private"        # by default it is private

  # Versioning is not enabled by default

  versioning {
    enabled = true
  }

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

# Note: the separate aws_s3_bucket_acl resource is used with AWS provider v4+;
# with the ~> 3.0 provider pinned earlier, the inline acl argument above is used instead.
resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.b.id
  acl    = "private"
}

# Creating public S3 Bucket with static website hosting

resource "aws_s3_bucket" "website" {   # resource names must be unique, so not "b" again
  bucket = "s3-website-test.hashicorp.com"
  acl    = "public-read"
  policy = file("policy.json")

  website {
    index_document = "index.html"
    error_document = "error.html"

    routing_rules = <<EOF
[{
    "Condition": {
        "KeyPrefixEquals": "docs/"
    },
    "Redirect": {
        "ReplaceKeyPrefixWith": "documents/"
    }
}]
EOF
  }
}
  

For more details, refer to the Terraform AWS Provider documentation.



Conclusion

As you can see, Terraform offers a wide range of features to assist with deployment, maintenance, and interaction with your infrastructure. Think hard about how you want to group resources into modules, and you'll find Terraform easy to use and maintain. Try to keep your Terraform code as DRY (Don't Repeat Yourself) as possible, and leverage automation when working with a larger group of collaborators.


Keep Learning and Keep Sharing !! 😇😎📓🖥️