INFRASTRUCTURE:
the resources used to run our application on the cloud.
ex: EC2, S3, ELB, VPC, ASG
in general we used to deploy infra manually.
Manual:
1. time consuming
2. repetitive manual work
3. prone to mistakes
Automate -- > Terraform -- > code -- > HCL (HashiCorp Configuration Language)
it is a tool used to automate infrastructure.
it is free to use, but no longer open source (it moved to the Business Source License in 2023).
it is platform independent.
it was released in 2014.
who: Mitchell Hashimoto
owned: HashiCorp -- > recently acquired by IBM.
terraform is written in the Go language.
We can call terraform an IaC tool.
HOW IT WORKS:
terraform uses code to automate the infra.
we use HCL : HashiCorp Configuration Language.
WRITE
PLAN
APPLY
IaC: Infrastructure as Code.
Code --- > execute --- > Infra
ADVANTAGES:
1. Reusable
2. Time saving
3. Automation
4. Avoiding mistakes
5. Dry run
DRY: DON'T REPEAT YOURSELF
CLOUD ALTERNATIVES:
CFT (CloudFormation) = AWS
ARM (Azure Resource Manager) = AZURE
GDM (Google Cloud Deployment Manager) = GOOGLE
TERRAFORM = ALL CLOUDS
SOME OTHER ALTERNATIVES:
PULUMI
ANSIBLE
CHEF
PUPPET
OpenTofu
TERRAFORM VS ANSIBLE:
Terraform creates the servers (provisioning),
and those servers are then configured by Ansible (configuration management).
Terraform can also be used for non-cloud & on-premises infrastructure.
While Terraform is known for being cloud-agnostic and supporting public clouds such as AWS, Azure,
GCP, it can also be used for on-prem infrastructure including VMware vSphere and OpenStack.
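For example, a minimal sketch of an on-prem provider block for vSphere (the credentials and server address below are placeholders, not values from these notes):
provider "vsphere" {
  user                 = "administrator@vsphere.local"   # placeholder
  password             = "changeme"                      # placeholder; keep real secrets out of code
  vsphere_server       = "vcenter.example.local"         # placeholder
  allow_unverified_ssl = true
}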
INSTALLING TERRAFORM:
sudo yum install -y yum-utils shadow-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
attach an IAM role with the required (full) access to the instance so Terraform can call the AWS APIs.
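A quick check that the installation worked (the version shown will vary):
terraform -version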
MAIN ITEMS IN FILE:
blocks
labels (name, type of resource)
arguments
Configuration files:
they contain the resource configuration.
here we write the inputs for our resources,
and based on those inputs terraform creates the real-world resources.
extension is .tf
mkdir terraform
cd terraform
vim main.tf
provider "aws" {
region = "us-east-1"
resource "aws_instance" "one" {
ami = "ami-03eb6185d756497f8"
instance_type = "t2.micro"
I : INIT
P : PLAN
A : APPLY
D : DESTROY
TERRAFORM COMMANDS:
terraform init : initializes the provider plugins and the backend
it stores the plugin information in the .terraform folder
without plugins we can't create resources.
each provider has its own plugins.
we need to download the plugins only once.
terraform plan : creates an execution plan
it takes the inputs given by the user and plans the resource creation
if we haven't given inputs for some fields it takes the default values.
terraform apply : creates the resources
as per the inputs given in the configuration file, it creates the resources in the real world.
terraform destroy : deletes the resources
provider "aws" {
region = "us-east-1"
resource "aws_instance" "one" {
count = 5
ami = "ami-03eb6185d756497f8"
instance_type = "t2.micro"
terraform apply --auto-approve
terraform destroy --auto-approve
STATE FILE: used to store the information about the resources created by terraform
and to track resource activities.
in real time, the entire resource info is kept in the state file.
we need to keep it safe & secure;
if we lose this file we can't track the infra.
Command:
terraform state list
terraform -target: used to create/destroy a specific resource
terraform state list
single target: terraform destroy -auto-approve -target="aws_instance.one[3]"
multi targets: terraform destroy --auto-approve -target="aws_instance.one[1]" -target="aws_instance.one[2]"
TERRAFORM VARIABLES:
in real time we keep all the variables in variable.tf to maintain the variables easily.
main.tf
provider "aws" {
region = "us-east-1"
resource "aws_instance" "one" {
count = var.instance_count
ami = "ami-0b41f7055516b991a"
instance_type = var.instance_type
variable.tf
variable "instance_type" {
description = "*"
type = string
default = "t2.micro"
variable "instance_count" {
description = "*"
type = number
default = 2
terraform apply --auto-approve
terraform destroy --auto-approve
TERRAFORM FMT:
used to apply standard alignment and indentation to terraform files.
=================================================================
Terraform tfvars:
When we have multiple configurations for terraform to create resources,
we use tfvars files to store the different sets of values.
At execution time we pass the tfvars file to the command, and it applies the values from that file (see the example commands after prod.tfvars below).
cat main.tf
provider "aws" {
region = "us-east-1"
resource "aws_instance" "one" {
count = var.instance_count
ami = "ami-0e001c9271cf7f3b9"
instance_type = var.instance_type
tags = {
Name = var.instance_name
cat variable.tf
variable "instance_count" {
variable "instance_type" {
}
variable "instance_name" {
cat dev.tfvars
instance_count = 1
instance_type = "t2.micro"
instance_name = "dev-server"
cat test.tfvars
instance_count = 2
instance_type = "t2.medium"
instance_name = "test-server"
cat prod.tfvars
instance_count = 3
instance_type = "t2.large"
instance_name = "prod-server"
TERRAFORM CLI:
cat main.tf
provider "aws" {
}
resource "aws_instance" "one" {
ami = "ami-00b8917ae86a424c9"
instance_type = var.instance_type
tags = {
Name = "raham-server"
cat variable.tf
variable "instance_type" {
METHOD-1:
terraform apply --auto-approve
terraform destroy --auto-approve
METHOD-2:
terraform apply --auto-approve -var="instance_type=t2.micro"
terraform destroy --auto-approve -var="instance_type=t2.micro"
NOTE: If you want to pass a single variable from the CLI you can use -var; if you want to pass multiple
variables, create .tfvars files and use -var-file.
TERRAFORM ENV VARIABLES:
export TF_VAR_instance_count=1
export TF_VAR_instance_name="dummy"
export TF_VAR_instance_type="t2.micro"
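Terraform reads any environment variable prefixed with TF_VAR_ as the value of the variable with the matching name, so after the exports above a plain apply picks them up automatically:
terraform apply --auto-approve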
TERRAFORM VARIABLE precedence:
cat main.tf
provider "aws" {
region = "us-east-1"
resource "aws_instance" "one" {
ami = var.ami
instance_type = "t2.micro"
tags = {
Name = "raham"
cat variable.tf
variable "ami" {
default = ""
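The precedence order (lowest to highest) is: environment variables (TF_VAR_*), the terraform.tfvars file, *.auto.tfvars files, and finally -var / -var-file on the command line, so a CLI value always wins. A quick way to see this with the variable above (both AMI ids below are placeholders):
export TF_VAR_ami="ami-00000000000000000"
terraform plan -var="ami=ami-11111111111111111"     # the -var value overrides the environment variable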
TERRAFORM OUTPUTS:
Whenever we create a resource with Terraform, if we want to print any attribute of that resource we can
use the output block; it prints the specific output as per our requirement.
provider "aws" {
resource "aws_instance" "one" {
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.micro"
tags = {
Name = "raham-server"
output "raham" {
value = [aws_instance.one.public_ip, aws_instance.one.private_ip, aws_instance.one.public_dns]
TO GET THE COMPLETE OUTPUTS:
output "raham" {
  value = aws_instance.one
}
Note: when we change only the output block, terraform executes just that block;
the remaining blocks are not executed because there are no changes in them.
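After apply, the outputs can also be read back at any time with the terraform output command (the output name matches the block above):
terraform output
terraform output raham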
TERRAFORM TAINT & UNTAINT:
it is used to recreate specific resources in infrastructure.
Why:
if i have an ec2 -- > crashed
ec2 -- > code -- > main.tf
now, to recreate this EC2 instance separately, we need to taint the resource.
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "one" {
ami = "ami-0195204d5dce06d99"
instance_type = "t2.micro"
tags = {
Name = "raham"
resource "aws_s3_bucket" "two" {
bucket = "rahamshaik8e3huirfh9uf2f"
terraform apply --auto-approve
terraform state list
terraform taint aws_instance.one
terraform apply --auto-approve
TO TAINT: terraform taint aws_instance.one
TO UNTAINT: terraform untaint aws_instance.one
TERRAFORM REPLACE:
in recent Terraform versions, taint is deprecated and -replace is the recommended way to force recreation.
terraform apply --auto-approve -replace="aws_instance.one[0]"
============================================================================
TERRAFORM LOCALS: a block used to define values.
once you define a value in this block you can use it multiple times;
changing the value in the locals block is replicated to all resources that use it.
simply define the value once and use it many times.
provider "aws" {
locals {
env = "prod"
resource "aws_vpc" "one" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "${local.env}-vpc"
resource "aws_subnet" "two" {
vpc_id = aws_vpc.one.id
cidr_block = "10.0.0.0/24"
tags = {
Name = "${local.env}-subnet"
resource "aws_instance" "three" {
subnet_id = aws_subnet.two.id
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.micro"
key_name = "jrb"
tags = {
Name = "${local.env}-server"
Note: the values are updated when we change them in the same workspace.
WORKSPACES:
used to create infra for multiple environments.
it isolates each environment;
if we work on the dev env it won't affect the test env.
the default workspace is called "default".
all the resources we create in terraform are stored in the default workspace by default.
the state files of non-default workspaces are stored in the terraform.tfstate.d folder.
terraform workspace list : to list the workspaces
terraform workspace new dev : to create workspace
terraform workspace show : to show current workspace
terraform workspace select dev : to switch to dev workspace
terraform workspace delete dev : to delete dev workspace
NOTE:
1. we need to empty a workspace before deleting it
2. we can't delete the current workspace; we must switch to another and then delete
3. we can't delete the default workspace
EXECUTION:
cat main.tf
provider "aws" {
region = "us-east-1"
resource "aws_instance" "three" {
count = var.instance_count
ami = "ami-03eb6185d756497f8"
instance_type = var.instance_type
tags = {
Name = var.instance_name
cat variable.tf
variable "instance_count" {
variable "instance_type" {
variable "instance_name" {
cat dev.tfvars
instance_count = 1
instance_type = "t2.micro"
instance_name = "dev-server"
cat test.tfvars
instance_count = 2
instance_type = "t2.medium"
instance_name = "test-server"
cat prod.tfvars
instance_count = 3
instance_type = "t2.large"
instance_name = "prod-server"
terraform workspace new dev
terraform apply -auto-approve -var-file="dev.tfvars"
terraform workspace new test
terraform apply -auto-approve -var-file="test.tfvars"
terraform workspace new prod
terraform apply -auto-approve -var-file="prod.tfvars"
s3 Backend setup:
by default the state file is stored locally in the terraform.tfstate file.
if we store the state file locally, only one person can access it.
so if we want to give access to others we need to set up a remote backend.
it stores the terraform state file in a bucket.
when we modify the infra it updates the state file in the bucket.
EXAMPLES:
S3
K8S
CONSUL
AZURE
TERRAFORM CLOUD
NOTE: GITHUB IS NOT SUPPORTED AS BACKEND.
why: the state file is very important in terraform;
without the state file we can't track the infra,
and if we lose it we can't manage the infra.
the backup file (terraform.tfstate.backup) is a backup of the terraform.tfstate file. Terraform automatically creates a backup of the
state file before making any changes to it. This ensures that you can recover from a
corrupted or lost state file.
terraform state list : to list the resources
terraform state show aws_subnet.two : to show specific resource info
terraform state mv aws_subnet.two aws_subnet.three : to move state info from one to another
terraform state rm aws_subnet.three : to remove state information of a resource
terraform state pull : to pull state file info from backend
terraform init -migrate-state : to migrate the existing state to the newly configured backend
CODE:
provider "aws" {
region = "us-east-1"
terraform {
backend "s3" {
bucket = "terrastatebyucket007"
key = "terraform.tfstate"
region = "us-east-1"
locals {
env = "${terraform.workspace}"
resource "aws_vpc" "one" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "${local.env}-vpc"
resource "aws_subnet" "two" {
vpc_id = aws_vpc.one.id
cidr_block = "10.0.0.0/24"
tags = {
Name = "${local.env}-subnet"
}
resource "aws_instance" "three" {
subnet_id = aws_subnet.two.id
ami = "ami-0e001c9271cf7f3b9"
instance_type = "t2.micro"
tags = {
Name = "${local.env}-server"
========================================================================
META ARGUMENTS:
DEPENDS_ON: One resource creation depends on another resource.
used to manage dependencies of resources.
EXPLICIT DEPENDENCY: when we use depends_on
IMPLICIT DEPENDENCY: when we don't use depends_on (terraform infers the order from references between resources)
provider "aws" {
provider "aws" {
region = "us-east-1"
resource "aws_instance" "two" {
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.micro"
tags = {
Name = "raham-server"
resource "aws_s3_bucket" "one" {
bucket = "dummyawsbuckeet0088ndehd"
depends_on = [aws_instance.two]
COUNT: count is used to create identical objects that have the same configuration.
provider "aws" {
resource "aws_instance" "three" {
count =3
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.medium"
tags = {
Name = "dev-server" (TO GIVE NUMBERS ADD -- > -${count.index+1})
provider "aws" {
resource "aws_instance" "three" {
count = length(var.instance_type)
ami = "ami-00b8917ae86a424c9"
instance_type = var.instance_type[count.index]
tags = {
Name = var.instance_name[count.index]
variable "instance_type" {
default = ["t2.micro", "t2.medium", "t2.large"]
variable "instance_name" {
default = ["dev-server", "test-server", "prod-server"]
FOR_EACH:
resource "aws_instance" "two" {
for_each = toset(["dev-server", "test-server", "prod-server"])
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.micro"
tags = {
Name = "${each.key}"
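for_each also works with a map, where each.key gives the key and each.value gives the value; a minimal sketch reusing the same AMI as above (the map entries are only examples):
resource "aws_instance" "per_env" {
  for_each      = { dev = "t2.micro", test = "t2.medium", prod = "t2.large" }
  ami           = "ami-00b8917ae86a424c9"
  instance_type = each.value
  tags = {
    Name = "${each.key}-server"
  }
}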
LIFECYCLE: used to change resource behaviour.
PREVENT DESTROY: used to prevent resources from being destroyed.
provider "aws" {
resource "aws_instance" "two" {
ami = "ami-0d7a109bf30624c99"
instance_type = "t2.nano"
tags = {
Name = "lucky-server"
lifecycle {
prevent_destroy = true
CREATE BEFORE DESTROY:
By default, when terraform recreates an object it first destroys the existing object
and then creates the new one.
With create_before_destroy, the new replacement object is created first and the existing resource is destroyed afterwards.
NOTE: Change ami-id and run apply to see the changes
provider "aws" {
resource "aws_instance" "two" {
ami = "ami-0d7a109bf30624c99"
instance_type = "t2.nano"
tags = {
Name = "lucky-server"
lifecycle {
create_before_destroy = true
IGNORE CHANGES: Whenever we make changes to the infrastructure manually, running terraform
plan or terraform apply would normally bring those values back in line with the configuration. If we want to ignore the
manual changes made to the infrastructure, we can use ignore_changes.
NOTE: It is mainly used to ignore manual changes applied to the infrastructure; if you apply any
change to the existing infrastructure manually, terraform will completely ignore it at runtime.
provider "aws" {
resource "aws_instance" "two" {
ami = "ami-0d7a109bf30624c99"
instance_type = "t2.nano"
tags = {
Name = "lucky-server"
lifecycle {
ignore_changes = all
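ignore_changes can also take a list of specific attributes instead of all; a sketch based on the same resource, ignoring only manual tag changes:
resource "aws_instance" "two" {
  ami           = "ami-0d7a109bf30624c99"
  instance_type = "t2.nano"
  tags = {
    Name = "lucky-server"
  }
  lifecycle {
    ignore_changes = [tags]   # only tag drift is ignored; other manual changes are still corrected
  }
}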
======================================================================
Providers:
Terraform supports thousands of providers; in real time we mostly use official and partner providers rather than
the ones maintained by the community.
1. OFFICIAL: maintained by HashiCorp
2. PARTNER: written & maintained by third-party companies
3. COMMUNITY: maintained by individuals
GITHUB:
provider "github" {
token = "***********************"
resource "github_repository" "example_repo" {
name = "example-repo"
description = "This is an example repository created with Terraform"
LOCAL:
provider "local" {
resource "local_file" "one" {
filename = "abc.txt"
content = "hai all my file is created by terraform"
}
NOTE: For every provider in Terraform we need to download the plugins by running terraform init.
every provider's plugins are stored in the .terraform folder.
providers are not stored locally by default;
only after we download them are the plugins stored locally.
VERSION CONSTRAINTS:
we can change the versions of provider plugins.
Whenever AWS releases new features, the old provider version might not support them, so to
work with the new resources we download newer provider plugins; in real time
we update the plugins based on our requirements.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.41.0"
    }
  }
}

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "2.2.2"
    }
  }
}
NOTE: in a single file we can write configuration for multiple providers,
but the same provider's version constraint should be declared only once.
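Version constraints can also use operators such as >=, <=, and ~> (the pessimistic operator); a sketch assuming the hashicorp/aws provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # allows any 5.x release, but not 6.0
    }
  }
}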
terraform import: if we create a resource manually, with the import command we can bring that
resource under terraform management.
import {
  to = aws_instance.example
  id = var.instance_id
}

terraform plan -generate-config-out=ec2.tf
terraform apply
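The older CLI-only form of import also works when a matching resource block already exists in the code (the instance id below is a placeholder):
terraform import aws_instance.example i-0123456789abcdef0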
TERRAFORM REFRESH:
when we modify the real-world infrastructure manually, those changes are not automatically replicated to the state file.
so we run terraform refresh: it refreshes the state file by comparing the real-world values with the state file values,
and if the real values have been modified or changed, the changes are replicated into the state file after the command runs.
terraform refresh
when we run plan, apply or destroy, a refresh is performed automatically.
Note: change something manually and check it.
DISADVANTAGE: sometimes it can cause existing infrastructure to drop out of the state due to small manual
changes, so in real time we never run this command manually.
TERRAFORM MODULES:
used for reusability.
it divides the code into a folder structure.
A module that has been called by another module is often referred to as a child module.
we can publish modules for others to use, and to use modules that others have published.
These modules are free to use, and Terraform can download them automatically if you specify the
appropriate source and version in a module call block.
PATH FOR MODULE CACHE: .terraform/modules/
cat main.tf
provider "aws" {
module "my_instance" {
source = "./modules/instances"
module "s3_module" {
source = "./modules/buckets"
mkdir -p modules/instances
mkdir -p modules/buckets
cat modules/buckets/main.tf
resource "aws_s3_bucket" "abcd" {
bucket = "devopsherahamshaik0099889977"
}
cat modules/instances/main.tf
resource "aws_instance" "three" {
count =2
ami = "ami-00b8917ae86a424c9"
instance_type = "t2.medium"
key_name = "yterraform"
tags = {
Name = "n.virginia-server"
terraform fmt -recursive : applies formatting to the files in all sub-folders
===========================================================================
DYNAMIC BLOCK: it is used to reduce the length of the code and to reuse code in a loop.
provider "aws" {
locals {
ingress_rules = [{
port = 443
description = "Ingress rules for port 443"
},
port = 80
description = "Ingree rules for port 80"
},
port = 8080
description = "Ingree rules for port 8080"
}]
resource "aws_instance" "ec2_example" {
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "Terraform EC2"
resource "aws_security_group" "main" {
egress = [
cidr_blocks = ["0.0.0.0/0"]
description = "*"
from_port =0
ipv6_cidr_blocks = []
prefix_list_ids = []
protocol = "-1"
security_groups = []
self = false
to_port =0
}]
dynamic "ingress" {
for_each = local.ingress_rules
content {
description = "*"
from_port = ingress.value.port
to_port = ingress.value.port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
tags = {
Name = "terra sg"
PROVISIONERS: used to execute commands or scripts on terraform-managed resources, both locally
and remotely.
LOCAL-EXEC: used to execute a command or script on the local machine (where terraform is installed).
it executes the command when the resource is created.
provider "aws" {
resource "aws_instance" "one" {
ami = "ami-04823729c75214919"
instance_type = "t2.micro"
tags = {
Name = "rahaminstance"
}
provisioner "local-exec" {
command = "echo my name is raham"
REMOTE-EXEC: used to run commands on remote servers.
once the server is created it executes the commands and scripts for installing software,
configuring it, and even for deployments.
provider "aws" {
resource "aws_instance" "one" {
ami = "ami-04823729c75214919"
instance_type = "t2.micro"
key_name = "yterraform"
tags = {
Name = " rahaminstance"
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install git maven tree httpd -y",
"touch file1"
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
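There is also a file provisioner for copying files to the remote server before running commands; a sketch that would go inside the aws_instance resource above (the file names are placeholders, and the same SSH connection details are assumed):
  provisioner "file" {
    source      = "app.conf"        # local file (placeholder)
    destination = "/tmp/app.conf"   # path on the remote server

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }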
TERRAFORM CLOUD: used to create resources from a GUI.
1. create account
2. create organization
3. create workspace
4. add VCS -- > GitHub -- > username & password -- > select repo
5. start a new plan
6. variables -- > add variables (e.g. access keys as env vars) -- > run a new plan
TERRAFORM VAULT: used to produce dynamic secrets.
provider "aws" {
locals {
ingress_rules = [{
port = 443
description = "Ingress rules for port 443"
},
port = 80
description = "Ingree rules for port 80"
},
port = 8080
description = "Ingree rules for port 8080"
}]
resource "aws_instance" "ec2_example" {
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "Terraform EC2"
resource "aws_security_group" "main" {
egress = [
cidr_blocks = ["0.0.0.0/0"]
description = "*"
from_port =0
ipv6_cidr_blocks = []
prefix_list_ids = []
protocol = "-1"
security_groups = []
self = false
to_port =0
}]
dynamic "ingress" {
for_each = local.ingress_rules
content {
description = "*"
from_port = ingress.value.port
to_port = ingress.value.port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
tags = {
Name = "terra sg"
PROVISIONERS: used to execute commands or scripts in terraform managed resources on both local
and remote.
LOCAL-EXEC: used to execute a command or script on local machine (where terraform is installed)
it will execute the command when resource is created
provider "aws" {
resource "aws_instance" "one" {
ami = "ami-04823729c75214919"
instance_type = "t2.micro"
tags = {
Name = "rahaminstance"
provisioner "local-exec" {
command = "echo my name is raham"
remote exec: is used to run the commands on remote servers.
once the server got created it will execute the commands and scripts for installing the softwares and
configuring them and even for deployment also.
provider "aws" {
resource "aws_instance" "one" {
ami = "ami-04823729c75214919"
instance_type = "t2.micro"
key_name = "yterraform"
tags = {
Name = " rahaminstance"
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install git maven tree httpd -y",
"touch file1"
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
TERRAFORM CLOUD: used to create resourec form gui.
TERRAFORM CLOUD: used to create resourec form gui.
create account
create organization
Create a new Workspace
Version Control Workflow
GitHub -- > user name & password -- > select repo
new run
fail
add variables like keys
new run
TERRAFORM VAULT: used to produce the dyamic secrets.
TERRAFORM MAP:
IT IS A VARIABLE TYPE USED TO ASSIGN KEY & VALUE PAIRS TO A RESOURCE.
provider "aws" {
resource "aws_instance" "ec2_example" {
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
tags = var.instance_tags
variable "instance_tags" {
type = map(any)
default = {
Name = "app-server"
Env = "dev"
Client = "swiggy"
=================================================
SONARQUBE:
install SonarQube using the script below:
https://github.com/RAHAMSHAIK007/all-setups.git
port: 9000
generate a token
add project -- > manual -- > name -- > token -- >
1. download plugin and restart Jenkins
2. configure tool
dashboard -- > manage Jenkins -- > system -- > SonarQube -- > name: sonarqube & url: ----- & add
secret text.
create a project in SonarQube
generate a token and give to Jenkins.
3. configure maven tool
dashboard -- > tools -- > maven -- > name: maven -- > save
CODE:
node {
    stage('checkout') {
        git 'https://github.com/devopsbyraham/jenkins-java-project.git'
    }
    stage('build') {
        sh 'mvn compile'
    }
    stage('test') {
        sh 'mvn test'
    }
    stage('artifact') {
        sh 'mvn package'
    }
    stage('code quality') {
        withSonarQubeEnv('sonarqube') {
            def mavenHome = tool name: "maven", type: "maven"
            def mavenCMD = "${mavenHome}/bin/mvn"
            sh "${mavenCMD} sonar:sonar"
        }
    }
}
K8SGPT:
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb
CONFIGURE:
k8sgpt generate
COPY AND PASTE IN THE BROWSER: https://beta.openai.com/account/api-keys
generate a token
k8sgpt auth add --backend openai --model gpt-3.5-turbo
copy paste the token
kubectl config current-context
k8sgpt analyze
k8sgpt analyze -o json