
LTIMindTree

Questions:

1. So, can you tell me about your role, responsibilities, and your regular activities?
2. Coming to the CI/CD pipeline: did you get a chance to create a pipeline from
scratch, or were you maintaining an existing pipeline?
3. While creating a pipeline, what are the different agents we have in Azure DevOps to
run those pipelines?
4. Tell me about self-hosted and Azure-hosted agents. What is the difference between
them, and in which scenarios would we use each of these agents?
5. We have around 55 pipelines, and most of them use the same set of variables, like
subscription or location or some other values. I don't want to repeat these values in
each and every pipeline, so how would you optimise this?
6. I have a deployment pipeline that deploys to Dev, QA, UAT and Prod. Once the Dev
deployment is completed successfully, I need my senior architect to just review and
approve it. Once that is done, the changes have to be deployed to the next
environment. So, how would you set up this sequence in the pipeline?
7. What is the difference between git pull and git fetch?
8. What is the git stash command used for?
9. What is the state file in terraform?
10. What is the best approach to store the state file?
11. I have a team, and I don't want any of them to go and destroy the resources that
were created by Terraform. How will you ensure this in your configuration file?
12. I want to create a virtual machine in a resource group; the resource group already
exists in the Azure portal. The configuration file should refer to that existing
resource group and then create the VM in it. So, how can you make that
reference in your main.tf file?
13. I want to create around 50 storage accounts for my project requirement, with
almost the same SKU. So, how can you do that in your configuration file?
14. In my main.tf I am passing the location value as East US, I also have a variables.tf
where I give a default value of West US, and we also have a terraform.tfvars file
with a location value. When you apply the Terraform, which value will it take?
15. What are the workspaces in terraform?
16. I have a VM on the Standard v2 series and I have to upgrade it to a newer V series.
So, how will you do that, what will you do to reduce the downtime of the VM, and
what happens to the application on that VM?
17. We need to deploy a web app which requires high availability and scalability.
In this case, how will you design your VM infrastructure?
18. What is the difference between site-to-site and point-to-site VPN connection?
19. While creating a storage account, what are the different access tiers we can
choose?
20. I have to create an alert: if my application, which is hosted on my VM, is down, I
want my group to get notified. How will you configure this?

Answers:-

2. Did you create the pipeline from scratch or just maintain it?
Yes, I have created Azure DevOps pipelines from scratch using YAML. I defined the entire
CI/CD flow including build, test, publish artifacts, and deploy stages for multiple
environments (Dev, QA, UAT, Prod). I also worked on pipeline optimizations, used
templates, variables, and service connections, and managed approvals and gates.

3. What are the different agents in Azure DevOps?


Azure DevOps supports two types of agents:
 Azure-hosted agents: Preconfigured VMs provided by Microsoft with most tools pre-
installed (like Azure CLI, Terraform, etc.).
 Self-hosted agents: Custom agents you configure on your own machines or VMs with
complete control over tools and versions.

4. Self-hosted vs Azure-hosted agents


Feature               Azure-hosted Agent          Self-hosted Agent
Maintained by         Microsoft                   You
Cost                  Free with limited minutes   Your infra cost
Pre-installed tools   Yes                         No (you install what you need)
Custom software       Limited                     Full control
Ideal for             Simplicity & quick setup    Special environments, long-running jobs
Use self-hosted when:
 You need specific tool versions or libraries.
 You need to access on-prem resources.
 You want better performance for large projects.
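As a rough sketch, registering a Linux self-hosted agent looks like this (organization URL, pool name, and PAT are placeholders; the flags are those of the standard Azure Pipelines agent package):

# Download and extract the agent package from Azure DevOps, then:
cd myagent
./config.sh --unattended \
  --url https://dev.azure.com/<your-org> \
  --auth pat --token <personal-access-token> \
  --pool Default \
  --agent my-selfhosted-agent \
  --acceptTeeEula
./run.sh    # or install it as a service: sudo ./svc.sh install && sudo ./svc.sh start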

5. Optimizing common variables


Use a variable group in Azure DevOps Library and link it to all pipelines.
Steps:
1. Go to Pipelines > Library.
2. Create a Variable Group (e.g., SharedVariables).
3. Define variables like subscription_id, location, etc.
4. Link it in the YAML:

   variables:
     - group: SharedVariables

This avoids repetition and centralizes variable management.
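If you prefer scripting it, the same variable group can be created with the Azure DevOps CLI extension (organization, project, and values below are examples):

az extension add --name azure-devops        # one-time setup
az pipelines variable-group create \
  --organization https://dev.azure.com/<your-org> \
  --project <your-project> \
  --name SharedVariables \
  --variables subscription_id=<sub-id> location=eastus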

6. Deployment flow with approvals


You should use multi-stage YAML pipelines with environment approvals:
Example stages:
stages:
- stage: Dev
...
- stage: QA
...
- stage: UAT
...
- stage: Prod
...
Use Environments in Azure DevOps with pre-deployment approvals (assigned to your senior
architect) before proceeding to the next stage.

7. Difference between git pull and git fetch


 git fetch: Downloads latest changes from remote but doesn’t merge.
 git pull: Fetches and then merges changes into your current branch.
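A quick illustration of the difference (remote and branch names are the usual defaults):

git fetch origin              # download new commits; origin/main moves, your local branch does not
git log HEAD..origin/main     # review what arrived before integrating it
git merge origin/main         # merge explicitly when ready

git pull origin main          # does the fetch and the merge in a single step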

8. What is git stash?


Temporarily saves your uncommitted changes and reverts to a clean working directory. You
can reapply them later using git stash apply or pop.
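For example (branch names are hypothetical):

git stash                       # shelve uncommitted changes; the working tree becomes clean
git checkout hotfix/login-bug   # switch away to do the urgent work
# ...fix, commit, push...
git checkout feature/payments
git stash list                  # see the saved stashes
git stash pop                   # reapply the latest stash and drop it (use 'apply' to keep it)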

9. What is Terraform state file?


The state file (terraform.tfstate) records the resources Terraform manages and their current
attributes. Terraform compares it against the configuration and the real infrastructure to work
out what to change during plan and apply.
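You can inspect what the state currently tracks without opening the file by hand (the resource address below is an example):

terraform state list                                   # all resource addresses recorded in the state
terraform state show azurerm_resource_group.example   # attributes of one tracked resource
terraform plan                                         # compares configuration, state, and real infrastructure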

10. Best approach to store state file


Use a remote backend, for example:
 Azure Blob Storage, which provides locking and versioning.
 Configure it in a backend "azurerm" block.
Example:

terraform {
  backend "azurerm" {
    resource_group_name  = "tf-backend-rg"
    storage_account_name = "tfterraformstate"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
  }
}

State locking is handled automatically through a lease on the state blob.
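The backend storage itself must exist before terraform init; a minimal bootstrap with the Azure CLI (names match the example above) might look like this:

az group create --name tf-backend-rg --location eastus
az storage account create \
  --name tfterraformstate \
  --resource-group tf-backend-rg \
  --sku Standard_LRS \
  --kind StorageV2
az storage container create \
  --name tfstate \
  --account-name tfterraformstate \
  --auth-mode login
terraform init    # picks up the backend "azurerm" block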

11. Prevent Terraform destroy


You can use the following:
1. prevent_destroy lifecycle rule:

   resource "azurerm_resource_group" "example" {
     name     = "example-rg"
     location = "East US"

     lifecycle {
       prevent_destroy = true
     }
   }

2. RBAC: give the team read/write but not delete permissions in Azure, or restrict terraform
destroy with CI/CD pipeline policies.

12. Refer to existing resource group in main.tf


Use a data block:

data "azurerm_resource_group" "existing" {
  name = "existing-rg-name"
}

resource "azurerm_linux_virtual_machine" "example" {
  name                = "vm-example"
  resource_group_name = data.azurerm_resource_group.existing.name
  ...
}

13. Create 50 storage accounts


Use count:

resource "azurerm_storage_account" "storage" {
  count                    = 50
  name                     = "mystorage${count.index}"
  resource_group_name      = "rg-name"
  location                 = "East US"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

14. Variable precedence: location


Order of precedence (highest to lowest):
1. -var or -var-file on the CLI
2. *.auto.tfvars files (alphabetical order)
3. terraform.tfvars
4. TF_VAR_ environment variables
5. default in variables.tf
So in this case: if main.tf references var.location, the value in terraform.tfvars wins over the
default in variables.tf; a literal "East US" hard-coded in main.tf is not controlled by variables
at all and is used as written.
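You can verify the precedence quickly; a value passed on the command line beats every file-based source (the values and file name below are just examples):

terraform plan -var="location=West Europe"    # CLI -var wins over all tfvars files and defaults
terraform plan -var-file="override.tfvars"    # an explicitly passed var file also wins over terraform.tfvars
terraform plan                                # falls back to terraform.tfvars, then the variable default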

15. Terraform workspaces


Workspaces allow isolated state files within the same config:
 Useful for multi-env (dev, qa, prod)
 terraform workspace new dev
 terraform workspace select qa
Each workspace has its own .tfstate.
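A typical multi-environment flow with workspaces (the per-environment tfvars files are hypothetical):

terraform workspace new dev               # create and switch to 'dev'
terraform workspace new qa
terraform workspace select dev
terraform apply -var-file="dev.tfvars"
terraform workspace select qa
terraform apply -var-file="qa.tfvars"
terraform workspace list                  # '*' marks the active workspace
# With the local backend, each workspace's state is stored under terraform.tfstate.d/<workspace>/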

16. Upgrade VM to new series with minimal downtime


1. Stop (deallocate) the VM, or create a clone.
2. Update vm_size in Terraform:

   vm_size = "Standard_D2_v3"

3. Run terraform apply.
To reduce downtime:
 Use Availability Sets/Zones for redundancy.
 Schedule maintenance during low-traffic.
 Ensure app supports graceful shutdown.

✅ Step-by-Step Process to Upgrade Azure VM (Standard V2 → V Series)

🔹 Step 1: Check Compatibility


Before making changes:
1. Validate region support for V-series VMs (not all VM sizes are available in all regions):

   az vm list-sizes --location <region> --output table

2. Check the OS disk type and existing features (e.g., Availability Set, Accelerated
Networking), since not all of them are supported across VM families.

🔹 Step 2: Prepare the Application and OS


To avoid data loss or downtime:
 Back up the VM: take a snapshot of the OS disk, or create a managed image:

   az vm deallocate --name <vm-name> --resource-group <rg-name>
   az vm generalize --name <vm-name> --resource-group <rg-name>
   az image create --resource-group <rg-name> --name <image-name> --source <vm-name>

  (Note: generalizing makes the source VM unusable, so only take the image path if you plan to
  rebuild from the image; a disk snapshot is the safer backup when you intend to resize this
  same VM.)
 If the application allows, configure it to handle a restart or temporary disconnection
(stateless apps do best here).

🔹 Step 3: Stop (Deallocate) the VM


Resizing restarts the VM, and if the target size is not available on the current hardware
cluster the VM must first be deallocated:
az vm deallocate --name <vm-name> --resource-group <rg-name>
💡 Deallocation stops billing for the compute portion but retains storage.

🔹 Step 4: Update the VM Size


Change the VM size to a V-series one (e.g., Standard_D4_v5, Standard_E4_v5, etc.):
az vm resize \
--resource-group <rg-name> \
--name <vm-name> \
--size Standard_D4_v5
🔹 Step 5: Start the VM
Once the size is changed:
az vm start --resource-group <rg-name> --name <vm-name>

🔹 Step 6: Verify Application


Once the VM is running:
 SSH or RDP into the VM.
 Verify that your application services are running.
 Check logs for any issues from the restart.
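A couple of quick checks after the restart (the service name is an example):

az vm get-instance-view \
  --name <vm-name> --resource-group <rg-name> \
  --query "instanceView.statuses[?starts_with(code,'PowerState')].displayStatus" -o tsv
ssh azureuser@<vm-public-ip> 'systemctl status myapp.service'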

Downtime Considerations
 Downtime duration depends on the app startup time + time to deallocate, resize,
and restart the VM (usually a few minutes).
 To reduce downtime:
o Use Availability Sets or Zones and deploy an upgraded replica VM in parallel.
o Do a blue-green deployment (create a new VM and switch traffic using a
Load Balancer).
o If app is containerized, consider moving to VMSS (Virtual Machine Scale Set)
or AKS for dynamic scaling.

✅ 1. Upgrade with Availability Set / Zone + Parallel Replica


🔹 Why?
Run multiple instances in parallel to prevent downtime. One instance can be upgraded while
others serve traffic.
🔹 How to do it?
Step-by-step:
1. Place existing VM in an Availability Set (only possible during creation).
o If not already in one, you’ll need to recreate the VM from image/snapshot in
an availability set.
2. Create a new VM with upgraded size (V-series), in the same Availability Set or Zone.
az vm create \
--resource-group <rg> \
--name <new-vm> \
--image <image-name> \
--availability-set <avset-name> \
--size Standard_D4_v5 \
--admin-username azureuser \
--generate-ssh-keys
3. Use a Load Balancer to balance traffic across the original and upgraded VM.
# Add the new VM's NIC to the load balancer's backend pool
az network nic ip-config address-pool add \
  --resource-group <rg> \
  --nic-name <new-vm-nic> \
  --ip-config-name <ipconfig-name> \
  --lb-name <lb-name> \
  --address-pool <pool>
4. Test on new VM → remove the old VM once validated.

💡 What Happens to the Application?


 The application will stop running during the resize process.
 Once the VM restarts, applications configured as services (systemd on Linux or Windows
services) will restart automatically.
 If your app depends on runtime configs, validate startup scripts or use tools like
Azure Custom Script Extension.
 Use Azure Application Insights or Log Analytics to monitor downtime.
17. Highly available and scalable VM design
Use:
 Availability Zones or Availability Sets for HA.
 Load Balancer for traffic distribution.
 VM Scale Sets for auto-scaling.
 Managed Disks, NSGs, and Auto Healing.
Design:
LB --> VMSS (Zone-redundant) --> App
Use Azure Front Door or Traffic Manager for global scaling.
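A sketch of that design with the Azure CLI (resource names, image, and thresholds are examples; az vmss create also provisions a load balancer by default):

az vmss create \
  --resource-group web-rg \
  --name web-vmss \
  --image Ubuntu2204 \
  --instance-count 3 \
  --zones 1 2 3 \
  --upgrade-policy-mode Automatic \
  --admin-username azureuser \
  --generate-ssh-keys

# Autoscale: keep 3-10 instances, scale out when average CPU exceeds 70%
az monitor autoscale create \
  --resource-group web-rg \
  --resource web-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name web-autoscale --min-count 3 --max-count 10 --count 3
az monitor autoscale rule create \
  --resource-group web-rg --autoscale-name web-autoscale \
  --condition "Percentage CPU > 70 avg 5m" --scale out 2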

18. Site-to-Site vs Point-to-Site VPN


Feature          Site-to-Site                 Point-to-Site
Use case         Org network to Azure         Individual device to Azure
Authentication   IPsec/IKE                    Certificate / password
Persistent       Yes                          No (on-demand)
Typical users    Corporate networks           Developers or remote users

19. Storage account tiers


Azure supports:
 Hot: Frequently accessed data.
 Cool: Infrequent access (30+ days).
 Archive: Rarely accessed (180+ days), lowest cost but high retrieval time.
The access tier can also be set at the level of individual blobs.
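Setting the tier at account level and at blob level with the CLI (account, container, and blob names are placeholders):

# Default access tier for the account (Hot or Cool)
az storage account create \
  --name <storage-account> --resource-group <rg-name> \
  --kind StorageV2 --sku Standard_LRS --access-tier Cool

# Move an individual blob to Archive
az storage blob set-tier \
  --account-name <storage-account> \
  --container-name <container> \
  --name <blob-name> \
  --tier Archive \
  --auth-mode login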

20. Alert when app/VM is down


Steps:
1. Go to Azure Monitor > Alerts.
2. Create a new alert rule.
3. Scope: Choose VM or App Insights.
4. Condition: Use Heartbeat or Availability Test failure.
5. Action Group: Add Email/SMS/Webhook.
Enable App Insights for deeper app monitoring.
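The notification side can be scripted; the alert condition (Heartbeat query or availability test) created through the portal steps above then references this action group (names and e-mail are examples):

# Action group that notifies the on-call team by e-mail
az monitor action-group create \
  --resource-group <rg-name> \
  --name app-down-ag \
  --short-name appdown \
  --action email oncall oncall@example.com
# A log alert on the Heartbeat table can also be created with 'az monitor scheduled-query create'
# (requires the scheduled-query CLI extension) and wired to app-down-ag.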

Below is a multi-stage YAML pipeline in Azure DevOps with environment approvals for
deployment to Dev, QA, UAT, and Prod, where the Prod stage is gated by a manual approval
(e.g., by a senior architect).

✅ Full YAML Example (azure-pipelines.yml)


trigger:
  branches:
    include:
      - main

stages:
  - stage: Dev
    displayName: "Deploy to Dev"
    jobs:
      - job: DevJob
        steps:
          - script: echo "Deploying to Dev environment"
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'My-Service-Connection'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                echo "Deploying infrastructure to Dev"
                # terraform apply or az deployment here

  - stage: QA
    displayName: "Deploy to QA"
    dependsOn: Dev
    condition: succeeded()
    jobs:
      - job: QAJob
        steps:
          - script: echo "Deploying to QA environment"

  - stage: UAT
    displayName: "Deploy to UAT"
    dependsOn: QA
    condition: succeeded()
    jobs:
      - job: UATJob
        steps:
          - script: echo "Deploying to UAT environment"

  - stage: Prod
    displayName: "Deploy to Prod"
    dependsOn: UAT
    condition: succeeded()
    jobs:
      - deployment: ProdDeploy
        displayName: "Manual Approval and Deploy to Prod"
        environment: 'Prod'   # Linked environment with approval in Azure DevOps UI
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to Prod environment"

🔐 To Set Manual Approval for Prod:


1. Go to Azure DevOps > Pipelines > Environments.
2. Create or select the Prod environment.
3. Click on Approvals and Checks.
4. Add a Pre-deployment approval, and assign it to your senior architect or approver
group.

This pipeline will:


 Automatically deploy from Dev → QA → UAT.
 Wait for a manual approval before deploying to Prod.

The same pipeline, broken down into reusable templates for each stage:

✅ Project Structure
azure-pipelines.yml # Main pipeline
/.pipelines
├── dev-stage.yml
├── qa-stage.yml
├── uat-stage.yml
└── prod-stage.yml

📄 1. Main Pipeline (azure-pipelines.yml)


trigger:
  branches:
    include:
      - main

stages:
  - template: .pipelines/dev-stage.yml
  - template: .pipelines/qa-stage.yml
    parameters:
      dependsOnStage: Dev
  - template: .pipelines/uat-stage.yml
    parameters:
      dependsOnStage: QA
  - template: .pipelines/prod-stage.yml
    parameters:
      dependsOnStage: UAT

📄 2. Dev Stage Template (.pipelines/dev-stage.yml)


parameters:
  stageName: Dev

stages:
  - stage: ${{ parameters.stageName }}
    displayName: "Deploy to ${{ parameters.stageName }}"
    jobs:
      - job: ${{ parameters.stageName }}Job
        steps:
          - script: echo "Deploying to ${{ parameters.stageName }} environment"

📄 3. QA Stage Template (.pipelines/qa-stage.yml)


parameters:
  stageName: QA
  dependsOnStage: ""

stages:
  - stage: ${{ parameters.stageName }}
    displayName: "Deploy to ${{ parameters.stageName }}"
    dependsOn: ${{ parameters.dependsOnStage }}
    condition: succeeded()
    jobs:
      - job: ${{ parameters.stageName }}Job
        steps:
          - script: echo "Deploying to ${{ parameters.stageName }} environment"

📄 4. UAT Stage Template (.pipelines/uat-stage.yml)


parameters:
  stageName: UAT
  dependsOnStage: ""

stages:
  - stage: ${{ parameters.stageName }}
    displayName: "Deploy to ${{ parameters.stageName }}"
    dependsOn: ${{ parameters.dependsOnStage }}
    condition: succeeded()
    jobs:
      - job: ${{ parameters.stageName }}Job
        steps:
          - script: echo "Deploying to ${{ parameters.stageName }} environment"

📄 5. Prod Stage Template (.pipelines/prod-stage.yml)


parameters:
  stageName: Prod
  dependsOnStage: ""

stages:
  - stage: ${{ parameters.stageName }}
    displayName: "Deploy to ${{ parameters.stageName }}"
    dependsOn: ${{ parameters.dependsOnStage }}
    condition: succeeded()
    jobs:
      - deployment: DeployToProd
        displayName: "Manual Approval & Deploy to Prod"
        environment: 'Prod'   # Environment in Azure DevOps with approval set
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to ${{ parameters.stageName }} environment"

✅ Benefits of This Setup


 Clean structure – each stage has its own YAML.
 Reusability – if you add new environments or clone logic, it’s modular.
 Scalability – can inject parameters like deployment scripts, artifacts, service
connections.
