
Smit Vadvala

Summary
• Experienced Sr. DevOps/Cloud Engineer with 9 years of expertise in AWS and Azure Cloud, Linux
administration, and automation.
• Experienced in Linux Administration, Configuration Management, Continuous Integration (CI), Continuous
Deployment, Release Management and Cloud Implementations.
• Hands-on experience using configuration management tools such as CloudFormation, Terraform, and Ansible.
• Proficient in the AWS Cloud platform and its features, including EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, IAM, S3, and Route 53.
• Implemented Amazon EC2 by setting up instances, virtual private clouds (VPCs), and security groups.
• Set up databases in AWS using RDS and storage using S3 buckets, and configured instance backups to S3.
• Designed EC2 instance architecture to meet high-availability application and security requirements.
• Created AWS instances via Jenkins with EC2 plugin and integrated nodes in Chef via knife command line
utility.
• Worked with IAM service creating new IAM users & groups, defining roles and policies and Identity
providers.
• Involved in configuration automation and centralized management with Terraform and Ansible; implemented Ansible to manage all existing servers and automate the build/configuration of new servers.
• Managed containers using Docker by writing Dockerfiles, setting up automated builds on Docker Hub, and installing and configuring Kubernetes.
• Hands on experience in Azure Development, worked on Azure web application, App services, Azure
storage, Azure SQL Database, Virtual Machines, Fabric controller, Azure AD, Azure search and notification hub.
• Configured and maintained Jenkins for continuous integration (CI) and end-to-end automation of all builds and deployments.
• Created functions and assigned roles in AWS Lambda to run Python scripts, and built AWS Lambda functions in Java for event-driven processing.
• Good knowledge and experience in using Elasticsearch, Kibana, CloudWatch, Nagios, Splunk, Prometheus
and Grafana for logging and monitoring.
• Created and maintained Shell deployment scripts for the tc Server/Tomcat web application servers.
• Worked on OpenShift for container management and to enhance container platform multi-tenancy.
• Created and maintained Branches, labels, workspaces on GIT, Participated in merging of source code.
• Skilled at setting up baselines, branching, merging, and automation processes using Shell and Batch scripts, and at supporting developers in writing configuration specs.
• Installed, configured, modified, tested, and deployed applications on Apache web server, Tomcat, and JBoss application servers.
• Proficient in scripting languages such as Python and Bash, configuration management tools such as Terraform, Ansible, and CloudFormation, and web services such as AWS.
• Used Elasticsearch not only to power search but also, via the ELK stack with Beats, for end-to-end logging and monitoring of our systems.
• Strong ability to troubleshoot any issues generated while building, deploying and in production support.
• Facilitated data migration from Broadridge DMS (Data Management System) to Amazon DynamoDB.
• Experience in monitoring System/Application Logs of server using Wily, Splunk & Kibana to detect Prod
issues.
• Experience working with Docker, Kubernetes, Docker Swarm and Micro Services.
• Created Datadog Dashboards for various Datadog tools and alerted application teams based on the
escalation matrix.
• Scanned servers using Qualys and Nessus for security vulnerabilities.
• Experience in applying security patches and updating RedHat Linux OS version using RedHat Satellite
server.
• Prioritized and managed ticket queues using ServiceNow.
• Experienced with the understanding of the principles and best practices of Software Configuration
Management (SCM) processes, which include Compiling, Packaging, Deploying and Application configurations.
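The Lambda bullets above describe event-driven processing triggered from S3. A minimal sketch of such a handler in Python (the event fields follow the standard S3 notification shape; bucket/key names and the processing step are illustrative, not the actual project code):

```python
# Minimal sketch of an S3-triggered AWS Lambda handler.
# The event structure follows the documented S3 event notification format;
# the "processing" here is a placeholder assumption.

def lambda_handler(event, context):
    """Extract bucket/key pairs from an S3 event and return a summary."""
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            # Real processing (copy, transform, index, ...) would go here.
            processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "processed": processed}
```

Attaching this handler to an S3 bucket notification (or an SNS topic) is what makes it event-driven: Lambda invokes it once per delivered event batch.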

Technical Skills
Operating Systems
Linux (CentOS), Unix, Windows
Version Control Tools
Subversion (SVN), GIT, GitLab, GitHub, Code Commit, Bitbucket
Build Tools
Jenkins, Maven, Gradle, Ant
Languages
Python, C, Bash, Shell, JavaScript
Databases
MySQL, PostgreSQL, NoSQL, Redshift
Application Servers
Apache, Tomcat, WebLogic, WebSphere
Other Tools
Docker, Kubernetes, Ansible, Chef, Puppet, Elasticsearch, Logstash, Kibana, Grafana, Nagios, Splunk, EDIFECS, ServiceNow,
Qualys, SonarQube, Burp Suite
Cloud
AWS, EC2, VPC, EBS, SNS, RDS, ELB, IAM, DynamoDB, CloudWatch, CloudFormation, S3, Auto Scaling, CloudTrail, Lambda, Amazon Connect and Azure

Professional Experience:

Client: New Management Services LLC | July 2023 - Present

Role: Sr. DevOps Engineer (Remote)
Responsibilities:
• Participated in SCM tool evaluation, selection, and implementation. Proactively identified and implemented process and other operational improvements for build/release/deployment.
• Worked closely with multiple development and test teams to provide process design, management, and support for source code control, code compilation, change management, and production release management, driving continuous improvement through increased automation, continuous integration, and continuous-test principles.
• Streamlined and coordinated configuration/build/release/deployment/process/environment management across all the products in our applications.
• Used Docker environment variables, configuration files, and option types (strings and integers).
• Experience with Linux systems and virtualization in a large-scale environment, including Linux Containers (LXC) and Docker.
• Used ANT and Puppet/Chef scripts to build and deploy the application.
• Resolved update, merge, and password-authentication issues in GitLab and Jira.
• Created puppet manifests and modules to automate system operations.
• Extensive experience in Application Deployments and Environment configuration using Chef,
Puppet, Ansible.
• Automated SQL Scripts Deployment to Staging and production Databases.
• Experience in installation and implementation of AppDynamics on all Prod and Non-Prod.
• Responsible for build and deployment automation using VM Ware ESX, Docker containers and
Hudson.
• Experience in Writing Python modules for Ansible customizations.
• Executed CI Jenkins build jobs for both Android and iOS application builds, using Git (Stash) as the source code repository for all projects and Artifactory as the release repository for all builds (IPA/APK).
• Conducted regular deployments for all applications in QA and STAGING on the Android and iOS platforms.
• Experience with PaaS/IaaS development using AngularJS, Docker, and Ansible.
• Designed and implemented large-scale pub-sub message queues using Apache Kafka.
• Deployed Kafka Manager to gain better insight into our Kafka clusters.
• Implemented Kubernetes to deploy, load-balance, scale, and manage Docker containers with multiple namespaced versions.
• Managed Kubernetes charts using Helm, created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages.
• Prepared automated scripts for Queue manager setup.
• Updated and migrated Queue manager attributes from MQ V5.3 to V6.0
• Experienced in creating source code repositories using Bitbucket.
• Involved in DevOps processes for build and deploy systems.
• Worked with batch team to schedule and monitor batch jobs on weekly basis.
• Worked as Admin on JIRA tool. Customized the dashboard based on team's requirement.
• Added users, implemented Security and added new projects on JIRA.

Environment: Docker, Kubernetes, Helm charts, Kafka, Linux, RabbitMQ, Kinesis, SQS, Ansible, Git version control, Maven, Jenkins, GitLab, Unix/Linux, Shell scripting.
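For the Kafka pub-sub work above, the per-key ordering guarantee comes from key-based partitioning. A simplified stand-in sketch (Kafka's actual default partitioner hashes keys with murmur2; `crc32` here is only a stdlib substitute for illustration):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition, Kafka-style: the same key always
    lands on the same partition, preserving per-key message ordering.
    Kafka's default partitioner uses murmur2; crc32 is a stdlib stand-in."""
    return zlib.crc32(key) % num_partitions

# All messages for one entity route to one partition:
assert partition_for(b"user-42", 12) == partition_for(b"user-42", 12)
```

This is why choosing a good partition key (e.g. a user or order ID) matters when designing such queues: it balances load while keeping related messages ordered.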

Client: Comcast | March 2023 - July 2023

Role: Sr. DevOps Engineer (Remote)
• Handled DevOps responsibilities including creating CI/CD pipelines from scratch using Jenkins for the Comcast Peacock project, setting up a secured private AWS cloud, and integrating with Snowflake for student-enrollment-related data movement.
• Automation: Managed server machines with different roles and purposes through heavy use of Terraform as a deployment tool, plus automation with custom-made scripts.
• Network: Responsible for handling network issues such as DDoS attacks in coordination with multiple DDoS-mitigation partners.
• Designed and implemented CI/CD from scratch and applied security-as-code principles across the board.
• Wrote Jenkinsfiles from scratch in Groovy to automate the Jenkins CI/CD process: packaging the application, analyzing the code, scanning for security vulnerabilities, creating Docker images, and deploying WAR files to applications running on containers/VMs.
• Performed L2 & L3 level Full Life-cycle triage for all events on production servers including incident
logging, troubleshooting, management of production crisis events.
• Automated and streamlined operations and processes. Developed and implemented Software Release
Management strategies for various applications according to the agile process. Worked with different development teams and
multiple simultaneous software releases.
• Implemented a CI/CD pipeline with Docker, Jenkins (TFS plugin installed), Team Foundation Server (TFS), GitHub, and Azure Container Service: whenever a new TFS/GitHub branch is started, Jenkins, our Continuous Integration (CI) server, automatically attempts to build a new Docker container from it.
• Used Helm charts to perform deployments on EKS clusters.
• Ensured recovery of systems from infrastructure or service failures to mitigate disruptions.
• Performed on-premises and cloud infrastructure orchestration, automation, and operations.
• Created network architectures based on requests and wrote Jenkins pipeline files in Groovy to automate the CI/CD process: packaging the application, creating Docker images, and deploying them to Kubernetes to run the application.
• Used Jenkins and Build Forge for continuous integration and deployment into the Tomcat application server.
• Worked with the RedHat OpenShift container platform for Docker and Kubernetes; used Kubernetes to manage containerized applications via its nodes, ConfigMaps, node selectors, and services, and deployed application containers as Pods.
• Setup Datadog monitoring across different servers and AWS services.
• Created Datadog dashboards for various applications and monitored real-time and historical metrics.
• Created system alerts using various Datadog tools and alerted application teams based on the escalation
matrix.
• Wrote scripts to automate infrastructure creation end to end using tools such as Terraform and Ansible.
• Deployed Java/J2EE applications to Apache Tomcat, JBoss, and WebLogic application servers.
• Worked with GitHub to manage source code repositories and performed branching, merging, and tagging
depending on requirement.
• Managed Kubernetes charts using Helm and created reproducible builds of the Kubernetes applications,
managed Kubernetes manifest files and Managed releases of Helm packages.
• Administered Jenkins for continuous integration and deployment into Tomcat/WebSphere application servers.
• Performed continuous production deployments to AEM and Node.js services, automated deployment through release-management tools, optimized performance, and improved stability.
• Wrote templates for AWS infrastructure as code using Terraform to build staging and production environments.
• Maintained and coordinated environment configuration, controls, code integrity, and code conflict
resolution. Used Atlassian JIRA as tracking tool in this project.
• Used Azure Kubernetes Service (AKS) to deploy a managed Kubernetes cluster in Azure; created AKS clusters in the Azure portal and with the Azure CLI, and also used template-driven deployment options such as Resource Manager templates and Terraform.
• Developed Unix scripts to manually deploy code to different environments and e-mail the team when the build completed.
• Worked on the transition project that involved migration activities to Maven to standardize the build across all applications. Used JFrog to manage the Maven repository and to share snapshots and releases of internal projects.
Technologies Used: Azure, AWS, Terraform, OpenShift, Jenkins, Ansible, Docker, Kubernetes, GitHub, Jira, Linux, Python, Redshift, JavaScript, Node.js, Prometheus, Datadog, JFrog, Maven, Ant, Helm, Tomcat/WebSphere
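The Datadog bullets above mention alerting application teams based on an escalation matrix. A hypothetical sketch of how that routing logic might look (the matrix contents and contact names are invented for illustration):

```python
# Hypothetical escalation matrix: alert severity -> ordered contact chain.
ESCALATION_MATRIX = {
    "critical": ["on-call", "team-lead", "engineering-manager"],
    "warning": ["on-call"],
}

def route_alert(severity: str, attempt: int) -> str:
    """Return who to page for a given severity and retry attempt,
    escalating down the chain as unacknowledged attempts grow."""
    chain = ESCALATION_MATRIX.get(severity, ["on-call"])
    return chain[min(attempt, len(chain) - 1)]
```

In practice a monitoring tool like Datadog encodes this in its notification settings; the sketch just makes the escalation order explicit.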


Client: LexisNexis Risk Solutions - Boca Raton, FL | May 2019 - February 2023
Role: Sr. AWS DevOps Engineer (Remote)
• Experience in DevOps Engineer for project teams that involved different development teams and multiple
simultaneous software releases. Developed and implemented Software Release Management strategies for various
applications according to the agile process. Experience in Analysis, Design and Focused on automation, integration
monitoring and configuration management.
• Expertise in Amazon AWS Cloud administration, including services such as EC2, S3, Glacier, EBS, VPC, AMI, SNS, RDS, IAM, Route 53, Auto Scaling, CloudFront, CloudWatch, CloudTrail, CloudFormation, OpsWorks, and Security Groups.
• Hands-on experience using configuration management tools such as CloudFormation, Terraform, and Ansible.
• Wrote Terraform scripts to provision AWS resources: EC2, EFS, ECR, ECS, ELB, IAM roles, and S3.
• Configured the Kubernetes provider with Terraform, used to interact with resources supported by Kubernetes to create services such as ConfigMap, Namespace, Volume, and autoscaler.
• Creating the automated build and deployment process for application, re-engineering setup for better user
experience and leading up to building a Continuous Integration system.
• Experience implementing AWS Lambda to run code without managing servers, with runs triggered by S3 and SNS events.
• Architected a full-year model to migrate from the legacy Oracle system to AWS Cloud using RDS (PostgreSQL) as the database.
• Migrated MariaDB to AWS RDS for MariaDB using mysqldump.
• Implemented a distributed messaging queue to integrate with Cassandra using Kafka and Zookeeper.
• Configured monitoring and logging tools (Splunk, Wily, Kibana) using Python scripts, integrating them with internal servers to generate and automate reports to management.
• Troubleshot Docker-based applications; handled AWS cloud management and Puppet automation.
• Worked with Ansible playbooks for virtual and physical instance provisioning, configuration management,
patching and software deployment on AWS environments through automated tools, Ansible / custom pipeline.
• Automated deployments, scaling, and operations of application containers across clusters of hosts, provided
container-centric infrastructure by Kubernetes.
• Wrote Python scripts to automate database-migration processes, including massive data migration from SQL to PostgreSQL.
• Prepared a plan for user communication to switch from ADFS to Okta SSO.
• Experience developing APIs using Java Spring Boot, ECS, and ALB for a customized admin tool to manage Amazon Connect configuration.
• Utilized ServiceNow ticketing and knowledgebase systems in accordance with change management
process.
• Used Ansible for setup/teardown of the ELK stack (Elasticsearch, Logstash, Kibana).
• Worked with the network and database teams to troubleshoot errors and bugs on the production server.
• Designed and developed ETL jobs to extract data from Salesforce replica and load it in data mart in
Redshift.
• Developed JavaScript functions for client-side validations.
• Used Elasticsearch, Logstash, and Kibana (the ELK stack) for centralized logging and analytics in the continuous-delivery pipeline, storing logs and metrics in an S3 bucket using an AWS Lambda function.
• Setup Datadog monitoring across different servers and AWS services.
• Experience in Configuring and deploying the code through Web Application servers Apache Tomcat, JBoss,
WebLogic and WebSphere.
• Experience managing Kafka clusters both on Windows and Linux environment.
• Generated workflows through Apache Airflow, and earlier Apache Oozie, for scheduling jobs that control large data transformations.
• Integrated Microsoft Azure MFA with CyberArk, VPN, Oracle Access Manager, VDI, and other third-party tools.
• Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Kafka).
• Experience using Jenkins for continuous integration and SonarQube jobs for Java code quality.
• Established infrastructure and service monitoring using Prometheus and Grafana.
• Automated the release process by writing scripts in Python and Bash.
• Developed new RESTful API services in Node.js that act as middleware between our application and third-party APIs.
• Used the web application scanning tool Burp Suite to check for vulnerabilities and propose changes to the existing code.
• Worked on updating the SonarQube to latest version and moving the datasets from older to newer version.
• Improved Auto Quote application by designing and developing it using Eclipse, HTML, Servlets and
JavaScript
• Implemented IaaS, PaaS, and SaaS cloud services, including OpenStack, Docker, and OpenShift.
• Experience with application/data migration to AWS; good knowledge of Chef.
Technologies Used: AWS, Terraform, Lambda, Amazon Connect, Azure, Jenkins, Ansible, Docker, Kubernetes, Git, WebSphere, Solaris, Jira, Linux, WebLogic, Python, ServiceNow, ELK Stack, SonarQube, Redshift, JavaScript, Node.js, Burp Suite, Prometheus, Airflow, Kafka, Datadog, and Grafana.
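The database-migration bullets above amount to a batched copy with per-column type conversion. An illustrative outline only, not the project's actual scripts: `sqlite3` stands in for both engines, and the table, columns, and conversion are hypothetical (a real destination would use a PostgreSQL driver such as psycopg2):

```python
import sqlite3

def to_pg_bool(value):
    """Source stores booleans as 0/1 integers; PostgreSQL wants booleans."""
    return bool(value)

def migrate_users(src, dst, batch_size=500):
    """Copy rows from source to destination in batches, converting
    column types along the way so large tables never load fully
    into memory."""
    cur = src.execute("SELECT id, name, active FROM users")
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        dst.executemany(
            "INSERT INTO users (id, name, active) VALUES (?, ?, ?)",
            [(i, n, to_pg_bool(a)) for i, n, a in rows],
        )
    dst.commit()
```

Batching keeps memory bounded and lets a failed run resume from the last committed batch, which is the usual reason migration scripts are written this way.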

Client: Horizon Blue - Newark, NJ | May 2018 - April 2019
Role: Cloud Engineer
• Designed processes and provisioned infrastructure architecture for HD Vest applications, using AWS infrastructure as code for various environments: CloudFormation for all AWS resources, IAM roles, virtual private clouds, EC2 instances, and S3 buckets.
• Experience with build and release management according to enterprise guidelines; used tools such as Terraform and Ansible to manage system configuration.
• Utilized Cloud Formation, Terraform & Ansible by creating DevOps processes for consistent and reliable
deployment methodology.
• Led the requirements gathering for the Amazon Connect migration of the customer's contact center from Genesys PureEngage.
• Experience in using Gentran Application Integrator (AI)/ Netbeans/ MapBuilder (EDIFECS) for developing
inbound and outbound maps.
• Researched and developed security administration on both Gentran and Edifecs.
• Developed data dictionaries for both inbound and outbound transactions, such as 837, 834, 270, and 271, per the HIPAA standards in Edifecs.
• Installed and configured an automated tool Ansible that included the installation and configuration of the
Ansible playbooks, agent nodes and an admin control workstation.
• Architected, implemented, and maintained a completely scripted SaaS infrastructure with continuous integration for a Linux environment running in AWS.
• Worked with cloud providers and APIs for Amazon EC2, S3, VPC with Cloud Sigma (EU) and GFS
storage.
• Extensively worked with Scheduling, deploying, managing container replicas onto a node using Kubernetes
and experienced in creating Kubernetes clusters work with Helm charts running on the same cluster resources.
• Experienced in continuous integration technologies with Jenkins; designed and created multiple deployment strategies using Continuous Integration (CI) and Continuous Delivery (CD).
• Used JSP, JavaScript, jQuery, Ajax, CSS3 and HTML5 as data and presentation layer technology.
• Created and wrote Python, Shell, and Bash scripts for setting up baselines, branching, merging, and automation processes across environments, using SCM tools such as Git and Subversion on Linux and Windows platforms.
• Extensively worked on creating and deleting Dynamic views for developers as requested by user.
• Wrote various data normalization jobs for new data ingested into Redshift.
• Setup Docker Swarm and Kubernetes cluster for Docker Container Management.
• Experience building microservices and deploying them into Kubernetes cluster as well as Docker Swarm.
• Installed Pivotal Cloud Foundry (PCF) on instances to manage the containers created by PCF.
• Used the Cloud Foundry command-line interface to push apps with new or updated Docker images; Cloud Foundry (CF) used the Docker image to create containers for the app.
• Automated deployment from GitLab-ci to OpenShift.
• Monitored and troubleshot DEV, QA, and production environments.
• Collaborated closely with security architects in developing cloud-security frameworks for the enterprise.
• Developed, maintained, and reported on key cloud-security metrics, both as a program and on an individual basis.
• Architected and designed the data flow for the collapse of four legacy Data Warehouses into an AWS Data
Lake.
• Carried out deployments and builds in various environments using a Continuous Integration tool.
• Provided assistance for interaction with Backend and NoSQL databases.
Technologies Used: AWS, EDIFECS, Clear Case, JavaScript, ANT, Shell Scripts, XML, UNIX, GitLab, Redshift, Jenkins, Bash, Puppet, MySQL, NoSQL, Kubernetes, Helm Charts, Oracle, Postgres
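Several bullets above mention CloudFormation templates for VPCs, S3 buckets, and IAM roles. A minimal sketch of generating such a template programmatically (the logical resource names and properties are illustrative, not the project's actual stacks):

```python
import json

def make_template(bucket_name: str, vpc_cidr: str) -> str:
    """Build a minimal CloudFormation template as a JSON string.
    CloudFormation templates are declarative: each entry under
    "Resources" names a resource type plus its properties."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": vpc_cidr},
            },
        },
    }
    return json.dumps(template, indent=2)
```

The resulting JSON could be handed to `aws cloudformation create-stack` to provision both resources as one stack; generating templates in code keeps environment variants (dev/QA/prod) consistent.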

Client: Wells Fargo Home Mortgage - Des Moines, IA | July 2016 - April 2018
Role: DevOps Engineer
• Installed and configured operating systems such as Linux; maintained many cluster nodes using CRM. Provided AWS and virtualization expertise for field deployment for various customers. Created branches for various code-line deliveries and merged them with the main code base after every PRODUCTION release.
• Deployed and monitored scalable infrastructure on Amazon Web Services (AWS), with configuration management using Terraform and Ansible.
• Enabled continuous delivery through deployment into several environments (Test, QA, Stress, and Production) using Jenkins.
• Used Ansible playbooks to set up the continuous-delivery pipeline; deployed microservices, including provisioning AWS environments, using Ansible playbooks.
• Developed user interfaces using JSP, HTML, CSS, JavaScript, jQuery, and Ajax with JSON
• Responsible for design and maintenance of the Subversion/GIT Repositories and access control strategies.
• Worked on tracking tools to raise issues and update defect logs using Jira.
• Design and implement solutions for monitoring, scaling, performance improvement and configuration
management of systems running SaaS applications.
• Created scripts for system administration and AWS using languages such as Python and Bash.
• Created views and metadata, performed merges, and executed builds on a pool of dedicated build machines.
• Used JSON schema to define table and column mapping from S3 data to Redshift.
• Experience with container-based deployments using Docker, working with Docker images, Docker Hub and
Docker-registries and Kubernetes.
• Configured Auto Scaling in a customized VPC based on Elastic Load Balancer (ELB) traffic, using ELB health checks to trigger auto-scaling actions.
• Performed Node.js back-end development for microservices.
• Involved in maintenance and enhancement activities including Production release.
• Created CloudFormation template stacks in JSON to automate building new VPCs.
• Provided periodic feedback of status and scheduling issues to the management.
Technologies Used: AWS, Jenkins, Terraform, Git, Ansible, Subversion, Redshift, Solaris, Jira, JavaScript, Linux, WebSphere, WebLogic, Python, Node.js, Docker, Kubernetes, Shell scripts, CloudFormation, Auto Scaling, ELB.
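The bullet above about using a JSON schema to map S3 data to Redshift columns can be illustrated with a small resolver that applies a jsonpaths-style mapping to one record, in the spirit of Redshift's `COPY ... json 'jsonpaths'` support (the table, columns, and paths here are hypothetical):

```python
# Hypothetical column mapping from S3 JSON records to Redshift columns:
# column name -> jsonpath-style source field in each record.
COLUMN_MAP = {
    "loan_id": "$.loan.id",
    "amount": "$.loan.amount",
    "state": "$.borrower.state",
}

def extract_row(record: dict) -> tuple:
    """Resolve each mapped path against one JSON record, yielding a
    row tuple in Redshift column order (dict order is preserved)."""
    def resolve(path):
        node = record
        for part in path.lstrip("$.").split("."):
            node = node.get(part) if isinstance(node, dict) else None
        return node
    return tuple(resolve(p) for p in COLUMN_MAP.values())
```

In production the same mapping file would usually be given directly to Redshift's COPY command; the Python version is just the mapping made explicit for transformation or validation steps.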

Client: BNY Mellon - Lake Mary, FL | July 2014 - June 2016
Role: Build & Release Engineer
Responsibilities:
• Participated in the release cycle of the product, which involved environments such as Development, QA, UAT, and Production.
• Responsible for configuration, maintenance and troubleshooting of .com Project.
• Utilized Cloud Formation & Puppet by creating DevOps processes for consistent and reliable deployment
methodology.
• Configured Puppet to perform automated deployments; expert in user management and plugin management for Puppet.
• Installed and configured an automated tool Puppet that included the installation and configuration of the
Puppet master, agent nodes and an admin control workstation.
• Worked with cloud providers and APIs for Amazon EC2, S3, VPC.
• Modified ANT scripts to build the JARs, class files, WAR files, and EAR files from VOBs.
• Developed utilities for developers to check the checkouts, elements modified based on project and branch.
• Implemented a Continuous Delivery framework using Jenkins, Maven & Nexus in Linux environment.
• Experienced in continuous integration technologies with Jenkins; designed and created multiple deployment strategies using Continuous Integration (CI) and Continuous Delivery (CD).
• Created and wrote Shell, Bash, Python, and PowerShell scripts for setting up baselines, branching, merging, and automation processes across environments, using SCM tools such as Git, Subversion (SVN), Stash, and TFS on Linux and Windows platforms.
• Developed UNIX and Perl scripts to manually deploy code to different environments and e-mail the team when the build completed.
• Used Perl/Shell to automate the build and deployment Process.
• Deployed application modules to WAS based clusters via ND admin console.
• Extensively worked on creating and deleting Dynamic views for developers as requested by user.
• Created deployment request tickets in Bugzilla for deploying the code to production.
• Responsible for Building and releasing packages for testing.
• Created AWS launch configurations based on customized AMIs, used these launch configurations to configure Auto Scaling groups, and implemented AWS solutions using EC2, S3, RDS, DynamoDB, Route 53, EBS, Elastic Load Balancer, and Auto Scaling groups.
• Carried out deployments and builds in various environments using a Continuous Integration tool.
• Provided assistance for interaction with Backend and NoSQL databases.
• Evaluated system performance and validated NoSQL solutions.
• Configured MySQL server in Microsoft Azure and established connection between Server and MySQL
Client.
• Created an automated system to create VMs, storage accounts, network interfaces, etc. in Azure.
• Experienced in monitoring/managing Microsoft Azure Cloud and VMware infrastructure.
• Created puppet manifests and modules to automate system operations.
• Executed DB scripts (DML and DDL) with dependencies on the code on Oracle DB; documented the deployment process (migration doc) of code to production in an Excel sheet.
Technologies Used: Azure, Clear Case, ANT, Perl/Shell Scripts, Bugzilla, XML, UNIX, Oracle, WebSphere, Jenkins, Maven,
Bash, Puppet, AWS, MySQL, and NoSQL
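The deployment-script bullets above mention e-mailing the team when a build completes. A sketch of composing that notification with the Python standard library (addresses, wording, and build IDs are illustrative; actual sending would go through `smtplib.SMTP(...).send_message(msg)`):

```python
from email.message import EmailMessage

def build_notification(build_id: str, status: str, recipients: list) -> EmailMessage:
    """Compose a build-completion e-mail like the one the deployment
    scripts sent. Returns the message object; a caller would hand it
    to smtplib for delivery."""
    msg = EmailMessage()
    msg["Subject"] = f"Build {build_id}: {status}"
    msg["From"] = "build-bot@example.com"  # hypothetical sender address
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"Build {build_id} finished with status {status}.")
    return msg
```

Separating message construction from delivery keeps the script testable: the notification content can be verified without an SMTP server.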

Education
• Bachelor of Science in Business and Information Management, 2015
• Seminole State College of Florida | Orlando, FL
