Name: Abdul Naser
Email:
Sr. AWS Cloud Engineer
Professional Summary:
• Overall, 8 years of experience in Information Technology as an AWS Cloud DevOps and Systems Engineer across Linux environments such as Red Hat Linux, CentOS, and Ubuntu, as well as Windows environments and production support.
• Experience in setting up the Chef Workstation, Chef repo as well as Chef nodes.
• Installation and upgrade of packages and patches, configuration management, version control, service packs, and review of connectivity issues related to security problems.
• Rebuilt and modernized a high-load classified-advert platform on Amazon Web Services (AWS).
• Automated the periodic rehydration of EC2 instances using Lambda and CloudWatch.
• Experience with several AWS services: EC2, EBS, SNS, SQS, Route 53, ELB, Lambda, S3, CloudWatch, Auto Scaling configurations, etc.
• Hands-on experience on AWS storage services S3, S3 Glacier, AWS Storage Gateway,
CloudFront (CDN).
• Hands-on knowledge of ActiveMQ for connecting multiple clients and servers so they can communicate.
• Installed, configured, and implemented RAID technologies using various tools like VxVM and Solaris Volume Manager.
• Experience in Server monitoring, capacity planning, application monitoring with the help of
Nagios, Puppet, Splunk.
• Experience in setting up the entire Chef infrastructure from scratch.
• Management of library versions and deprecated code; design and sequencing of automated builds and test runs; troubleshooting expertise for build failures due to dependencies, tests, etc.
• Knowledge of Red Hat Satellite Server with custom repositories to provide a stable management solution for the Linux environment, and hands-on knowledge of WildFly.
• Private Cloud Environment - Leveraging AWS and Puppet to rapidly provision internal computer
systems for various clients.
• Developed Puppet modules and roles/profiles for installation and configuration of the software required for various applications/blueprints.
• Experience in Designing, Architecting, and implementing scalable cloud-based web applications
using AWS and GCP.
• Wrote Python scripts to manage AWS resources through API calls using the Boto SDK and worked with the AWS CLI (see the illustrative sketch after this summary).
• Created clusters in Google Cloud and managed them using Kubernetes (k8s). Used Jenkins to deploy code to Google Cloud, create new namespaces, build Docker images, and push them to Google Cloud's container registry.
• Experience managing a workflow orchestration service, Google Cloud Composer, to create workflows that span clouds and on-premises data centers.
• Wrote Ansible playbooks to launch AWS instances and used Ansible to manage web applications, configuration files, mount points, and packages.
• Set up the Puppet master and clients and wrote scripts to deploy applications to Dev, QA, and production environments.
• Experience analyzing data on the order of billions of rows using SQL-like syntax in Google BigQuery.
• Maintained high-availability cluster and standalone server environments and refined automation components with scripting and configuration management (Ansible).
• Hands-on experience in Microsoft Azure Cloud Services (PaaS & IaaS), Storage, Web Apps,
Active Directory, Application Insights, Internet of Things (IoT), Azure Search, Key Vault,
Visual Studio Online (VSO) and SQL Azure.
• Strong experience in automating Vulnerability Management patching and CI/CD using Chef and
other tools like GitLab, Jenkins, and AWS/OpenStack.
• In-depth knowledge of AWS cloud services such as Compute, Network, Storage, and Identity & Access Management.
• Hands-on Experience in configuration of Network architecture on AWS with VPC, Subnets,
Internet gateway, NAT, Route table.
• Responsible for ensuring Systems & Network Security, maintaining performance, and setting up
monitoring using CloudWatch and Nagios.
• Experience working with version control tools like GitHub (Git) and Subversion (SVN), and software build tools like Apache Maven and Apache Ant.
• Extensively worked on CI/CD pipeline for code deployment by engaging different tools (Git,
Jenkins, CodePipeline) in the process right from developer code check-in to Production
deployment.
• Expertise in requirement gathering, analysis, solution design, development, implementation,
setup, testing, customization, maintenance, support, and data migration.
• Good communication with presentation and technical writing skills.
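Below is a minimal, illustrative sketch of the Boto-based AWS resource management referenced above; it is not taken from any specific project, and the tag key/value and credentials setup are assumptions.

```python
# Minimal sketch of Boto3-based EC2 management (illustrative only).
# Assumes AWS credentials/region are configured; the tag key/value are hypothetical.
import boto3

ec2 = boto3.client("ec2")

def stop_tagged_instances(tag_key="Environment", tag_value="dev"):
    """Find running instances that carry the given tag and stop them."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_instances())
```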
TECHNICAL SKILLS:
Host OS Windows Server 2012/2016 R2, Linux (Ubuntu, Red Hat, CentOS, Debian & Alpine Linux), WildFly.
Configuration Management Tools Ansible, Chef, Puppet, Vagrant
Version Control System Git, GitHub, SVN, Bitbucket, Perforce
Build Tools and IDE Maven, Ant, Eclipse, Gradle, Jenkins, HillPro, Hudson, Bamboo.
CI/CD tools Jenkins/Hudson, TFS, Chef, Puppet, Ansible, Terraform.
AWS IAM, EC2, S3, VPC, Route 53, RDS, EMR, CloudTrail, Lambda, CloudWatch,
CloudFormation, Elastic Beanstalk, OpsWorks, AWS Config, Cognito, CloudFront
Languages Shell Scripting, Python, Bash, Java
Databases SQL Server, MySQL, MongoDB, MS Access 2000, MS SQL 2000, and Oracle 9i (TOAD)
Containers Docker, Kubernetes, Habitat and OpenShift
Cloud Technologies Amazon Web Services, Microsoft Azure, Google Cloud Platform
Bug Tracking Tools ServiceNow, JIRA, SDM-12, BugZilla, HP Quality Center and Rational
ClearQuest
Monitoring Tools Splunk, Datadog, Nagios.
Others Helm, ARM templates and CloudFormation
PROFESSIONAL EXPERIENCE
Client: CNSI, VA September 2021 – Present
Role: Sr. Cloud Engineer
Responsibilities:
• Created scripts in Python (Boto) which integrated with Amazon API to control instance
operations.
• Designed, built, and coordinated an automated build and release CI/CD process using GitLab, Jenkins, and Puppet on hybrid IT infrastructure.
• Involved in designing and developing Amazon EC2, Amazon S3, Amazon RDS,
Amazon Elastic Load Balancing, Amazon SWF, Amazon SQS, and other services of
the AWS infrastructure.
• Running build jobs and integration tests on Jenkins Master/Slave configuration.
• Managed Servers on the Amazon Web Services (AWS) platform instances using Puppet
configuration management.
• Involved in maintaining the reliability, availability, and performance of Amazon Elastic Compute Cloud (Amazon EC2) instances.
• Conducted systems design, feasibility, and cost studies and recommended cost-effective cloud solutions such as Amazon Web Services (AWS).
• Responsible for monitoring AWS resources using CloudWatch and application resources using Nagios (see the sketch after this list).
• Created AWS Multi-Factor Authentication (MFA) for instance RDP/SSH logon, worked with
teams to lockdown security groups.
• Experience in implementing, administering, and monitoring tools such as Splunk, Nagios, Console, Netcool, and Graphite; building and deploying Java and SOA applications; and troubleshooting builds and deployments.
• Installed and Configured - NFS, NIS, DNS, Mail Server, Apache Web Server on Linux, and
Solaris.
• Administered and implemented CI tools Hudson/Jenkins, Puppet, Chef, Octopus Deploy, and AnthillPro for automated builds.
• Expertise in querying RDBMSs such as Oracle (PL/SQL) and MySQL, using SQL for data integrity.
• Ability to develop and execute XML, shell scripts, and Perl scripts.
• Expertise in TCP/IP, AD, DNS, DHCP, WAN, LAN, SMTP.
• Worked on F5 load balancing, Local Traffic Managers (LTM), and Global Traffic Managers (GTM) of the 6400, 6800, 3400, 5100, and 3600 series, and on 3-DNS migration to GTM.
• Experience working in data centers managing servers and SAN/NAS devices from HP, Cisco, Brocade, EMC, and HDS.
• Coordinated with the Offshore and Onshore teams for Production Releases.
• Experience in all stages of the software development cycle; thorough with software methodologies like Waterfall, UP, Scrum, and Agile.
• Played a key role in automating the deployments on AWS using GitHub, Terraform, Puppet,
Chef and Jenkins.
• Implemented AWS CodePipeline and created CloudFormation JSON templates and Terraform configurations for infrastructure as code.
• Created Kafka and RabbitMQ clusters for POC in an OpenShift Environment for a messaging
solution.
• Automated application and MySQL container deployment in Docker using Python and monitored them using Splunk.
• Built and deployed CI/CD pipelines.
• Orchestrated CI/CD processes by responding to Git triggers, human input, dependency chains, and environment setup. Technologies included Node.js, Python, Linux, VMware, GitLab, GitHub, JIRA, REST/HTTP, Docker, Jenkins, Quay, OpenShift, Kubernetes, Microsoft SQL Server, and Oracle PL/SQL.
• Used CI/CD tools Jenkins, Git/GitLab, Jira, and a Docker registry/daemon for configuration management and automation using Ansible.
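A minimal sketch of the CloudWatch monitoring noted in the bullet above; the instance ID, alarm name, threshold, and SNS topic ARN are hypothetical placeholders.

```python
# Minimal sketch: create a CloudWatch CPU alarm on an EC2 instance (illustrative only).
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",                # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical ID
    Statistic="Average",
    Period=300,                                      # 5-minute datapoints
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],      # hypothetical topic
)
```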
Environment: AWS (EC2, VPC, ELB, S3, EBS, RDS, Route 53, CloudWatch, CloudFormation, Auto Scaling, Lambda, Elastic Beanstalk), OpenShift, JIRA, Confluence, Kafka, VMware, RHEL, GitLab, Docker, SUSE 11, Windows, ILM, APS, Perl, Python, SAP, Hadoop, VMware ESX, VMware vSphere, SAN, Bash, Shell.
Client: Flexera, IL November 2019 – Aug 2021
Role: DevOps Engineer
Responsibilities:
• Created multiple VPCs with public and private subnets as per requirements and distributed them as groups across the various Availability Zones of each VPC (see the sketch after this list).
• Created and launched EC2 instances using AMIs of Linux, Ubuntu, RHEL, and Windows and wrote shell scripts to bootstrap the instances.
• Built S3 buckets and managed policies for S3 buckets and used S3 bucket and Glacier
for storage and backup on AWS.
• Good experience in architecting and configuring secure cloud VPCs using private and public networks through subnets in AWS/GCP.
• Migrated 9 microservices from Skava to Google Cloud Platform, with one more big release of 4 additional microservices planned.
• Worked on the migration of a mobile application from Skava to the cloud (Google Cloud) by breaking the code into microservices.
• Involved in the CI/CD process using Git, Nexus, Jenkins job creation, and Maven builds; created Docker images and used them to deploy to gcloud clusters.
• Extensive knowledge and hands-on experience implementing PaaS, IaaS, and SaaS delivery models inside the enterprise (data center) and in public clouds using AWS, Google Cloud, Kubernetes, etc.
• Enabled the Amazon IAM service to grant permissions and resources to users; managed users' roles and permissions with the help of AWS IAM.
• Designing and implementing fully automated server build management, monitoring and
deployment by using Technologies like Puppet.
• Deployed Puppet, Puppet dashboard for configuration management to existing infrastructure.
• Initiating alarms in CloudWatch service for monitoring the server's performance, CPU
Utilization, disk usage etc. to take recommended actions for better performance.
• Configured AWS Multi Factor Authentication in IAM to implement 2 step authentication of user's
access using Google Authenticator and AWS Virtual MFA.
• Included security groups, network ACLs, Internet Gateways, and Elastic IP's to ensure a safe
area for organization in AWS public cloud.
• Utilized Amazon Route 53 to manage DNS zones and assign public DNS names to Elastic Load Balancer IPs.
• Use Splunk to monitor and visualize the Kubernetes cluster CPU and memory usage.
• Used IAM for creating roles, users, groups and implemented MFA to provide additional security
to AWS account and its resources.
• Responsible for application Build & Release process which includes Code Compilation,
Packaging, Security Scanning and code quality scanning, Deployment Methodology and
Application Configurations.
• Defining Release Process & Policy for projects early in SDLC and responsible for source code
build, analysis and deploy configuration.
• Extensively worked on Jenkins for continuous integration and for End-to-End automation for all
build and deployments. Implement CI-CD tools Upgrade, Plugin Management, Backup,
Restore, LDAP and SSL setup.
• Experienced in creating RDS instances to serve data through servers for responding to
requests.
• Created snapshots to take backups of the volumes and images to store launch configurations
of the EC2 instances.
• Trained staff on effective use of Jenkins, Docker and GitLab.
• Worked on Amazon AWS, configuring, launching Linux server instances for Splunk
deployment.
• Develop Confluence pages and manage JIRA tickets for ongoing projects.
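A minimal sketch of the VPC/subnet layout described in the first bullet of this list; the CIDR blocks, region, and Availability Zones are hypothetical.

```python
# Minimal sketch: a VPC with one public and one private subnet in different AZs (illustrative only).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)["Subnet"]["SubnetId"]

# An internet gateway plus a default route makes the first subnet public.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)
```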
Environment: AWS (EC2, VPC, ELB, S3, EBS, RDS, Route 53, CloudWatch, CloudFormation, Auto Scaling, Lambda, Elastic Beanstalk), OpenShift, JIRA, Confluence, Kafka, VMware, RHEL, GitLab, Docker, SUSE 11, Windows, ILM, APS, Perl, Python, SAP, Hadoop, VMware ESX, VMware vSphere, SAN, Bash, Shell.
Client: Barclays, NJ. July 2017 – Oct 2019
Role: Cloud DevOps Engineer
Responsibilities:
• Experience with Chef Enterprise hosted as well as on-premises; installed the workstation, bootstrapped nodes, wrote recipes and cookbooks and uploaded them to the Chef server, and managed on-site applications/services/packages using Chef as well as AWS EC2/S3/Route 53 and ELB with Chef cookbooks.
• Extensive experience in developing and maintaining build, deployment scripts for test, Staging
and Production environments using ANT, Maven, Shell, and Perl Scripts.
• Maintained high Availability clustered and standalone server environments and refined
automation component with scripting and configuration management (Ansible).
• Hands-on experience in Microsoft Azure Cloud Services (PaaS & IaaS), Storage, Web Apps,
Active Directory, Application Insights, Internet of Things (IoT), Azure Search, Key Vault, Visual
Studio Online (VSO) and SQL Azure.
• Strong experience in automating Vulnerability Management patching and CI/CD using Chef and other tools like GitLab, Jenkins, and AWS/OpenStack.
• Hands on experience with Chef, Ansible, RunDeck, AWS, Ruby, Vagrant, Pivotal Tracker,
Bash and middleware tools.
• Experienced in the AWS Cloud platform with features including EC2, VPC, ELB, Auto Scaling, Security Groups, IAM, EBS, AMI, RDS, S3, SNS, SQS, CloudWatch, and CloudFormation.
• Building & configuring Red Hat Linux systems over the network, implementing automated tasks
through crontab, resolving tickets according to the priority basis.
• Worked with customers to help design systems, architectures, and automation solutions leveraging IaaS, PaaS, and XaaS partners and software partners.
• Acted as third level support for Exchange and Active Directory end-user issues.
• Create role-based access and SAML based SSO authentication for Splunk.
• Creation of SAN File System on Red Hat Linux using multipath configuration.
• Implemented automation of DevOps tools (Chef and Puppet).
• Created projects, VPCs, subnetworks, and GKE clusters for the QA3, QA9, and prod environments using Terraform.
• Worked on a Jenkinsfile with multiple stages: checking out a branch, building the application, testing, pushing the image into GCR, deploying to QA3, deploying to QA9, acceptance testing, and finally deploying to prod.
• Perform troubleshooting and monitoring of the Linux server on AWS
using Zabbix, Nagios and Splunk.
• Management and administration of AWS services: CLI, EC2, VPC, S3, ELB, Glacier, Route 53, CloudTrail, IAM, and Trusted Advisor.
• Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services like CloudFormation, CodeBuild, CodeDeploy, and S3, along with Puppet (see the sketch after this list).
• Worked on JIRA for defect/issue logging and tracking and documented all my work using Confluence.
• Integrated services like GitHub, AWS Code Pipeline, Jenkins, and AWS Elastic Beanstalk to
create a deployment pipeline.
• Good experience in architecting and configuring secure cloud VPCs using private and public networks through subnets in AWS/GCP.
• Used source control tools such as SVN and Git to save and manage the software code base and revisions in a repository.
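A minimal sketch of the ECS deployment step referenced in the CodePipeline bullet above; the cluster and service names are hypothetical, and a real pipeline would pass them in from CodeBuild/CodeDeploy.

```python
# Minimal sketch: roll out the latest Docker image on an ECS service (illustrative only).
import boto3

ecs = boto3.client("ecs")

response = ecs.update_service(
    cluster="app-cluster",        # hypothetical cluster name
    service="web-service",        # hypothetical service name
    forceNewDeployment=True,      # re-pull the image for the current task definition
)
print(response["service"]["deployments"][0]["status"])
```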
Environment: Red Hat Linux (RHEL 4/5), Splunk, WebSphere, Logical Volume Manager, Global File
System, Red Hat Cluster Servers, OpenShift, Oracle, MySQL, Chef, Puppet, SVN, GIT, AWS, DNS, NIS,
NFS, Apache, Tomcat.
Client: Citizens Bank, FL March 2016 - June 2017
Role: AWS DevOps Engineer
Responsibilities:
• Perform troubleshooting and monitoring of the Linux server on AWS
using Zabbix, Nagios and Splunk.
• Management and administration of AWS services: CLI, EC2, VPC, S3, ELB, Glacier, Route 53, CloudTrail, IAM, and Trusted Advisor.
• Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services like CloudFormation, CodeBuild, CodeDeploy, and S3, along with Puppet.
• Worked on JIRA for defect/issue logging and tracking and documented all my work using Confluence.
• Expertise in creating VM Templates, cloning and managing Snapshots.
• Remote monitoring and management of server hardware.
• Package management using RPM, YUM and UP2DATE in Red Hat Linux.
• Troubleshot file transfer issues across a range of platforms (Windows/Linux/MVS) and protocols (FTP, SFTP, Connect:Direct, HTTPS, CFT). Completed approximately 120 ServiceNow tickets per month, translating to a high level of customer satisfaction.
• Designing and implementing fully automated server build management, monitoring and
deployment by using Technologies like Puppet.
• Deployed Puppet, Puppet dashboard for configuration management to existing infrastructure.
• Initiating alarms in CloudWatch service for monitoring the server's performance, CPU Utilization,
disk usage etc. to take recommended actions for better performance.
• Configured AWS Multi Factor Authentication in IAM to implement 2 step authentication of user's
access using Google Authenticator and AWS Virtual MFA.
• Included security groups, network ACLs, Internet Gateways, and Elastic IP's to ensure a
safe area for organization in AWS public cloud.
• Wrote UNIX shell scripts to automate jobs and scheduled cron jobs for job automation using crontab.
• Wrote Ansible playbooks with Python SSH as the wrapper to manage configurations of AWS nodes and tested the playbooks on AWS instances using Python. Experience with Puppet to manage enterprise Puppet deployments more easily.
• Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web application and database templates (see the sketch after this list).
• Created scripts in Python which integrated with Amazon API to control instance operations.
• Coordinate/assist developers with establishing and applying appropriate branching, labeling
/naming conventions using GIT source control.
• Communicated daily with external vendors and the internal business, delivering faster resolution of file transfer issues. Ability to write shell and Perl scripts.
• Worked on Grub, PXE boot, Kickstart, Packages, YUM, RPMs, LVM, Boot from SAN, file
system configuration.
• Maintained and administered Active Directory Servers, including daily monitoring, troubleshooting
and performance analysis.
• Troubleshooting and configuration of Local and Network based printers.
• Worked with various vendors for setting up the testing-lab.
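A minimal sketch of launching one of the CloudFormation templates mentioned above via Python; the stack name and CIDR block are hypothetical, and a real template would add subnets, NAT, and outputs.

```python
# Minimal sketch: create a CloudFormation stack for a custom-sized VPC (illustrative only).
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.10.0.0/16"},  # hypothetical CIDR
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="example-vpc-stack", TemplateBody=json.dumps(template))
cfn.get_waiter("stack_create_complete").wait(StackName="example-vpc-stack")
```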
Environment: RHEL 6.x/5.x, SUSE 11, VMware, Windows, Perl, Python, WAS 5/6/7/8, SAP, Logical Volume Manager, Hadoop, TIBCO Spotfire, VMware ESX, VMware vSphere, ILM, SAN, C/C++, Bash, Shell.
Client: CloudEnd Platform Ltd., India April 2015 - Jan 2016
Role: Systems/Network Engineer
Responsibilities:
• Completed Hardware and Software builds for new Server deployments, ensuring adherence to
documented build procedures.
• Installed and maintained RJ45, CAT6 and/or optical fiber cables across the Data Center and
various MDFs around the lot.
• Installed and maintained AC/DC power PDUs (Sentry - Server Tech) with temperature & humidity sensors and Master/Slave connections, configuring all the PDUs in the Sentry Management application.
• Assisted in the operation, and maintenance of all electrical, mechanical, and HVAC equipment
within the Data Center/Facility.
• Facilitated the growth of the IT Infrastructure Service support and participated in the development
of its products and services.
• Audited changes for IP's, Gateway, ports, etc.
• Installed and administered UNIX, RedHat Linux and Windows NT servers.
• Installed and configured Linux with Apache, Oracle 8i, and PHP (LAMP project).
Environment: Ant, Java, Maven, Subversion, Hudson, Linux, Solaris, WebSphere, shell scripting, Nexus.