
DONGIRI NAVEEN KUMAR

EMAIL ID
4 Years of Experience as an ETL + Azure Data Factory Developer
EXPERIENCE SUMMARY:

Having 4 years of experience in designing and implementing IT solution delivery and support for
diverse solutions and technical platforms, including 4 years of experience in SQL Server and SSIS
and 3 years of relevant experience as an Azure Data Engineer. I also have knowledge of PySpark.
Strong Azure BI development experience (Azure Blob Storage, Azure Data Lake, Azure SQL Server,
Azure Data Factory).
Very good experience implementing data pipelines using Azure Data Factory, working with
different sources and sinks, linked services, datasets, and data flows.
Have experience migrating SSIS packages from on-premises to Azure Data Factory.
Good understanding of storing and reading data in cloud storage such as Azure Blob Storage and
Data Lake (see the PySpark sketch after this list).
Worked on different Control Flow and Data Flow tasks and transformations.
Implemented checkpoints to manage package failures.
Scheduled the ETL packages at desired frequencies using SQL Server Agent jobs.
Working with various transformations such as Lookup, Conditional Split, Derived Column, and
Merge Join.
Working on tasks such as Data Flow, Execute SQL, Execute Package, and File System, and on
containers like For Loop and Foreach Loop in Integration Services development.
Hands-on experience in creating, deploying, troubleshooting, scheduling, and monitoring SSIS
Packages.
Good knowledge of SQL Server fundamentals and T-SQL objects.
Good understanding of SQL concepts such as DML and DDL operations, functions, stored procedures,
and joins.
Extensive experience in writing and debugging stored procedures, views, functions, and triggers.
Good knowledge of indexes, both clustered and non-clustered.
Connecting data sources, importing data, and transforming data for business intelligence.
Strong desire to learn new technologies and methodologies.
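
To make the Blob Storage / Data Lake experience above concrete, here is a minimal PySpark sketch of reading raw CSV data from a storage container, applying a simple transformation, and writing curated Parquet back to the lake. The storage account, container names, and the amount column are hypothetical placeholders, not details from this CV.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("blob-to-datalake").getOrCreate()

# Authenticate to the storage account (account key shown for brevity; a
# service principal or managed identity is preferable in practice).
spark.conf.set(
    "fs.azure.account.key.mystorageacct.dfs.core.windows.net",
    "<account-key>",
)

# Read raw CSV data from the landing container.
df = (spark.read.option("header", True)
      .csv("abfss://landing@mystorageacct.dfs.core.windows.net/sales/"))

# Example transformation: cast the amount column and drop invalid rows.
clean = (df.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull()))

# Write the curated output back to the data lake as Parquet.
clean.write.mode("overwrite").parquet(
    "abfss://curated@mystorageacct.dfs.core.windows.net/sales/")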

WORK EXPERIENCE:

Worked as an Azure Data Factory Developer at LONG TAIL WEB SERVICES PVT LTD.

EDUCATIONAL QUALIFICATION:

B.Tech (Electrical and Electronics Engineering), JYOTHIS MATHI INSTITUTE OF TECHNOLOGY AND SCIENCE JITS (JNTUH)

TECHNICAL ENVIRONMENT:

Languages      : Python, SQL, PySpark
Technologies   : Azure, Azure Functions, Azure Data Factory, Data Flows
Databases      : SQL Server, Oracle, Azure SQL
RAD / IDE      : Visual Studio 2015/2017/2019
Tools          : MS Office suite, Git, SSMS
Certifications : AZ-900 - Microsoft Azure Fundamentals
                 DP-900 - Microsoft Azure Data Fundamentals

PROJECT #1

PROJECT DETAILS:

Title : Liability Management


Client : CITI - UK
Language/Platform : ASP.NET, ADO.NET, C#.NET, Oracle, PySpark, Azure Databricks
Role : Azure Data Factory Developer
Description : Agency & Trust require an online portal to support their liability
management product offering. The DebtX Online portal will support the launch of offers and will also collect
acceptances/instructions and handle reporting to clients; Issuers, Dealers, Custodians, and Lawyers can access
documents, announcements, and news from the online portal in a controlled manner.
The DebtX Admin portal will help create offers. It allows the user to create online application forms to collect
user (custodians/investors) inputs. A Deal Design tool will be provided to set up deals dynamically. Using the
Deal Design tool, the DebtX admin team can design the deal brochure (information) page by page and
present it to the clients (custodians and investors).
There are layouts to design each row of each page, controls to display and collect input from the client, and a few
more tool icons that help finish the design activity efficiently. The deal designed on this screen will
reach the approvers, and once all approvers have completed their approval activity, it will be ready for launch.
After all procedures, the DebtX admin team will launch the deal, and the launch information will be communicated
to the respective custodians and direct investors.
The admin portal will also provide functionality for managing users, roles, and pages for both the
admin and online portals.

Roles and Responsibilities:

Developed pipelines in Azure Data Factory to fetch data from different sources and
load it into Azure Synapse Analytics.
Extensively involved in writing data transformations in PySpark and pushing the results into ADLS.
Worked on Azure DevOps to set up infrastructure and to build and deploy applications.
Implemented an incremental load strategy for daily loads (see the watermark sketch after this list).
Handled the releases for every deployment as well as unit and integration testing.
Responsible for creating linked services, datasets, and pipelines in Azure Data Factory.
Created stored procedures using T-SQL.
Responsible for creating Copy, Lookup, and Get Metadata activities.
Monitored the pipelines and reported and fixed issues.
Created data flows to transform data using Azure Data Factory.
Developed the complete framework of the project end to end and proactively met customer deadlines.
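
As referenced above, here is a minimal sketch of one common way to implement a watermark-based incremental daily load in PySpark. The paths and the last_modified column are hypothetical placeholders; the CV does not specify the exact mechanism used.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

WM_PATH = "abfss://meta@mystorageacct.dfs.core.windows.net/watermarks/orders/"
SRC_PATH = "abfss://raw@mystorageacct.dfs.core.windows.net/orders/"
TGT_PATH = "abfss://curated@mystorageacct.dfs.core.windows.net/orders/"

# Read the last stored watermark (a one-row table with a single timestamp column).
last_wm = spark.read.parquet(WM_PATH).collect()[0]["last_modified"]

# Keep only source rows modified after the previous run's watermark.
delta = spark.read.parquet(SRC_PATH).filter(F.col("last_modified") > F.lit(last_wm))

if delta.head(1):  # skip the write when there is nothing new
    # Append the changed rows to the curated zone.
    delta.write.mode("append").parquet(TGT_PATH)

    # Advance the watermark to the newest timestamp just processed.
    (delta.agg(F.max("last_modified").alias("last_modified"))
          .write.mode("overwrite").parquet(WM_PATH))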

PROJECT #2

Title : Data Lake Data Engineering


Client : Communication and Media
Language/Platform : Azure Data Factory, Azure Databricks, Azure SQL Database, PySpark
Role : Azure Data Factory Developer

Description:
The Data Lake Technology Platform is a modern technology foundation delivered in a secure hosted ecosystem that
integrates client data, industry-specific data feeds, and the power of Media's unique capabilities in data analytics
and advanced AI to deliver enhanced opportunities throughout the customer lifecycle.

Roles and Responsibilities:

Involved in ETL workflows using Azure ADF and Databricks with PySpark as per business requirements,
which included extracting data from relational databases and loading it into Azure SQL DB.
Extensively involved in writing data transformations in PySpark and pushing the results into ADLS Gen2.
Stored the resulting data in Azure SQL DB for Power BI and Spotfire consumption (see the JDBC sketch after this list).
Experienced in data migration from on-premises to Azure Cloud with Databricks using the Spark API.
Attended daily scrum calls and updated the ADO user stories daily.
Monitored the pipeline jobs and reported/fixed any issues.
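
As referenced above, here is a minimal sketch of writing curated data from ADLS Gen2 into Azure SQL DB with the Spark JDBC API; the server, database, table, and credentials are invented placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-to-azure-sql").getOrCreate()

# Read the curated dataset from ADLS Gen2.
df = spark.read.parquet("abfss://curated@mystorageacct.dfs.core.windows.net/sales/")

jdbc_url = ("jdbc:sqlserver://myserver.database.windows.net:1433;"
            "database=mydb;encrypt=true")

# Overwrite the reporting table read by Power BI and Spotfire.
(df.write.format("jdbc")
   .option("url", jdbc_url)
   .option("dbtable", "dbo.sales_report")
   .option("user", "sql_user")        # placeholder credentials; in practice
   .option("password", "<password>")  # use a secret scope or Key Vault
   .mode("overwrite")
   .save())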

(Dongiri Naveen)
