ATEEJA MOHAMMED
Senior Azure Data Engineering Professional
Experienced Azure Data Engineer specialising in ETL/ELT processes and Azure Data Factory (ADF) integration
with SaaS applications. Proficient in extracting data from structured databases using JSON, PySpark, and Databricks,
with expertise in Azure Synapse Analytics. Skilled in designing and executing intricate data pipelines,
managing diverse datasets, and enhancing Azure-based data systems for optimal performance.
athiya.ateeja@gmail.com | www.linkedin.com/in/athiya-a5688b142 | +91 8106334709
PROFILE SUMMARY
Technically inclined Azure Data Engineering professional with over 5 years of experience in building, analysing, enhancing, and supporting best-in-class Azure data engineering solutions. Altruistic, with a strong acumen for understanding business requirements, defining project scope, eliciting and analysing requirements, documenting them, communicating final requirements to stakeholders, and process modelling.

CORE COMPETENCIES
Strategic Planning
Data Engineering
Azure Services
Azure Databricks
Azure Data Factory
Azure Synapse
SQL, Python, PySpark & Core Java
Data Warehousing
Data Cleansing, Mapping & Transformation
Team Coordination

MAJOR PROJECTS
MDIP (Cloud Migration Project) (10/2023 – Current)
Anheuser-Busch (06/2023 – 09/2023)
MundiPh-Commercial Marketing A (08/2022 – 05/2023)
UDL - Unilever Data Lake (08/2021 – 07/2022)
DryFruit POS (07/2020 – 03/2021)

EDUCATION & CREDENTIALS
MCA from Vaageswari Eng College (JNTUH) in 2016 with 79%.
BSC (MPCS) from Govt Degree & Women’s College in 2011 with 69%.
MPC from Shivani Girls & Jr. College in 2008 with 75.4%.
SSC from Singareni Col High School in 2006 with 76.5%.

PERSONAL DETAILS
Languages Known: English, Hindi and Telugu
Location: India

WORK EXPERIENCE

Associate - Projects
Cognizant India Pvt Ltd, Hyderabad
07/2022 – Present
Key Deliverables
Ensure that data is cleansed, mapped, transformed, and otherwise optimised for storage and use according to business and technical requirements.
Build ADF pipelines, explore pipeline designs, prepare PySpark notebooks, help teammates, fix bugs, deploy ADF pipelines, and work on release pipelines.
Develop and maintain innovative Azure solutions.
Design solutions using Microsoft Azure services and other tools.
Automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.).
Load transformed data into storage and reporting structures in destinations, including data warehouses, high-speed indexes, real-time reporting systems, and analytics applications.

Associate Consultant
Capgemini
05/2021 – 06/2022
Key Deliverables
Evaluated project requirements based on the client's specifications.
Created requirement specifications and detailed processes.
Assessed recommended solutions and researched alternatives.
Entrusted with requirements analysis; analysed files and datasets.
Shouldered the responsibility of ADF pipeline creation and created Scala notebooks in Azure Databricks.
Software Engineer
Vita Technologies
01/2018 – 04/2021
Key Deliverables
Developed and directed software system validation and testing methods.
Directed software programming initiatives; delivered code implementation in Java and designed pages in Angular.
Successfully carried out bug fixing and requirements gathering for the design.
Oversaw the development of documentation.
Worked closely with clients and cross-functional departments to
communicate project statuses and proposals.
ANNEXURE – MAJOR PROJECTS
MDIP (Cloud Migration Project)
Client: PepsiCo (10/2023 – Current) | Role: Azure Data Engineer
Description: The aim of the project is to migrate data from Teradata to an Azure Synapse workspace. As part of the project, data is landed in different layers in ADLS Gen2; Databricks then processes the data and builds the next layers of the data lake. ADF copies the required data to Synapse using PolyBase, and Synapse managed tables are consumed via JDBC connectivity.
The project has two phases:
Phase 1: Created Presto tables for querying and analysing data, promoting it from the raw ADLS path to the ADLS Gold path per business requirements.
Phase 2: Actual development: in Synapse, created external tables, internal tables, and stored procedures that feed MicroStrategy tables and reporting tables.
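Illustrative sketch: a minimal PySpark example of the layer-to-layer promotion described above, as it might run in a Databricks notebook (where spark is the ambient session). The storage account, container names, and column handling are hypothetical placeholders, not the project's actual configuration.

from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 paths; the real storage account and containers differ.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/teradata/sales"
gold_path = "abfss://gold@examplelake.dfs.core.windows.net/sales"

# Read the landed Teradata extract from the raw layer.
raw_df = spark.read.parquet(raw_path)

# Light curation before promotion: normalise column names, stamp the load time.
gold_df = (raw_df
           .toDF(*[c.strip().lower() for c in raw_df.columns])
           .withColumn("load_ts", F.current_timestamp()))

# Write the curated dataset to the Gold layer for the downstream Synapse load.
gold_df.write.mode("overwrite").parquet(gold_path)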
Anheuser-Busch (06/2023 – 09/2023)
Modernization/Migration of Legacy Teradata Data Warehouse to Modern Databricks Unity Catalog
Used the medallion architecture to design and implement the modernised Databricks Unity Catalog.
Worked with a variety of data sources, including Fixed-width flat files, Oracle tables, MSSQL tables, XML, and CSV files, to load data into Databricks Unity
Catalog.
Modified legacy queries to modern Spark SQL queries.
Performed transformations to remove duplicates, handle null values, and convert columns to the correct data types (a sketch follows this list).
Transformed data as per business logic to create actionable insights for decision-makers.
Created schema and modernised bronze, silver, and gold tables.
Automated the data pipeline using Databricks Workflows Jobs.
Performed unit testing of modernized tables to ensure completeness and equivalency with legacy Teradata data warehouse.
Deployed the modernized Unity Catalog code to production using a GitHub Actions workflow.
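Illustrative sketch: a minimal PySpark example of the cleansing pass referenced in the list above, assuming a hypothetical orders table; the catalog, schema, column names, and fill defaults are illustrative only.

from pyspark.sql import functions as F

# Hypothetical bronze-layer Unity Catalog table; the real catalog, schema,
# and column names differ.
bronze = spark.table("example_catalog.bronze.orders")

silver = (bronze
          .dropDuplicates(["order_id"])                       # remove duplicate rows
          .filter(F.col("order_id").isNotNull())              # drop rows missing the key
          .na.fill({"quantity": 0})                           # handle null quantities
          .withColumn("order_date", F.to_date("order_date"))  # string -> date
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Persist the cleansed data as a silver-layer Unity Catalog table.
silver.write.mode("overwrite").saveAsTable("example_catalog.silver.orders")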
MundiPh-Commercial Marketing A (08/2022 – 05/2023)
Mundipharma is a UK pharmaceutical company that generates reports for its customers to support better business growth.
It releases reports for 11 data sources; of these, 6 are API source systems and the rest are SFTP and FTP source systems.
Data is initially fetched from each source as-is into an ADLS location; after some transformations, the dataset is stored in another location.
Reports are then generated from the transformed files.
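Illustrative sketch: a minimal PySpark example of that land-as-is-then-transform pattern, assuming a JSON payload from one of the API sources; the ADLS paths and field names are hypothetical.

from pyspark.sql import functions as F

# Hypothetical ADLS locations for the as-is landing zone and the curated zone.
landing_path = "abfss://landing@examplelake.dfs.core.windows.net/api_source/extract.json"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/api_source/"

# Read the raw API payload exactly as it was landed.
raw = spark.read.json(landing_path)

# Shape the data for reporting, then store it in the second location.
report_ready = (raw
                .select("customer_id", "product", "sales_amount", "report_date")
                .withColumn("report_date", F.to_date("report_date")))

report_ready.write.mode("overwrite").parquet(curated_path)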
UDL - Unilever Data Lake (08/2021 - 07/2022)
As the internal and external data grows exponentially, the traditional methods of storing enormous volumes of data in a database or data warehouse are
no longer sustainable or cost-effective.
A data lake is a newer concept that emerged on the back of big data, where various types of data are stored in their native form within one system, irrespective of type (structured or unstructured) or volume. The most significant advantage is that there is no constraint on the size or variety of the data we can hydrate into the data lake.
Combined with a storage facility and high-performing cloud processing capabilities, a data lake becomes a desirable and cost-effective solution for vast enterprises like Unilever, providing the foundational capability to build complex data, information, and analytics solutions.
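Illustrative sketch: a minimal schema-on-read example of the native-form storage idea described above; the lake paths, file formats, and join key are hypothetical.

# Files sit in the lake in their native formats (CSV, JSON, and so on) and are
# interpreted only when read; all paths below are placeholders.
sales = spark.read.option("header", "true").csv(
    "abfss://lake@examplelake.dfs.core.windows.net/pos/sales.csv")
clicks = spark.read.json(
    "abfss://lake@examplelake.dfs.core.windows.net/web/clickstream.json")

# Structured and semi-structured sources can then be combined in one query.
engagement = (sales.join(clicks, "customer_id")
                   .groupBy("customer_id")
                   .count())
engagement.show()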
DryFruit POS (07/2020 - 03/2021)
Technologies used: Angular, Angular Material, HTML, CSS, Bootstrap, TypeScript.
DryFruit POS is billing software: a complete solution for managing dry fruit billing.
It provides complete access to your dry fruit shop business through a simplified platform, allowing you to speed up the billing process and create appealing invoices to attract more customers.
Moreover, it enables you to manage your inventory so that you can avoid unnecessary losses.