
Gen AI/ML Engineer/ Data Scientist

Name: Priyanka R
Email: priyankar230494@gmail.com
Phone: +1 302-257-8509
LinkedIn: https://www.linkedin.com/in/priyanka-reddy-genai/

PROFESSIONAL SUMMARY:
• 10 years of expertise in data analysis, data science, and business intelligence, specializing in statistical analysis, machine learning, and data engineering to drive actionable insights and AI-driven solutions across industries.
• Advanced programming skills in Python and R, utilizing libraries like Pandas, NumPy, and scikit-learn for data collection, analysis, and model development in Jupyter Notebooks.
• Extensive experience with big data technologies, including Kafka, Spark SQL, PySpark, Hadoop, HDFS, MapReduce, and Hive, for processing large-scale datasets.
• Strong background in machine learning algorithms, including regression, clustering, and customer analytics, to develop predictive models that optimize business strategies (a small illustrative sketch follows this list).
• Conducted complex statistical modeling and data analysis using Python, R, SQL, Hadoop, Spark, and Hive, integrating open-source libraries such as scikit-learn, TensorFlow, Keras, and Matplotlib.
• Built AI chatbots using machine learning and NLP techniques, leveraging Transformer models for sentiment analysis, named entity recognition, and text classification.
• Automated reporting processes using SQL on Google Cloud Platform (GCP) and integrated BI platforms to enhance data accessibility and streamline reporting.
• Deployed AI-driven solutions leveraging OpenAI GPT, cognitive search technologies, and vector databases to enhance data accessibility and intelligence.
• Expert in data manipulation and preparation, applying techniques such as data parsing, regex processing, reindexing, reshaping, and merging datasets for analysis.
• Implemented machine learning techniques such as LDA, Naïve Bayes, Random Forests, Decision Trees, SVM, clustering, and PCA, with a solid understanding of recommender systems.
• Worked with NoSQL databases, including HBase, Cassandra, and MongoDB, for efficient storage and retrieval of large datasets.
• Designed and trained deep learning models using PyTorch and TensorFlow, applying them to various AI-driven applications.
• Expertise in developing and maintaining reports and dashboards using Power BI Desktop, Power BI Service, Power BI Pro, Power BI Mobile, and Tableau. Skilled in creating Power Pivot reports and Power View and Power Map visualizations for executive-level decision-making.
• Expertise with cloud platforms and data tools such as Databricks, AWS S3, Azure Data Lake, and Azure SQL databases for data extraction, transformation, and storage. Proficient in managing ETL processes and handling large datasets.
• Expert in data visualization tools, including Tableau, Power BI, and Matplotlib, with extensive experience in big data tools like Hadoop, Spark, and Hive.
• Streamlined recurring reports using SQL and Python, integrating them with BI platforms for enhanced accessibility.
• Experience with version control and configuration management tools, such as CVS, SVN, and VSS, ensuring code integrity and efficient project collaboration.
• Expertise in creating automated reports, data wrangling, data visualization, and utilizing machine learning techniques for predictive analytics and business forecasting.
• Extensive experience with Software Development Life Cycle (SDLC) methodologies, including Agile, Scrum, UML, Waterfall, and project management methodologies. Experienced with tools like JIRA for tracking stories, epics, and sprints.
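
A minimal sketch of the kind of scikit-learn predictive-modeling workflow summarized above; the dataset path, feature columns, and target are hypothetical placeholders:

    # Minimal sketch of a scikit-learn predictive-modeling workflow.
    # The CSV path and column names below are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("customer_data.csv")   # hypothetical dataset
    X = df.drop(columns=["churned"])        # feature matrix
    y = df["churned"]                       # hypothetical binary target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))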

TECHNICAL SKILLS:
Machine Learning : Linear Regression, Logistic Regression, Naive Bayes, Decision Trees, Random
Forest, SVM, K-Means, KNN, XGBoost, AdaBoost, PCA, Feature Engineering,
Hyperparameter Tuning, NLP, Reinforcement Learning
Deep Learning : CNN, RNN, LSTMs, GRU, Autoencoders, GAN, BERT, GPT-3/4, OpenCV,
Cognitive Search.
ML Frameworks : TensorFlow, Keras, PyTorch, Scikit-learn, LightGBM, XGBoost, FastAI, MLflow,
NLTK, SpaCy
Big Data : Spark, PySpark, Hadoop, HDFS, MapReduce, Hive, Kafka, Airflow, Databricks
Programming : Python, R, SQL, JavaScript, Java, Scala, C++, Bash, Shell.
Data Engineering & ETL : NiFi, Informatica, SSIS, AWS Glue, Azure Data Factory, GCP Dataflow, Talend,
Apache Beam
BI & Visualization : Tableau, Power BI, Looker, Matplotlib, Seaborn, Plotly, Excel, Dash
Cloud Platforms : AWS (S3, Lambda, SageMaker, Redshift, DynamoDB, EC2, EMR), GCP
(BigQuery, Vertex AI, AutoML), Azure (ML, Data Lake, Synapse, CosmosDB)
MLOps & DevOps : Docker, Kubernetes, Jenkins, GitHub, GitLab CI/CD, MLflow, FastAPI, Flask
Databases : SQL Server, Oracle, MongoDB, Snowflake

PROFESSIONAL EXPERIENCE:

Client: Prudential, Newark, NJ Oct 2023 – Present


Role: Gen AI/ML Engineer/Data Scientist

Responsibilities:
• Used Pandas, NumPy, Seaborn, SciPy, Matplotlib, scikit-learn, and NLTK in Python to develop various machine learning algorithms.
• Applied various Artificial Intelligence (AI)/machine learning algorithms and statistical modeling techniques, including decision trees, text analytics, natural language processing, and supervised and unsupervised regression models.
• Delivered solutions that used machine learning, NLP, and other forms of AI such as machine vision; performed data visualization in Python using Elasticsearch, Kibana, and Logstash.
• Wrangled large datasets (acquired and cleaned data) and analyzed trends with Matplotlib visualizations in Python.
• Applied deep learning models, including CNNs and RNNs, for image and speech recognition, incorporating attention mechanisms for enhanced NLP accuracy.
• Integrated Hugging Face models into the LangChain ecosystem for seamless AI-powered applications.
• Collaborated with data engineers and wrote and optimized SQL queries to extract data from SQL tables.
• Managed data storage and processing pipelines in GCP for serving AI and ML services in production, development, and testing using SQL, Spark, Python, and AI VMs.
• Built machine learning pipelines in TensorFlow, scikit-learn, and PyTorch, and implemented predictive models for various applications.
• Performed data cleaning, feature scaling, and feature engineering using the Pandas and NumPy packages in Python, and built models using deep learning frameworks.
• Implemented asynchronous capabilities in FastAPI to handle high-concurrency scenarios, resulting in improved system performance and responsiveness (see the FastAPI sketch after this list).
• Utilized AWS SageMaker to efficiently build, train, and deploy machine learning models.
• Implemented machine learning models for NLP, computer vision, and image/audio generation using NLTK and Hugging Face's Transformers library (see the sentiment/NER sketch after this list).
• Conducted research on language models, developing generative AI experiences with Google Vertex AI and employing RAG techniques for enhanced data analysis.
• Automated data visualization and reporting pipelines using Power BI, Tableau, and Matplotlib.
• Implemented applications using machine learning algorithms and statistical models such as decision trees, text analytics, sentiment analysis, Naive Bayes, logistic regression, and linear regression in Python to determine the accuracy of each model.
• Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded data into HDFS.
• Built and scaled generative AI applications, specifically around frameworks such as LangChain and AzureML.
• Applied deep learning techniques such as CNNs and RNNs for stock price prediction, GANs for synthetic image generation, and autoencoders for movie recommender systems.
• Fine-tuned deep learning models using Hugging Face APIs and frameworks for custom AI solutions.
• Automated ML pipelines with Vertex AI, integrating CI/CD practices for efficient model deployment and monitoring.
• Extracted data from HDFS using Hive and Presto; performed data analysis and feature selection using Spark with Scala, PySpark, and Redshift; and created nonparametric models in Spark.
• Developed LangChain-based AI applications with real-time Hugging Face model integration, optimizing for scalability and flexibility.
• Worked with large language models (LLMs), such as GPT, BLOOM, and LLaMA, for text generation, summarization, and chatbot applications.
• Used OpenAI GPT-3 to automate personalized marketing content generation, leading to increased engagement and conversion rates.
• Created and deployed RESTful web services using FastAPI, integrating data from PostgreSQL, DynamoDB, and S3 buckets.
• Implemented Hugging Face Endpoints to enable serverless inference for large models such as Llama, Mistral, and Falcon within LangChain applications.
• Implemented text classification, tagging, and entity recognition using NLTK, SpaCy, and Stanford NLP, with expertise in word embeddings, BERT, and transformer models.
• Gained hands-on experience with Spark (Core, SQL, Streaming), PySpark, Hadoop, HDFS, and Kafka for scalable data processing.
• Developed deep learning projects using CNNs and RNNs, including movie recommendation systems and stock price prediction.
• Used TensorFlow, scikit-learn, and PyTorch hands-on to develop and deploy end-to-end ML pipelines with MLOps practices.
• Implemented interactive Power BI dashboards for data insights and transitioned complex data from Excel for streamlined analytics.
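
A minimal sketch of the Transformer-based sentiment analysis and named entity recognition referenced above, using Hugging Face pipelines with their default public models (model choices here are illustrative, not the production ones):

    # Minimal sketch: sentiment analysis and NER with Hugging Face pipelines.
    # Default public pipeline models are used; production models would differ.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")
    ner = pipeline("ner", aggregation_strategy="simple")

    text = "The new claims portal from Prudential is fast and easy to use."
    print(sentiment(text))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    print(ner(text))        # grouped entity spans with labels and scores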
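
And a minimal sketch of an asynchronous FastAPI prediction endpoint of the kind described above; the request schema and the predict_score stand-in are hypothetical:

    # Minimal sketch: an async FastAPI endpoint serving model predictions.
    # The Features schema and predict_score are hypothetical placeholders.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        age: int
        balance: float

    def predict_score(features: Features) -> float:
        return 0.5  # placeholder for a trained model's inference call

    @app.post("/predict")
    async def predict(features: Features):
        # async def lets FastAPI interleave many concurrent requests
        return {"score": predict_score(features)}

    # Run with: uvicorn app_module:app  (module name is hypothetical)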

Environment: Python, Scala, TensorFlow, PyTorch, Scikit-learn, Hugging Face, FastAPI, OpenAI GPT-3, Google Vertex AI, AzureML, AWS (S3, DynamoDB, EC2, Lambda), GCP, Redshift, HDFS, Hadoop, Hive, Kafka, Presto, Pandas, NumPy, Matplotlib, Seaborn, Power BI, Tableau, Kibana, Elasticsearch, NLTK, SpaCy, NLP, LLaMA, RAG, LangChain, Docker, Kubernetes, CI/CD, ETL, MapReduce, CNNs, RNNs, GANs, MLOps.

Client: Kaiser Permanente, Washington, D.C. May 2022 – May 2023


Role: AI/ML Engineer

Responsibilities:
• Developed scripts in Python using the scikit-learn, spaCy, Transformers, and TensorFlow machine learning libraries.
• Tested various machine learning algorithms, such as Support Vector Machines, Random Forests, and boosted trees, and concluded that Decision Trees were the champion model.
• Hands-on design and implementation of AI and machine learning algorithms using Python and R.
• Collaborated on Keras-based computer vision projects, including image classification and object detection (a small CNN sketch follows this list).
• Used Eclipse, PyCharm, Xcode, PyScripter, and Sublime Text while developing different applications in Python.
• Performed data cleaning, feature scaling, and feature engineering using the Pandas and NumPy packages in Python, and built models using deep learning frameworks.
• Used Git for version control and Jenkins for Continuous Integration and Continuous Deployment (CI/CD).
• Used Vertex AI's integration with TensorFlow Extended (TFX) to create robust and scalable ML pipelines.
• Worked with deep neural networks, including Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
• Designed and developed natural language processing (NLP) pipelines to enhance search relevance and user experience by integrating semantic search capabilities.
• Used TensorFlow, PyTorch, and Keras to build and train generative models, optimizing model performance through hyperparameter tuning and regularization techniques.
• Performed data cleaning and feature selection using the MLlib package in PySpark and worked with deep learning frameworks such as Caffe and Keras.
• Developed custom chat models with ChatHuggingFace, enabling structured prompt engineering with specialized token formats.
• Participated in the configuration of an on-premises Power BI gateway to refresh datasets for Power BI reports and dashboards.
• Wrote standard and complex T-SQL queries to perform data validation and graph validation, ensuring test results matched expected results based on business requirements.
• Cleaned and processed third-party spending data into maneuverable deliverables in a specific format using Excel macros and Python libraries such as NumPy, SciPy, and Matplotlib.
• Developed and maintained multiple Power BI dashboards, reports, and content packs.
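
A minimal sketch of a Keras image-classification CNN of the kind used in the computer vision work above; the input shape and class count are illustrative:

    # Minimal sketch: a small Keras CNN for image classification.
    # Input shape and number of classes are illustrative placeholders.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),  # 10 illustrative classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()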

Environment: Python, R, TensorFlow, PyTorch, Keras, Scikit-learn, MLlib (PySpark), spaCy, NumPy, Pandas, SciPy, Matplotlib, Seaborn, Power BI, SQL, T-SQL, Access, Jenkins, Git, Vertex AI, Hugging Face, FastAPI, GCP, AWS, Azure, Eclipse, PyCharm, Xcode, CI/CD, NLP Pipelines, CNNs, RNNs.

Client: Prescient, Hyderabad, India Jan 2018 – Nov 2021


Role: Data Scientist/ Data Analyst

Responsibilities:
• Performed data analysis and data profiling and worked on data quality rules to ensure accurate data reporting.
• Utilized Python in Jupyter Notebook for data profiling after pulling files from AWS S3 to assess and analyze data quality.
• Pulled data quality metrics as part of a retail data transformation for ongoing data quality monitoring using Tableau and Power BI.
• Analyzed and profiled data using Python and SQL for consistent data quality monitoring purposes.
• Performed data analysis using Hive to retrieve data from the Hadoop cluster and SQL to retrieve data from the Oracle database, and used ETL for data transformation.
• Published dashboards and reports on the Power BI Service so that end users could view the data.
• Participated in data acquisition with the data engineering team to extract historical and real-time data using Hadoop MapReduce and HDFS.
• Performed data enrichment jobs to handle missing values, normalize data, and select features using HiveQL.
• Utilized Databricks notebooks to develop and execute complex Spark SQL queries for efficient data analysis and reporting (a profiling sketch follows this list).
• Involved in developing stored procedures to test ETL loads per batch and provided a performance-optimized solution to eliminate duplicate records.
• Used Redshift, PostgreSQL, SQL Workbench, and Snowflake querying tools to pull and profile data for better data understanding.
• Used Power BI Power Pivot to develop data analysis prototypes, and used Power View and Power Map to visualize reports.
• Developed a MapReduce pipeline for feature extraction using Hive and Pig.
• Moved data across multiple platforms and technologies, showcasing adaptability to varying data environments.
• Pulled data from AWS S3 buckets to build data quality checks and perform data profiling for data quality monitoring.
• Created user stories and tracked issues using JIRA, ensuring alignment with agile methodologies and sprint goals.
• Assisted in building backend pipelines for Tableau dashboards to ensure seamless data connectivity, including modifying old Python scripts from GitHub.
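
A minimal sketch of the kind of Spark SQL data-quality profiling described above, as it might run in a Databricks notebook; the table name is a hypothetical placeholder:

    # Minimal sketch: profiling per-column null rates with PySpark.
    # The table name below is a hypothetical placeholder.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-profiling").getOrCreate()
    df = spark.table("retail.transactions")  # hypothetical table

    total = df.count()
    null_rates = df.select([
        (F.sum(F.col(c).isNull().cast("int")) / total).alias(c)
        for c in df.columns
    ])
    null_rates.show()  # one row: fraction of null values per column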

Environment: Python, Jupyter Notebook, AWS S3, Tableau, Power BI, SQL, Hive, Hadoop, Oracle, ETL, MapReduce, HDFS, HiveQL, Databricks Notebooks, Spark SQL, Redshift, PostgreSQL, Snowflake, Pig, JIRA, Agile, GitHub.

Client: Sutherland, Hyderabad, India May 2014 - Nov 2017


Role: Big Data Engineer

Responsibilities:
• Developed a Spark Streaming model that receives transactional data from multiple sources, creates batches, and processes them through an already-trained fraud detection model while capturing error records (a streaming sketch follows this list).
• Extensive knowledge of data transformations, mapping, cleansing, monitoring, debugging, performance tuning, and troubleshooting of Hadoop clusters.
• Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala.
• Developed DDL and DML scripts in SQL and HQL for creating tables and analyzing the data in RDBMS and Hive.
• Researched and recommended a suitable technology stack for Hadoop migration, considering the current enterprise architecture.
• Worked on an ETL process to clean and load large data extracts from several websites (JSON/CSV files) into SQL Server.
• Collected and aggregated large amounts of log data, staging the data in HDFS for further analysis. Used Sqoop to transfer data between relational databases and Hadoop.
• Wrote Hive queries for analyzing data in the Hive warehouse using Hive Query Language (HQL).
• Analyzed data stored in S3 buckets using SQL and PySpark, stored the processed data in Redshift, and validated data sets by implementing Spark components.
• Worked as an ETL and Tableau developer, heavily involved in designing, developing, and debugging ETL mappings using the Informatica Designer tool.
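
A minimal sketch of a streaming job of the kind described above, written here with Spark Structured Streaming rather than the original DStream API; the Kafka topic, broker address, and scoring step are hypothetical, and the Kafka source requires the spark-sql-kafka connector package:

    # Minimal sketch: reading transactional events from Kafka and scoring
    # micro-batches. Topic, broker, and scoring logic are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("fraud-stream").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "transactions")
              .load()
              .select(F.col("value").cast("string").alias("txn_json")))

    def score_batch(batch_df, batch_id):
        # placeholder: apply a pre-trained fraud detection model here
        batch_df.show(5, truncate=False)

    query = events.writeStream.foreachBatch(score_batch).start()
    query.awaitTermination()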

Environment: Spark, Hive, Python, HDFS, Sqoop, Tableau, HBase, Scala, MySQL, Impala, AWS, S3, EC2, Redshift, Informatica

Education:
• Bachelor's in Computer Science Engineering from VITS, India - 2014
• Master's in IT Management (Information Technology) from Dallas Baptist University, Dallas, TX - 2023
