Contact:
Email: pradeeprajkvr@gmail.com
PRADEEP.T Phone: +91 8086877459
LinkedIn: linkedin.com/in/pradeep-t-ab670888
Generative AI Specialist GitHub: github.com/pradeepdev-1995
PyPi : https://pypi.org/user/pradeep_dev/
Location: Palakkad, Kerala, India
PROFESSIONAL SUMMARY:
PROFESSIONAL EXPERIENCE:
AI Engineer with 5.6 years of experience in
Generative AI, Machine Learning, Agentic AI and Specialist - Product Engineering, LTIMindtree
Data Science. Recognized for developing cutting- Bangalore, India | 2020 - Present
edge AI solutions and actively contributing to the AI Contributed to the development of a Generative AI-powered
community through mentorship, publications, and copilot assistant, enhancing productivity across the data-to-
decision process using Langchain and llamaindex.
open-source projects. Skilled in leveraging advanced Implemented Retrieval-Augmented Generation (RAG)
models and techniques for building scalable AI techniques, achieving an accuracy improvement of up to
systems. Known for a strong focus on innovation 95% for user query responses.
and collaboration. Fine-tuned various LLMs (e.g., LLAMA, GPT models) using
PEFT methods like LORA, QLORA, and DPO, optimizing
CORE SKILLS: model performance for diverse use cases.
Generative AI & LLMs: Expertise in designing, fine- Developed semantic caching solutions using Redis Stack
tuning, and deploying advanced LLMs (GPT-3.5, GPT- Server, reducing query latency by 50% and lowering LLM
4, LLAMA, Mistral). Skilled in advanced Retrieval- inference costs.
Augmented Generation (RAG), Multiple types of Integrated safety measures (Nemo Guardrails) to mitigate
Embeddings and Vector Databases and Snowflake risks of prompt injections and off-topic responses, ensuring
robust AI interactions.
Prompt Engineering & Fine-Tuning: Proficient in
creating optimized prompts and fine-tuning LLMs Data Analyst, Lymbyc
using techniques like dynamic prompting, few-shot Bangalore, India | 2019 - 2020
Collaborated on the development of Leni, a business
learning, and PEFT methods (LORA, QLORA, DPO,
intelligence system enabling strategic insights through
VERA).
natural language queries.
Machine Learning & Deep Learning: Experienced in Designed and trained deep learning models (LSTM, SVM)
building and optimizing ML & DL (RNNs, LSTMs) for text classification, with a focus on NLP techniques such
using Python frameworks (Scikit-learn, TensorFlow, as NER, stemming, and lemmatization, achieving up to 93%
Keras), Model Deployment and implementing end-to- model accuracy.
end machine learning solutions, including Model Implemented query-to-MQL format conversion using NLP
training, Validation, and deployment pipelines for libraries (Spacy, NLTK), streamlining the process of
complex problems using MLOPS. translating user queries into structured outputs.
Agentic AI: Skilled in developing and applying Agentic
KEY PROJECTS:
AI techniques for decision-making using Langgraph &
User Guide Q&A System with RAG:
CrewAI.
Developed a robust question-answering system
Natural Language Processing: Expertise in NLP tasks leveraging Retrieval-Augmented Generation (RAG) with
(text classification, NER, sentiment analysis) using hybrid search techniques.
libraries like NLTK, Spacy, and Hugging Face Integrated multiple vector databases like Chroma,
Transformers. Proficient in context-independent and Pinecone, and PostgreSQL along with vector libraries
context-dependent embeddings (Word2Vec, GloVe, (Annoy, Faiss) for high-accuracy retrieval.
BERT, RoBERTa). Achieved a response accuracy of 92-95% by
Data Analysis & Visualization: Skilled in data implementing advanced RAG modules such as Metadata
Filtering, Query Expansion, and Hypothetical Document
wrangling, EDA, and statistical analysis using Pandas,
Embeddings (HyDE).
NumPy, SciPy. Proficient in data visualization with
Conducted research on Retrieval Interleaved Generation
Matplotlib, Seaborn, and Plotly. (RIG) to enhance system robustness and accuracy in
DevOps & CI/CD : Docker,Kubernets , Github Actions complex user queries.
and JIRA
Agentic AI-based Business Advisor for Strategic Trained a deep learning model for text classification
Decision-Making using OODA framework: using an LSTM layer within the Keras framework
Built an autonomous business advisor using modern Applied both context-independent (Word2Vec, Skip-
agentic frameworks like Langgraph, CrewAI, and gram, CBOW, GloVe) and context-dependent
Langchain. embeddings (BERT, FastText) for NLP tasks, enhancing
Implemented advanced segmentation and churn analysis
model performance by improving its understanding of
using clustering techniques (K-Means, DBSCAN) to
semantic nuances.
provide strategic insights for business optimization.
Utilized dynamic prompting strategies (Chain of Thought,
Semantic Caching for Generative AI Applications:
Tree of Thought) to enhance the decision-making
capabilities of the advisor, addressing high cardinality
Developed a caching layer using Redis Stack Server,
business data scenarios. tailored for Generative AI applications that rely on
Guardrails for Enhanced AI TRUST and Safety: frequent LLM queries.
Implemented robust AI safety mechanisms using Nemo The semantic caching mechanism reduced redundant
Guardrails to prevent prompt injections, jailbreaking, and LLM calls by approximately 50%, leading to faster
off-topic questions. response times and significant cost savings.
Conducted research and integration of Cortex Guard and Integrated the caching solution with existing RAG
Langkit, focusing on improving the safety and compliance modules, demonstrating its effectiveness in reducing
of AI responses in enterprise environments. inference latency without compromising accuracy.
Developed a comprehensive Ethical AI Trust and Safety
framework for evaluating the effectiveness of guardrails,
PATENTS
reducing instances of unauthorized outputs by 90%.
Implemented Responsible And Ethical AI framework from METHOD AND SYSTEM FOR PROCESSING
scratch and practiced across the organization ANALYTICAL QUERIES TO EXTRACT BUSINESS
Enhanced system reliability in high-risk deployments,
INSIGHTS AND SUPPORT DECISION-MAKING FROM
making the AI models safer and more aligned with user
ENTERPRISE DATA USING GENAI AGENTS - Patent
expectations
Filed
Advanced LLM Fine-Tuning & Optimization:
METHOD AND SYSTEM FOR PROCESSING NATURAL
Fine-tuned various LLMs (GPT-3.5, GPT-4, LLAMA,
Mistral) using Parameter-Efficient Fine-Tuning (PEFT) LANGUAGE QUERIES FOR GENERATING
techniques like LORA, QLORA, and DPO for a narrative ANALYTICAL INSIGHTS USING LLM - Patent Filed
generation use case. EDUCATIONAL BACKGROUND
Achieved an accuracy improvement of 90-92% by
implementing Key-Value caching, quantization, and other M.Tech in Computational Linguistics
inference speed optimization techniques (Flash Attention, Kerala Technological University | 2017 - 2019
DeepSpeed, VLLM). B.Tech in Computer Science
Conducted extensive experiments with different fine- Cochin University of Science and Technology | 2012 -
tuning strategies (VERA, DoRA, ORPO) to enhance the 2016
models’ performance across diverse tasks. CERTIFICATIONS
Demonstrated substantial reduction in model inference
time and resource utilization, improving scalability and AWS Machine Learning (Coursera)
user response time. Data-Driven Decisions with Power BI (Coursera)
Database Experience: Foundations: Data, Data, Everywhere
Proficient in SQL and NoSQL databases, including Google Generative AI
PostgreSQL, MongoDB, Snowflake, and others.
Cloud Platforms:
PUBLICATIONS & OPEN-SOURCE CONTRIBUTIONS:
Experienced in working with AWS and Azure for deploying Authored articles on Analytics Vidhya, Medium, and
and managing scalable applications. Fosfor, focusing on advanced AI techniques.
DevOps & CI/CD:
Hands-on experience with CI/CD pipelines using tools like Developed and published the 'databalancer' Python
Docker, Kubernetes, JIRA, GITHUB ACTIONS and other package on PyPI, achieving over 110,00 downloads
containerization and orchestration platforms to globally.
streamline development and deployment workflows. AWARDS & ACHIEVEMENTS:
Encoder Models Transfer Learning
Implemented transfer learning on various encoder Best AI Project Award: Recognized for developing a
models, including BERT, Albert, RoBERTa, and DistilBERT, state-of-the-art NLP to NoSQL query conversion tool
utilizing techniques such as layer freezing and layer during M.Tech.
addition for a user query intent classification problem, Top Contributor Award: Honored for AI mentorship and
achieving an accuracy rate of approximately 97% community training initiatives across multiple
institutions.