
Lingavasan Suresh Kumar

Data Engineer · AI/ML Engineer · LLM Systems · AI Researcher · Data Scientist


📍 Tempe, AZ  |  📞 +1 (480) 819-2073  |  📧 lsuresh4@asu.edu

LinkedIn GitHub Email


🙋‍♂️ About

I build reliable data systems and production-grade AI that hold up when the environment is messy: incomplete inputs, shifting requirements, compute budgets, latency targets, and stakeholders who need clarity—not complexity.

My identity is shaped by two instincts:

  • 🏗️ Engineering discipline — data contracts, observability, quality gates, reproducibility, secure-by-default architecture.
  • 🔬 Research curiosity — building systems that learn, reason, adapt; especially agentic LLM behavior, long-horizon workflows, and memory under constraints.

I operate like a manager even when the title isn't explicit: I translate ambiguity into a plan, build alignment, define success metrics, unblock teams, and deliver outcomes with clean technical ownership.

What I'm optimizing for

  • Clarity before execution — Problem definition, constraints, success metrics, decision boundaries
  • Systems that scale — Reusable pipelines, versioned data, automation over one-off scripts
  • Operational excellence — Observability, incident playbooks, failure-mode thinking, QA
  • Stakeholder trust — Results → decisions, not dashboards nobody uses
  • Strong interfaces — APIs, data contracts, modular components, documented ownership

Core Interests

  • 🤖 AGI Foundations — Practical building blocks for general-purpose intelligence: memory, reasoning stability, alignment, long-horizon planning, tool use, and resource-aware decision-making (latency, cost, context limits).
  • 🧠 Agentic LLM Systems — Memory architectures, retrieval policies, evaluation harnesses, and controllability for multi-step agents that stay consistent, grounded, and auditable.
  • 🔀 AI × Data Engineering — Production stacks where pipelines feed models and models drive products—with strong data quality, observability, and measurable reliability end-to-end.

💼 Experience

Data Engineer — Arizona State University (ASU), Tempe, AZ

Nov 2024 – Present

Operating at the intersection of data reliability and mission-critical operations, focusing on pipelines and systems where correctness, security, and repeatability matter.

Technical Focus

  • Cloud data systems on AWS (Redshift, Lambda): warehousing, secure access design, operational automation.
  • Data quality and trust: validation gates (Great Expectations), reconciliation logic, systematic error detection.
  • Engineering robustness: diagnostics for distributed tasks (Python + C++), instrumentation, rapid triage.

Key Contributions

  • 🏛️ Governed pipeline architecture — Designed Airflow- and dbt-orchestrated warehouse pipelines with clear inputs/outputs, stable SQL schemas, and strict IAM permission boundaries. Prioritized secure-by-default designs: OAuth/SAML-enforced access, traceable execution, auditable flows.
  • Data quality as a system — Implemented Great Expectations validation and discrepancy detection to prevent silent failures. Built repeatable reconciliation checks so downstream consumers trust the dataset without manual verification loops.
  • 🚦 Operational readiness — Developed automated diagnostics (Python + C++) with PyTest/UnitTest coverage to reduce time-to-detect and time-to-recover. Containerized services with Docker; designed CI/CD pipeline workflows that anticipate failure modes (partial loads, schema drift, missing partitions, delayed upstream jobs).
  • 🔗 API-driven integrations — Exposed pipeline health and data-quality endpoints via FastAPI-based RESTful APIs consumed by downstream monitoring dashboards.
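The validation-gate pattern above can be sketched in a few lines. This is a library-free toy — the production pipeline uses Great Expectations suites, and the checks and column names here are purely illustrative:

```python
# Minimal sketch of a "validation gate": every check must pass before a
# batch is published downstream. Illustrative only -- the production
# pipeline uses Great Expectations expectation suites instead.

def check_not_null(rows, column):
    """Fail if any row is missing a value for `column`."""
    return all(row.get(column) is not None for row in rows)

def check_row_count(rows, minimum):
    """Fail on suspiciously small loads (e.g. a partial upstream dump)."""
    return len(rows) >= minimum

def validate_batch(rows):
    """Run all gates; return (passed, list of failed gate names)."""
    gates = {
        "id_not_null": lambda: check_not_null(rows, "id"),
        "min_rows": lambda: check_row_count(rows, 2),
    }
    failures = [name for name, gate in gates.items() if not gate()]
    return (not failures, failures)

batch = [{"id": 1, "value": 10}, {"id": None, "value": 12}]
ok, failed = validate_batch(batch)
```

The point of the gate is that a failure blocks publication and names the broken expectation, rather than letting a silent defect flow downstream.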

Key Soft Skills

🗣️ Stakeholder communication — Translated complex data failures and pipeline behavior into plain language for non-technical partners and research leads.  |  🎯 Ownership & accountability — End-to-end responsibility from design to on-call reliability with no handoff gaps.  |  🧩 Problem-solving under ambiguity — Diagnosed undocumented systems and missing data contracts from scratch; built clarity before building solutions.  |  🔬 Attention to detail — Zero-tolerance approach to silent data failures: validation-first design as a habit, not a checklist.

Python C++ SQL FastAPI RESTful APIs AWS Redshift AWS Lambda Airflow dbt Great Expectations PyTest UnitTest Docker CI/CD Git OAuth IAM Data Governance


Assistant Content & SEO Manager — NHL, Sportskeeda (India)

Dec 2023 – Jul 2024

A hybrid role combining analytics, performance strategy, forecasting, and operational execution with direct impact on business outcomes.

Key Contributions

  • 📊 KPI ownership — Managed performance analytics across editorial, product, growth, and operations stakeholders using Power BI, Tableau, and Google Analytics dashboards. Translated metrics into action: not "what happened," but "what to do next."
  • 📈 Forecasting & planning — Built time-series forecasting models (Python · Pandas · NumPy · SciPy) and scenario planning frameworks for quarterly planning. Applied statistical modeling and causal inference to set measurable guardrails: target setting, risk ranges, resource allocation logic.
  • ⚙️ Automation mindset — Automated recurring analysis and reporting workflows (Python + SQL) to reduce manual cycles, improve consistency, and surface insights via D3.js-powered interactive reports.
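The forecast-plus-guardrails pattern can be illustrated with a minimal, stdlib-only sketch. The production models used Pandas/NumPy/SciPy; the smoothing constant and risk band below are arbitrary examples:

```python
# Toy simple-exponential-smoothing forecast with a naive risk range,
# illustrating the "target + guardrails" pattern described above.
# Stdlib-only; alpha and the band width are illustrative choices.

def ses_forecast(series, alpha=0.5):
    """One-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def risk_range(series, alpha=0.5, spread=1.0):
    """Point forecast plus a +/- band scaled by mean absolute step change."""
    point = ses_forecast(series, alpha)
    dev = sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)
    return point - spread * dev, point, point + spread * dev

weekly_pageviews = [100, 110, 105, 115, 120]
lo, point, hi = risk_range(weekly_pageviews)
```

The band is what turns a forecast into a planning tool: targets are set at the point estimate, and the lo/hi range drives risk discussion and resource buffers.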

Key Soft Skills

📢 Data storytelling — Turned raw metrics into narratives that moved editorial and business decisions, not just filled dashboards.  |  🤝 Cross-functional influence — Collaborated with editorial, product, and growth teams without formal authority; built buy-in through data credibility.  |  🧠 Strategic thinking — Connected weekly KPIs to quarterly business outcomes and surfaced trends before they became problems.  |  ⏱️ Deadline-driven execution — Delivered consistent reporting cycles in a fast-paced, high-volume sports media environment.

Python SQL Pandas NumPy SciPy Power BI Tableau Google Analytics D3.js Time-Series Forecasting Statistical Modeling Causal Inference


AI Prompt Engineer (Freelance) — Scale AI (Remote, USA)

Oct 2023 – Jan 2024

A freelance role producing high-quality prompt tasks for SFT and RLHF workflows, which strengthened intuition for model behavior under ambiguity and for common failure patterns (hallucination, instruction drift).

  • Produced high volumes of prompt tasks with consistent structure, clarity, and evaluability using OpenAI models and LangChain-based evaluation pipelines.
  • Treated prompts like interfaces: inputs, constraints, expected outputs, and test cases — applying Ragas-style evaluation criteria to measure response quality and alignment.
  • Designed tasks to surface edge cases and reasoning breakdowns, not just happy-path outputs; contributed to prompt & context management frameworks for multi-turn reliability.
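The "prompts as interfaces" idea can be sketched as a task object whose constraints double as test cases. The field names below are illustrative, not Scale AI's actual task schema:

```python
# Sketch of "prompts as interfaces": each task declares its instruction
# and constraints so a response can be scored mechanically.
# Field names are illustrative, not an actual vendor task schema.
from dataclasses import dataclass, field

@dataclass
class PromptTask:
    instruction: str
    constraints: list = field(default_factory=list)  # predicates on the output

    def score(self, output: str) -> bool:
        """A response passes only if every constraint holds."""
        return all(check(output) for check in self.constraints)

task = PromptTask(
    instruction="Summarize the article in one sentence.",
    constraints=[
        lambda out: out.count(".") <= 1,          # one sentence
        lambda out: len(out.split()) <= 30,       # concise
        lambda out: "as an AI" not in out,        # no meta filler
    ],
)

good = task.score("The study links sleep loss to reduced recall.")
bad = task.score("First point. Second point. Third point.")
```

Encoding constraints as predicates is what makes high-volume task production consistent: evaluability is designed in, not judged after the fact.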

Key Soft Skills

🔍 Critical & adversarial thinking — Actively sought failure modes, edge cases, and subtle misalignments rather than accepting plausible-sounding outputs.  |  📐 Precision & consistency — Maintained uniform task structure across high-volume output under time pressure.  |  🕰️ Async self-management — Delivered independently without oversight in a fully remote freelance context.  |  💡 AI product intuition — Developed deep sensitivity to how model behavior shifts with instruction phrasing, context length, and ambiguity.

OpenAI Models Prompt & Context Management RLHF SFT LangChain Ragas Python


First ML/AI Engineer — Uniqlabs (Develup), Bangalore

Sep 2021 – Nov 2023

Joined as the first (founding) ML/AI engineer to build the AI layer of an early-stage job portal — comparable in mission to today's AI-powered job matching and application platforms such as Jobright and Simplify. Designed and shipped models and intelligent systems from scratch: no existing ML codebase, no prior AI team, greenfield from day one.

Key Contributions

  • 🔍 Retrieval/ranking improvements (BERT + Transformers) — Fine-tuned Hugging Face Transformer models (PyTorch · TensorFlow · Keras) for job-candidate relevance ranking; applied NLTK and spaCy for text preprocessing. Improved retrieval quality systematically: isolate error types, target data quality, validate metrics (Ragas/custom harnesses) after each change.
  • 🔄 ETL/ELT pipelines for ML workloads — Built scalable Python + Apache Spark pipelines ingesting from PostgreSQL/MySQL and MongoDB, reducing latency and improving throughput to keep model data fresh and production-ready. Exposed processed data through FastAPI RESTful APIs.
  • 🎯 Recommendation / skill-gap systems — Designed RAG-enhanced job recommendation and skill-gap analysis systems using LangChain and vector databases (Pinecone), mapping user profiles → role requirements → personalized learning paths. Applied scikit-learn and XGBoost for feature-based baseline classifiers and ranking models; focused on actionable signals over vanity metrics.
  • 🐳 Production integration — Packaged models with Docker, tracked experiments with MLflow, and deployed via microservices architecture on Kubernetes with CI/CD pipelines.
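As a rough illustration of the relevance-ranking loop, here is a toy bag-of-words cosine ranker. The production system fine-tuned BERT-family models; this sketch shows only the score-and-rank skeleton:

```python
# Toy job-candidate relevance ranking via bag-of-words cosine similarity.
# The production ranker used fine-tuned Transformers; this only
# illustrates the "score, rank, inspect errors" loop described above.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_jobs(profile: str, jobs: dict) -> list:
    """Return job ids sorted by similarity to the candidate profile."""
    p = Counter(profile.lower().split())
    scored = {jid: cosine(p, Counter(text.lower().split()))
              for jid, text in jobs.items()}
    return sorted(scored, key=scored.get, reverse=True)

jobs = {
    "ml": "machine learning engineer python pytorch",
    "fe": "frontend engineer react css",
}
ranking = rank_jobs("python machine learning", jobs)
```

In practice the value came less from the scorer than from the loop around it: inspect the worst-ranked pairs, classify the error type, fix the data, and re-validate metrics.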

Key Soft Skills

🏗️ Founding team mindset — Built from zero with minimal resources: made early architectural decisions that shaped the product's AI trajectory.  |  🚀 Builder's autonomy — Owned the full ML lifecycle — problem framing, data, modeling, evaluation, deployment — without a team to delegate to.  |  🤝 Cross-functional collaboration — Partnered directly with product, design, and engineering to translate user needs into model behavior and measurable product outcomes.  |  📣 Technical leadership — Communicated model limitations, tradeoffs, and roadmap priorities to non-technical founders and stakeholders.

Python PyTorch TensorFlow Keras scikit-learn XGBoost Hugging Face BERT Transformers LangChain RAG Pinecone NLTK spaCy FastAPI RESTful APIs Microservices PostgreSQL MySQL MongoDB Apache Spark ETL/ELT Docker Kubernetes MLflow CI/CD


🔬 Research (Thesis)

MemoryArchitect: Policy-Driven Memory Governance for LLM Agent Systems

Arizona State University (ASU) · Jul 2025 – Present

Most agentic LLM systems fail quietly in long-horizon tasks: either the full chat history is stuffed into the prompt until the context window breaks, or naïve RAG retrieves similar but not useful content, silently averaging conflicting facts into hallucinations. The real problem is not "retrieve more." It is governing what enters memory, how long it survives, what gets retrieved, and how contradictions are resolved—all under hard token budgets, without retraining the model.

What I built

MemoryArchitect is a model-agnostic, external memory governance layer for LLM agents that treats memory as a constrained, auditable resource rather than a passive log. The same governance engine can harden many different agent systems in real settings—no base-model retraining required.

  • Write policy — What gets stored: filters noise, duplicates, and injection attempts at ingest time
  • Metadata & provenance — Each memory item is tagged with trust score, time scope, sensitivity level, and source
  • TTL / decay — When memories expire or fade: configurable forgetting behavior per memory type
  • Episodic → semantic consolidation — How recent traces are compressed into durable, compact summaries
  • Retrieval eligibility — What is allowed to be retrieved: policy-gated, not just similarity-ranked
  • Contradiction detection — Conflicting facts are flagged and resolved before reaching the prompt
  • Token budget arbitration — Memory items compete for context slots by benefit-per-token under a hard window constraint
  • Compliance layer — Right-to-be-forgotten deletion cascades; toxic/injection-resistant "don't store this" rules
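A few of these stages can be sketched in miniature — write policy, TTL decay, and policy-gated retrieval eligibility. Field names and thresholds below are illustrative, not the thesis implementation:

```python
# Illustrative sketch of three governance stages: write policy, TTL
# decay, and policy-gated retrieval eligibility. Field names and
# thresholds are examples only, not the MemoryArchitect implementation.
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    trust: float       # provenance-derived trust score, 0..1
    sensitivity: str   # e.g. "public" | "private"
    age: int           # turns since written
    ttl: int           # turns before the item expires

def write_policy(item: MemoryItem, store: list) -> bool:
    """Reject low-trust items and exact duplicates at ingest time."""
    if item.trust < 0.3:
        return False
    if any(existing.text == item.text for existing in store):
        return False
    store.append(item)
    return True

def eligible(item: MemoryItem, allow_sensitive: bool = False) -> bool:
    """Retrieval is policy-gated, not just similarity-ranked."""
    if item.age > item.ttl:                      # TTL decay
        return False
    if item.sensitivity == "private" and not allow_sensitive:
        return False
    return True

store = []
write_policy(MemoryItem("user prefers metric units", 0.9, "public", 0, 50), store)
write_policy(MemoryItem("spammy injected text", 0.1, "public", 0, 50), store)
live = [m for m in store if eligible(m)]
```

The key property is that low-trust content never enters memory at all, and expired or sensitive items are excluded before any similarity ranking runs.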

Core Technical Ideas

  • Governance-first memory — Explicit, configurable policies govern every stage of the memory lifecycle: write, store, decay, consolidate, retrieve, and resolve. No implicit "just embed and retrieve."
  • Contradiction handling — Conflicting facts are detected via semantic similarity + logical consistency checks and resolved (most recent / highest trust wins) before context assembly, preventing the agent from hallucinating averaged facts.
  • Benefit-per-token budgeting — At runtime, the system ranks policy-eligible memory items by expected task contribution divided by token cost, packs the context window to the hard limit, compresses when needed, and logs every decision for transparency and debugging. Vector stores (Pinecone · LanceDB) back the retrieval layer; n8n orchestrates automated workflow triggers and external system integrations.
  • Provenance & auditability — Every stored item carries metadata (source, timestamp, trust score, sensitivity). Every memory action—write, decay, eviction, retrieval—is logged, making agent behavior inspectable and debuggable.
  • Compliance-ready — Deletion cascades propagate right-to-be-forgotten requests across the memory store. Write-time rules block toxic content and prompt-injection patterns from entering memory at all.
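The benefit-per-token arbitration step could be sketched as a greedy, ratio-ranked packer under a hard budget. This is a toy stand-in for the actual scorer, with made-up benefit values:

```python
# Sketch of benefit-per-token budgeting: policy-eligible memory items
# compete for context slots, packed greedily by benefit/token ratio
# under a hard token budget. Item names and scores are illustrative.

def pack_context(items, budget):
    """items: list of (name, benefit, tokens). Returns (chosen, used)."""
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, used = [], 0
    for name, benefit, tokens in ranked:
        if used + tokens <= budget:      # hard window constraint
            chosen.append(name)
            used += tokens
    return chosen, used

items = [
    ("user_goal",      9.0, 40),   # high benefit per token
    ("old_smalltalk",  1.0, 60),   # low benefit per token
    ("recent_summary", 6.0, 50),
]
chosen, used = pack_context(items, budget=100)
```

Greedy ratio packing is only an approximation of the knapsack optimum, but it is fast, deterministic, and easy to log per-decision — which matters for auditability.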

Initial Evaluation Results

Our governance engine outperformed most existing memory baselines across long-horizon recall, persona consistency, and contradiction rate. Crucially, context window utilization stayed below 47% at all times, even across arbitrarily long conversations — evidence that tighter memory governance, not larger context windows, is the path to cost- and latency-efficient agentic systems that sustain coherent long-term conversations.

Impact

  • 💰 Cost & latency — Token usage held below 47% of the context window at all times, reducing per-turn cost and inference latency
  • 🧠 Long-term conversation quality — Episodic consolidation + contradiction resolution preserve coherent conversation quality across extended multi-turn sessions
  • 🎯 Persona consistency — Policy-enforced identity traces prevent agent drift over extended, multi-session conversations
  • 🚫 Hallucination reduction — Contradiction resolution stops conflicting facts from reaching the model: no more "averaged" hallucinations
  • 🔒 Deployable compliance — Right-to-be-forgotten deletion and injection-resistant write rules, designed for production deployment
  • 🔌 Model-agnostic — Governance layer is fully decoupled from the base model: the same engine can harden any LLM agent stack

AGI relevance — Scalable general-purpose intelligence requires memory that is controlled, auditable, and useful. Agents need governance policies—not retrieval heuristics—for reliable recall. Resource allocation under hard constraints is a foundational problem for any long-horizon reasoning system.

Python LangChain LangGraph OpenAI Models Hugging Face Transformers RAG Pinecone LanceDB Ragas n8n MLflow Vector Databases Prompt & Context Management


🚀 Selected Projects

High-Performance Video Prediction & Profiling Benchmark

Arizona State University (ASU) · Jun 2024 – Present

  • Implemented a transformer-based forecasting approach (PyTorch + Hugging Face) inspired by iVideoGPT for 20+ frame prediction, with OpenCV-based frame preprocessing.
  • Developed a modular evaluation and profiling framework (NumPy · SciPy · Pandas · Matplotlib) measuring temporal coherence, error accumulation over rollouts, and system performance constraints.
  • Robustness testing — Evaluated model stability through causal-inference-style interventions (shift, removal, duplication) to test whether learned representations are stable or brittle.

Real systems must tolerate distribution shifts, missing inputs, and inconsistent streams — not just work on a dataset.
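The intervention idea can be sketched with a trivial stand-in model. The real evaluation runs against the PyTorch predictor, but the probe structure — perturb, re-score, compare — is the same:

```python
# Sketch of intervention-style robustness checks on a frame sequence:
# apply shift / removal / duplication perturbations and compare a
# simple rollout error. Pure-Python stand-in for the real PyTorch setup.

def rollout_error(frames):
    """Toy 'model': predict each frame as the previous one; score MAE."""
    errs = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return sum(errs) / len(errs)

def interventions(frames):
    """Perturbed variants used to probe stability vs. brittleness."""
    return {
        "shift": [f + 1 for f in frames],                   # constant offset
        "removal": frames[:2] + frames[3:],                 # drop a frame
        "duplication": frames[:2] + [frames[2]] + frames[2:],
    }

frames = [0, 1, 2, 3, 4]
base = rollout_error(frames)
deltas = {name: rollout_error(v) - base
          for name, v in interventions(frames).items()}
```

A representation that is invariant to a constant shift should show a near-zero delta there, while removal and duplication expose how sharply temporal error accumulates.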

Python PyTorch Hugging Face Transformers OpenCV NumPy SciPy Pandas Matplotlib Causal Inference


📄 Publication

Multimodal AI-Based Workload Relocation Strategy for Reducing Carbon Emissions in Multi-Cloud Environments

G. Venugopal, P. K. Badiga, L. S. Kumar and R. R. Datpally, "Multimodal AI-Based Workload Relocation Strategy for Reducing Carbon Emissions in Multi-Cloud Environments," 2025 2nd International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF), Chennai, India, 2025, pp. 1–6, doi: 10.1109/ICECONF65644.2025.11379581.

IEEE Xplore · 2025 — 2nd ICECONF, Chennai, India

Keywords: Multimodal AI · workload relocation · carbon emissions · multi-cloud environments · reinforcement learning · deep learning · energy efficiency · PyTorch · Hugging Face Transformers · Ray RLlib

Problem — Multi-cloud workloads are scheduled for cost/performance, rarely for sustainability. Carbon intensity varies significantly across cloud regions and time, yet schedulers ignore it.

Contribution — A multimodal framework combining:

  • 🤖 RL policy (Ray RLlib) for adaptive workload placement decisions
  • 🧠 LSTM forecasting for predicting carbon intensity and energy consumption across cloud regions
  • 🔄 Transformer forecasting (Hugging Face) for real-time workload demand modeling
  • 📡 Real-time API data ingestion — live carbon intensity and energy monitoring streams
  • 🧹 Dataset preprocessing — cleaning, standardization, and normalization pipelines (PyTorch + Pandas)
  • ⚖️ Constraint-based optimization — balancing performance SLAs against environmental targets

…to relocate workloads intelligently across clouds, reducing carbon emissions while maintaining efficiency.
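The constrained-choice core of the placement problem can be sketched as follows. Region names and numbers are made up, and the paper's actual policy is learned with Ray RLlib rather than selected greedily:

```python
# Toy constraint-based placement: choose the region with the lowest
# carbon intensity among those meeting the latency SLA. The paper's
# framework uses RL + forecasting; this shows only the constrained choice.

def place_workload(regions, sla_ms):
    """regions: {name: (carbon_gCO2_per_kWh, latency_ms)}.
    Returns the greenest SLA-feasible region, or None."""
    feasible = {name: carbon
                for name, (carbon, latency) in regions.items()
                if latency <= sla_ms}
    return min(feasible, key=feasible.get) if feasible else None

regions = {
    "us-east": (400, 30),
    "eu-north": (50, 80),   # greenest, but highest latency
    "us-west": (300, 40),
}

choice_tight = place_workload(regions, sla_ms=50)   # latency-bound
choice_loose = place_workload(regions, sla_ms=100)  # carbon wins
```

Even this toy shows the tension the paper optimizes: a tight SLA forces a dirtier region, while slack in the constraint lets the scheduler chase low carbon intensity.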

Outcome — Demonstrated measurable reduction in carbon emissions and improved energy efficiency through carbon-intensity-driven, policy-based relocation decisions.

Ray RLlib PyTorch Hugging Face Transformers LSTM Carbon-aware scheduling Energy consumption modeling Constraint-based optimization Real-time API ingestion Pandas Python


🛠️ Technical Skills

🐍 Backend & Core Programming

Python (Django · FastAPI · Flask) · SQL · C++ · C · Java · RESTful APIs · Microservices Architecture


🤖 Production AI & LLM Systems

OpenAI Models · Prompt & Context Management · RAG · LangChain · LangGraph · Hugging Face · Transformers · RLHF · SFT · Ragas · n8n · Vector Databases (Pinecone · LanceDB)


🗄️ Databases & Data Modeling

Relational Databases (PostgreSQL · MySQL) · Advanced Data Modeling (Indexing · Foreign Keys) · Schema Design · MongoDB · ETL/ELT Pipelines


☁️ Cloud Infrastructure & DevOps

AWS (Redshift · Lambda) · GCP (BigQuery · Composer) · Docker · Kubernetes · Terraform · Apache Spark · Kafka · Hadoop · Databricks · Snowflake · dbt · SQLMesh · Airflow


🔒 Engineering Excellence & Security

CI/CD Pipelines · Git · Automated Testing (PyTest · UnitTest) · Data Quality (Great Expectations) · Data Governance · MLflow · OAuth · SAML · Wireshark · Network Protocols (RTP · SIP · WebRTC)


📊 Machine Learning & Quantitative Analytics

PyTorch · TensorFlow · Keras · scikit-learn · XGBoost · OpenCV · NLTK · spaCy · Librosa · Pandas · NumPy · SciPy · Power BI · Tableau · Google Analytics · D3.js · Time-Series Forecasting · Causal Inference · Statistical Modeling


🌱 Sustainable AI Systems & Multi-Cloud Optimization

(From IEEE publication — ICECONF 2025)

Carbon-aware workload relocation · Energy consumption modeling · Carbon intensity–driven scheduling · Hybrid RL + DL framework (Ray RLlib + PyTorch) · LSTM forecasting · Hugging Face Transformer forecasting · Real-time monitoring data ingestion via APIs · Dataset preprocessing (cleaning + standardization) · Constraint-based optimization (performance + environmental)


🏆 Leadership & Professional Activities

  • 📝 Reviewer — ICLR (International Conference on Learning Representations)
  • 🤝 Contributor — OpenAI initiative (group-chat / context feature workstream)
  • 🌐 Experienced in cross-functional leadership: product, engineering, operations, and executive-facing reporting



⚙️ How I Work

I build for readability, reproducibility, and ownership. I treat quality as a product feature, not an afterthought. I respect constraints — latency, budget, tokens, and people's attention. I write documentation like someone will inherit my system tomorrow.


