I've spent the last several years building and running production systems, either solo or as a team lead. API design, multi-tenant SaaS products, async pipelines, Terraform, containerization, the whole stack. Now I'm applying that infrastructure background to LLM systems.
I'm less interested in prompt magic and more interested in the engineering that makes AI applications actually work at scale: retrieval, evals, structured outputs, observability, and cost control.
Currently working through a structured curriculum covering LLM APIs, embeddings, RAG, evals, and agent systems. Each topic ships as a production-shaped project rather than a tutorial clone.
I’ve spent most of my career as the sole or lead engineer on production systems, owning the whole stack from data model to deployment. That end-to-end ownership matters in AI engineering, where most of the hard problems turn out to be retrieval, latency, and data quality, not the model itself.
Strong in: PHP/Laravel, TypeScript/Node.js, MariaDB/MySQL, AWS (Lambda, ECS, EventBridge, S3, CloudFront), Terraform, Docker, CI/CD.
Backend and AI engineering roles. Remote preferred, Colorado-based. Available early 2027.