- I’m fascinated by failure modes in models — knowing when AI fails is as important as knowing when it succeeds.
- I love building little tools that save 5 minutes per day — they multiply across months and teams.
- I read non-tech books (philosophy, fiction) to keep my thinking flexible — they often spark ideas for models.
- When stuck on a bug, I sometimes sketch ideas on paper and debug by hand (yeah, pen and paper).
- Data-centric AI pipelines — cleaning, annotating, versioning, and evaluating datasets for robust outcomes.
- Edge & deployment challenges — making computer vision models efficient with pruning, quantization, and distillation.
- Explainability & interpretability — building models whose predictions can be justified and audited (SHAP, Grad-CAM, concept-based methods).
- Connecting research to users — building APIs, dashboards, and UIs that make ML research accessible.
- Automating the “boring but critical” parts of research — data validation, visualization, and reproducibility.
- Designing human-in-the-loop pipelines where people and models collaborate, not compete.
- Bridging the gap between theoretical ML research and usable software.
- Making AI trustworthy enough for clinicians, researchers, and everyday users.
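As a tiny taste of the efficiency work mentioned above: real frameworks (PyTorch, TFLite) apply post-training quantization per-tensor or per-channel, but the core idea fits in a few lines. This is a hedged sketch using a plain Python list in place of a real weight tensor; the function names are just illustrative:

```python
# Toy post-training affine (8-bit) quantization: map float weights to
# integers in [0, 255] via a scale and zero-point, then map back.

def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale + zero_point))) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# Each recovered weight is within roughly one quantization step of the original.
```

In practice the interesting part is choosing granularity (per-channel scales) and calibrating ranges on real activations — the arithmetic itself stays this simple.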
Here are a few projects that capture my current direction:
- 🩺 Explainable AI Medical Imaging Platform — Multi-modal diagnostic system (MRI, CT, X-ray) with natural language explanations.
- 💡 Research Tools & Experiments — Lightweight utilities that improve ML workflow efficiency.
(Expect more soon — small tools, datasets, and visual explainers 👀)
“Often the best model is the one you can understand.”