Highlights
Lists (22)
00. ⌨️ Dev Environments ⌨️
Technologies, tools, and configurations I use for my development environment.

01. 🦙 Model: LLM 🦙
Repositories for Large Language Models (LLM), frameworks, and datasets.

01. 💾 Stack: Backend 💾
Frameworks, libraries, and technologies for building and maintaining backend systems.

01. 📡 Stack: DevOps 📡
Tools, platforms, and practices for DevOps, including CI/CD, automation, and infrastructure.

01. 🛰️ Stack: MLOps 🛰️
A comprehensive list of tools and platforms for Machine Learning Operations (MLOps).

01. 🐍 Stack: Python 🐍
Useful Python modules and libraries that I frequently use.

01. 🚄 Stack: XPU 🚄
Libraries, tools, and frameworks for XPU (GPU, TPU, NPU, ...) programming, focusing on CUDA and kernel optimization.

02. 🤖 Model: ML 🤖
Repositories for Machine Learning (ML) models, frameworks, and datasets.

02. 🧑🏻‍🔧 Stack: Agent 🧑🏻‍🔧
Repositories related to AI Agents, including LLM-based autonomous agents and multi-agent systems.

03. 🐥 Stack: Frontend 🐥
Frameworks, libraries, and tools for frontend development.

98. 🧑‍🏭 Study: Agent 🧑‍🏭
A study list of repositories for learning about and experimenting with AI Agents.

98. 💿 Study: Backend 💿
Educational resources and projects for learning backend development.

98. 🔬 Study: DevOps 🔬
A study guide to DevOps, containing tutorials, guides, and example projects.

98. 👶🏻 Study: Frontend 👶🏻
Learning resources for frontend technologies, frameworks, and design patterns.

98. 🦾 Study: ML 🦾
Educational materials, tutorials, and projects for studying Machine Learning (ML).

98. 🚀 Study: MLOps 🚀
A curated study list for understanding and implementing MLOps pipelines and practices.

98. 🚅 Study: XPU 🚅
Resources dedicated to learning XPU (GPU, TPU, NPU, ...) programming, focusing on CUDA and GPU kernels.

99. 🐘 BOAZ 🐘
Projects and materials from my activities at BOAZ, Korea's first inter-university big data club.

99. 🔭 Career 🔭
Resources for career development, interview preparation, and job searching.

99. 🧗 Conferences 🧗
My speaking engagements, presentations, and materials from conferences.

99. 💻 Etc. 💻
Miscellaneous repositories, tools, and interesting projects that don't fit into other categories.

99. 📑 GitHub Pages 📑
Tools, themes, and examples for building static sites with GitHub Pages.

Starred repositories
Self-hosted platform for running coding agents (Claude Code, Codex) in isolated sandboxes with vault proxy.
A collection of DESIGN.md files inspired by popular brand design systems. Drop one into your project and let coding agents generate a matching UI.
Send User Notifications on macOS from the command-line.
OmX - Oh My codeX: Your codex is not alone. Add hooks, agent teams, HUDs, and so much more.
Definition, proposals, and conformance tests for AI Conformance
Use Codex from Claude Code to review code or delegate tasks.
Mount Hugging Face Buckets and repos as local filesystems. No download, no copy, no waiting.
Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
Teams-first Multi-agent orchestration for Claude Code
A Claude Code plugin that shows what's happening - context usage, active tools, running agents, and todo progress
Prometheus exporter for Starlette and FastAPI
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
🏠 Ask Claude about Korean apartment prices — powered by the open API of 국토교통부 (Korea's Ministry of Land, Infrastructure and Transport)
Browser automation CLI for AI agents
Official inference repo for FLUX.2 models
Collection of step-by-step playbooks for setting up AI/ML workloads on NVIDIA DGX Spark devices with Blackwell architecture.
Curated collection of AI inference engineering resources — LLM serving, GPU kernels, quantization, distributed inference, and production deployment. Compiled from the AER Labs community.
Kernel sources for https://huggingface.co/kernels-community
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.
Dynamic Memory Management for Serving LLMs without PagedAttention
NVIDIA vGPU Device Manager manages NVIDIA vGPU devices on top of Kubernetes