Lists (32)
🌟 Bash
C++
Cmake
compiler
Computer Graphics
courses
database system
hardware
library
llvm
mips
MLsys
🚀 My stack
networking
operating system
🌟 Perf
Reading
🌟 research
⭐ eBPF
🌟 Benchmark
🌟 Collections
🌟 Config
🌟 Github action
🌟 Latex
🌟 LLM
🌟 LLVM
🌟 PGO
🌟 tips
🌠 bash
🌠 Latex
style
TODOs
Stars
A collection of custom skills for Claude Code, teaching the agent how to use specific APIs and SDKs correctly.
Bash is all you need - A nano Claude Code-like "agent harness", built from 0 to 1
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows.
A framework for efficient model inference with omni-modality models
DFlash: Block Diffusion for Flash Speculative Decoding
omo: the best agent harness (previously oh-my-opencode)
C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.
Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
verl: Volcano Engine Reinforcement Learning for LLMs
slime is an LLM post-training framework for RL Scaling.
gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling
Large Language Model (LLM) Systems Paper List
A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.
Is Parallel Programming Hard, And If So, What Can You Do About It?
AIInfra (AI Infrastructure) covers the AI system stack, from low-level hardware such as chips up through the software stack that supports training and inference of large AI models.
🌟 100+ original LLM/RL concept diagrams 📚, contributed by the author of 《大模型算法》! 💥 (100+ LLM/RL Algorithm Maps)
Enable macOS HiDPI and have a native setting.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
Distributed Compiler based on Triton for Parallel Systems
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.