Stars
微舆: a multi-agent public opinion analysis assistant anyone can use. It breaks information cocoons, reconstructs the full picture of public sentiment, predicts future trends, and supports decision-making. Built from scratch, with no dependency on any framework.
[ICML 2025] SpargeAttention: a training-free sparse attention that accelerates inference for any model.
VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
A unified inference and post-training framework for accelerated video generation.
Codebase of GRPO: Implementations and Resources of GRPO and Its Variants
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
[NeurIPS 2025] Pixel-Level Reasoning Model trained with RL
🚀 Efficient implementations of state-of-the-art linear attention models
📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉
Dexbotic: Open-Source Vision-Language-Action Toolbox
[arXiv 2025] Generative View Stitching
A Curated List of Awesome Works in World Modeling, Aiming to Serve as a One-stop Resource for Researchers, Practitioners, and Enthusiasts Interested in World Modeling.
rCM: SOTA Diffusion Distillation & Few-Step Video Generation
LongLive: Real-time Interactive Long Video Generation
Lumina-Image 2.0: A Unified and Efficient Image Generative Framework
ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
[SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios"
BranchGRPO: Stable and Efficient GRPO with Structured Branching in Diffusion Models
MOSAIC: Multi-Subject Personalized Generation via Correspondence-Aware Alignment and Disentanglement
Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004)
[NeurIPS 2025] Image editing is worth a single LoRA! Fantastic image editing from just 0.1% of the training data, surpassing GPT-4o in identity preservation. MoE checkpoint released! Runs in only 4 GB of VRAM!
The simplest, fastest repository for training/finetuning small-sized VLMs.