UT Austin
Austin, TX
https://philippe-eecs.github.io/website/
LinkedIn: in/philippe-hansen-estruch-b05559210
Stars
Chillee / vllm (forked from vllm-project/vllm)
A high-throughput and memory-efficient inference and serving engine for LLMs
Wan: Open and Advanced Large-Scale Video Generative Models
A PyTorch native platform for training generative AI models
Kimi K2 is the large language model series developed by the Moonshot AI team
Accessible large language models via k-bit quantization for PyTorch.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
GenEval: An object-focused framework for evaluating text-to-image alignment
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Movie Gen Bench - two media generation evaluation benchmarks released with Meta Movie Gen
Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI
Author's Implementation for E-LatentLPIPS
USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference
SGLang is a fast serving framework for large language models and vision language models.
Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight)
[NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions"
[NeurIPS 2025] An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL
The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention
Code for the paper: "Learning to Reason without External Rewards"
A unified inference and post-training framework for accelerated video generation.
[ICCV 2025] Scaling Inference-Time Optimization for Text-to-Image Diffusion Models via Reflection Tuning