Backward-compatible ML compute opset inspired by HLO/MHLO
Ongoing research on training transformer models at scale
Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4
PyTorch third-party process-group plugin for UCC