@MooreThreads

Moore Threads Corporation

Popular repositories

  1. Moore-AnimateAnyone

    Character Animation (AnimateAnyone, Face Reenactment)

    Python · 3.5k stars · 289 forks

  2. torch_musa

    torch_musa is an open-source PyTorch extension that lets PyTorch make full use of the computing power of MooreThreads GPUs (a usage sketch follows this list).

    Python · 477 stars · 34 forks

  3. LiteGS

    A refactored codebase for Gaussian Splatting. Training 3DGS in 50 seconds!

    Cuda · 331 stars · 29 forks

  4. MooER

    MooER: Moore-threads Open Omni model for speech-to-speech intERaction. MooER-omni includes a series of end-to-end speech interaction models along with training and inference code, covering but not …

    Python · 219 stars · 17 forks

  5. MobiMaliangSDK

    Python · 122 stars · 11 forks

  6. SimuMax

    A static analytical model for LLM distributed training.

    Python · 116 stars · 15 forks
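
The torch_musa project listed above provides a PyTorch backend for MooreThreads GPUs. The snippet below is a minimal usage sketch, assuming the package registers a "musa" device type with PyTorch on import, as its README describes; the tensor shapes and values are illustrative, not taken from the project.

    # Minimal sketch, assuming `import torch_musa` registers a "musa" device
    # type with PyTorch; shapes and values are illustrative.
    import torch
    import torch_musa  # registers the MUSA backend for MooreThreads GPUs

    x = torch.randn(2, 3, device="musa")   # allocate directly on the GPU
    y = torch.randn(2, 3).to("musa")       # or move a CPU tensor over
    z = (x + y).cpu()                      # compute on the GPU, copy back to host
    print(z)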

Repositories

Showing 10 of 52 repositories
  • muAlg (forked from NVIDIA/cub)

    Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl

    Cuda · 6 stars · BSD-3-Clause license · 512 forks · 0 open issues · 0 open pull requests · Updated Feb 11, 2026
  • torchada

    Adapter package that makes torch_musa behave exactly like PyTorch CUDA (a sketch of the idea follows this list).

    Python · 19 stars · MIT license · 2 forks · 0 open issues · 1 open pull request · Updated Feb 10, 2026
  • tensorflow_musa_extension
    C++ · 1 star · 6 forks · 0 open issues · 2 open pull requests · Updated Feb 6, 2026
  • LiteGS

    A refactored codebase for Gaussian Splatting. Training 3DGS in 50 seconds!

    Cuda · 331 stars · 29 forks · 4 open issues · 0 open pull requests · Updated Feb 6, 2026
  • torch_musa

    torch_musa is an open-source PyTorch extension that lets PyTorch make full use of the computing power of MooreThreads GPUs.

    Python · 477 stars · 34 forks · 23 open issues · 0 open pull requests · Updated Feb 6, 2026
  • MT-TransformerEngine (forked from NVIDIA/TransformerEngine)

    A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.

    Python · 9 stars · Apache-2.0 license · 638 forks · 0 open issues · 1 open pull request · Updated Feb 5, 2026
  • MT-MegatronLM
    Python · 10 stars · 2 forks · 0 open issues · 0 open pull requests · Updated Feb 5, 2026
  • pytorch3d (forked from facebookresearch/pytorch3d)

    PyTorch3D is FAIR's library of reusable components for deep learning with 3D data

    Python · 2 stars · 1,453 forks · 0 open issues · 0 open pull requests · Updated Feb 5, 2026
  • tilelang_musa (forked from tile-ai/tilelang)

    A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels.

    C++ · 14 stars · 443 forks · 0 open issues · 0 open pull requests · Updated Jan 30, 2026
  • mutlass (forked from NVIDIA/cutlass)

    MUSA Templates for Linear Algebra Subroutines

    C++ · 42 stars · 1,687 forks · 1 open issue · 0 open pull requests · Updated Jan 30, 2026
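
The torchada entry above describes an adapter that lets code written for PyTorch's CUDA backend run on torch_musa. The sketch below only illustrates that idea; the assumption that simply importing torchada remaps "cuda" device requests to the MUSA backend is not confirmed by this listing, and the actual activation API may differ.

    # Illustrative sketch of the adapter idea, not torchada's confirmed API.
    import torch
    import torch_musa   # MUSA backend for MooreThreads GPUs
    import torchada     # assumption: importing the adapter redirects CUDA calls to MUSA

    # Existing CUDA-style code would then run unchanged on a MooreThreads GPU:
    device = torch.device("cuda")
    model = torch.nn.Linear(16, 4).to(device)
    out = model(torch.randn(8, 16, device=device))
    print(out.shape)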

People

This organization has no public members.
