Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Python bindings for MPI
Best practices and guides on how to write distributed PyTorch training code
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
DeepHyper: A Python Package for Massively Parallel Hyperparameter Optimization in Machine Learning
A fast Poisson image editing implementation that can utilize multi-core CPUs or a GPU to handle high-resolution image inputs.
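At its core, Poisson image editing solves a discrete Poisson equation whose right-hand side is the divergence of a guidance gradient field taken from the source image, with the target image supplying the boundary values. A minimal 1-D sketch using only the standard library (the function name, Jacobi solver, and iteration count are illustrative, not taken from the repository above):

```python
def poisson_blend_1d(target, grad, iters=2000):
    """Jacobi solver for the 1-D discrete Poisson equation.

    target: pixel values; only the two endpoints are used, as Dirichlet
            boundary conditions (as in seamless cloning).
    grad:   desired gradient between neighbouring pixels, taken from the
            source patch (len(target) - 1 values).
    """
    n = len(target)
    u = list(target)
    for _ in range(iters):
        new = list(u)
        for i in range(1, n - 1):
            # Divergence of the guidance field at interior pixel i.
            div = grad[i] - grad[i - 1]
            # Jacobi update: u[i-1] - 2*u[i] + u[i+1] = div
            new[i] = (u[i - 1] + u[i + 1] - div) / 2.0
        u = new
    return u

# With a zero guidance gradient, the interior converges to the smooth
# (here: linear) interpolant of the boundary values 0 and 4.
blended = poisson_blend_1d([0, 9, 9, 9, 4], [0, 0, 0, 0])
```

Each Jacobi sweep touches every interior pixel independently, which is why 2-D implementations of this solver parallelize well across CPU cores or GPU threads.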
Simplify HPC and Batch workloads on Azure
Distributed and decentralized training framework for PyTorch over graph
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
FedTorch is a generic repository for benchmarking different federated and distributed learning algorithms using the PyTorch Distributed API.
A common interface to processing pools.
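A "common interface" to processing pools means different back ends (threads, processes, possibly MPI workers) can be swapped without changing caller code. The standard library's `concurrent.futures` illustrates the idea; this is a generic sketch, not the API of the repository above:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def square(x):
    return x * x

def run_with(pool_cls, data):
    # Any executor exposing the same map() contract works here:
    # ThreadPoolExecutor, ProcessPoolExecutor, or a third-party pool.
    with pool_cls(max_workers=2) as pool:
        return list(pool.map(square, data))

# From the caller's side, thread and process pools are interchangeable;
# only the constructor passed in changes.
thread_results = run_with(ThreadPoolExecutor, range(5))
```

The design choice being illustrated: by programming against the pool interface rather than a concrete pool, the same analysis code can scale from a laptop (threads) to multiple cores (processes) to a cluster back end.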
Analysis kit for large-scale structure datasets, the massively parallel way
Perun is a Python package that measures the energy consumption of your applications.
DISROPT: A Python framework for distributed optimization
Geophysical Bayesian Inference in Python.
OpenClimateGIS is a set of geoprocessing and calculation tools for CF-compliant climate datasets.
Efficient and scalable parallelism using the Message Passing Interface (MPI) to handle big data and computationally intensive problems.
🌊 Framework for studying fluid dynamics with numerical simulations using Python (publish-only mirror). The main repo is hosted on https://foss.heptapod.net (a GitLab fork supporting Mercurial).
Code for tutorials and examples