Basic Tutorial for Training with Multiple GPU in Tensorflow
Performance test of MNIST handwriting recognition using MXNet + TF
Benchmarks for Multi-GPU Communication with MVAPICH2
Repository for ICS'25: MG-𝛼GCD: Accelerating Graph Community Detection on Multi-GPU Platforms
TransCorpus is a scalable toolkit for large-scale, parallel translation and preprocessing of text corpora, built for language model pretraining and research.
Train an object classifier using multiple gpus in Torch7
Masked Language Modeling with BERT in Multi-GPU Settings
Glint is a Rust framework designed for creating stateful, graph-based AI systems, enabling efficient multi-step workflows. With features like LLM integration and a graph-based architecture, Glint helps developers build powerful AI solutions with ease. 🐙✨
GPU-accelerated neural network trainer that supports multiple GPUs with OpenCL.
Multi-GPU calculations using CUDA, demonstrated on a planar problem from linear elasticity theory
Simple load-balancing library for distributing GPGPU workloads between a GPU and a CPU, or across any number of devices in one or more computers.
Testing the speed of two GPUs vs. one GPU
GPU-accelerated linear solvers based on the conjugate gradient (CG) method, supporting NVIDIA and AMD GPUs with GPU-aware MPI, NCCL, RCCL or NVSHMEM
Neural Network C is an advanced neural network implementation in pure C, optimized for high performance on CPUs and NVIDIA GPUs.
Sabanci University CS406 group project: parallel counting of cycles of length k in a sparse matrix
Deep neural network with multi-GPU support in a minimal fashion
Simple Multi-GPU Implementation of ResNet in Tensorflow
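Most of the training-oriented repositories above rely on the same core idea, data parallelism: each GPU computes gradients on its own shard of a batch, the gradients are averaged across devices (an all-reduce), and one synchronized update is applied. As a minimal, hedged sketch of that averaging step, here is a pure-Python simulation (no real GPUs or frameworks; the function names and the toy linear model are illustrative assumptions, not taken from any of the listed projects):

```python
# Data-parallel training step simulated in pure Python: each "device"
# computes the gradient of a squared-error loss on its shard of the
# batch, then gradients are averaged (mimicking an all-reduce) and a
# single update keeps every replica in sync.

def grad_on_shard(w, shard):
    # Gradient of mean((w*x - y)^2) w.r.t. w over this shard:
    # mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, batch, num_devices, lr=0.01):
    shard_size = len(batch) // num_devices
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(num_devices)]
    grads = [grad_on_shard(w, s) for s in shards]  # per-device compute
    avg_grad = sum(grads) / num_devices            # all-reduce (mean)
    return w - lr * avg_grad                       # one synced update

# Fit w in y = 2*x from 8 samples split across 4 simulated devices.
batch = [(x, 2.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = train_step(w, batch, num_devices=4)
print(round(w, 3))  # converges to 2.0
```

Real implementations (e.g. TensorFlow's `tf.distribute.MirroredStrategy` or Horovod-style MPI/NCCL all-reduce) follow the same pattern but overlap the gradient exchange with backpropagation for performance.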