GpuGrid lets users run containerized AI workloads on a decentralized GPU network, where anyone can earn by connecting compute nodes and executing container jobs. Jobs such as Stable Diffusion XL and cutting-edge open-source LLMs can be run on-chain, from the CLI, or via GpuGrid AI Studio on the web.
Visit the GpuGrid documentation site for a more comprehensive getting-started overview, including the Quick Start Guide.
Jobs (containers) are run on GpuGrid through the CLI, which can be installed with the standalone installer or via the Go toolchain (see the sketch after the example below). After setting up the necessary prerequisites, the CLI runs jobs as follows:
```
grid run cowsay:v0.0.4 -i Message="moo"
```
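For the Go-toolchain route, something along these lines should work; the module path is an assumption and should be verified against the Quick Start Guide:

```
# Hypothetical module path — verify it against the Quick Start Guide.
go install github.com/gpugrid-io/gpugrid@latest
```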
The current list of modules can be found in the following repositories:
Containerized job modules can be built and added to the available module list; details are in the Building Jobs documentation. To contribute a module, open a pull request on this repository adding your link to the list above.
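As a rough sketch, a custom module might then be invoked the same way as the cowsay example above, by reference and version tag. The reference format here is an assumption; the canonical format is defined in the Building Jobs documentation:

```
# Hypothetical module reference and input — see Building Jobs for the
# actual reference format your module should use.
grid run github.com/yourorg/your-module:v0.1.0 -i Prompt="an astronaut riding a horse"
```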
As a distributed network, GpuGrid also lets you run a node and contribute GPU and compute power. See the Running Nodes documentation for detailed setup instructions and an overview of the process.
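A node is presumably started from the same CLI; the subcommand below is a guess, not confirmed by this README, so check it against the Running Nodes documentation:

```
# Hypothetical subcommand — the real entry point, flags, and required
# environment (wallet keys, RPC endpoints) are covered in Running Nodes.
grid resource-provider
```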
We publish benchmarks of the solver's job-matching algorithm at https://gpugrid-io.github.io/gpugrid/dev/bench/. More details can be found in our Benchmarking Guide.
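Assuming the solver is implemented as a Go package with standard `go test` benchmarks (the package path below is a guess, not something this README confirms), the numbers could be reproduced locally with:

```
# Run the solver benchmarks locally; the package path is hypothetical —
# adjust it to the repository's actual layout.
go test -bench=. -benchmem ./pkg/solver/...
```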