⭐⭐⭐ The Kalavai platform is open source and free to use for both commercial and non-commercial purposes. If you find it useful, consider supporting us by giving our GitHub project a star, joining our Discord channel and following our Substack.
Kalavai is an open source platform that unlocks computing from spare capacity. It aggregates resources from multiple sources to increase your computing budget and run large AI workloads.
Kalavai's goal is to make running AI workloads in real applications accessible and affordable to all.
- Increase GPU utilisation from your devices (fractional GPU).
- Multi-node, multi-GPU and multi-architecture support (AMD and NVIDIA).
- Aggregate computing resources from multiple sources: home desktops, work computers, multi-cloud VMs, Raspberry Pis, Macs, etc.
- Ready-made templates to deploy common AI building blocks: model inference (vLLM, llama.cpp, SGLang), GPU clusters (Ray, GPUStack), automation workflows (n8n and Flowise), evaluation and monitoring tools (Langfuse), production dev tools (LiteLLM, OpenWebUI) and more.
- Easy to extend to custom workloads.
Powered by Kalavai
- CoGen AI: a community-hosted alternative to the OpenAI API for unlimited inference.
- Create your own Free Cursor/Windsurf Clone
- November: Kalavai is opening a managed service to create and manage AI workloads on a fleet of GPUs. We are inviting beta testers for early access. If you are interested, apply here.
- September: Kalavai now supports Ray clusters for massively distributed ML.
- August 2025: Added support for AMD GPUs (experimental).
- July 2025: Added support for GPUStack clusters for managed LLM deployments (experimental).
- June 2025: Native support for Mac and Raspberry Pi devices (ARM).
- May 2025: Added support for diffusion pipelines (experimental).
- April 2025: Added support for workflow automation engines n8n and Flowise (experimental).
- March 2025: Added support for the AI gateway LiteLLM.
More news
- 20 February 2025: New shiny GUI interface to control LLM pools and deploy models.
- 31 January 2025: kalavai-client is now a PyPI package, easier to install than ever!
- 27 January 2025: Support for accessing pools from remote computers.
- 9 January 2025: Added support for SGLang models.
- 9 January 2025: Added support for vLLM models.
- 9 January 2025: Added support for llama.cpp models.
- 24 December 2024: Release of public BOINC pool to donate computing to scientific projects.
- 23 December 2024: Release of public petals swarm.
- 24 November 2024: Common pools with private user spaces.
We currently support the following AI engines out of the box:
- vLLM: most popular GPU-based model inference.
- llama.cpp: CPU-based GGUF model inference.
- SGLang: Super fast GPU-based model inference.
- n8n (experimental): no-code workflow automation framework.
- Flowise (experimental): no-code agentic AI workflow framework.
- Speaches: audio (speech-to-text and text-to-speech) model inference.
- Langfuse (experimental): open source GenAI evaluation and monitoring framework.
- OpenWebUI: ChatGPT-like UI playground to interface with any models.
- diffusers (experimental): image generation with diffusion pipelines.
- RayServe: model inference on distributed Ray clusters.
- GPUStack (experimental): managed LLM deployments on GPU clusters.
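As a flavour of what a deployment looks like, here is a minimal sketch of running one of these engines from a ready-made template. The template name, values file and flags below are illustrative assumptions, not the confirmed CLI surface; check `kalavai job --help` and the templates documentation for the actual interface:

```bash
# Hypothetical sketch: deploy a model with the vLLM template.
# Command names and flags are assumptions; verify against the docs.
kalavai job run vllm --values my_values.yaml

# List jobs running in the pool to check deployment status
kalavai job list
```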
Not what you were looking for? Tell us what engines you'd like to see.
Kalavai is at an early stage of its development. We encourage people to use it and give us feedback! Although we are trying to minimise breaking changes, these may occur until we have a stable version (v1.0).
- Get a free Kalavai account and access unlimited AI.
- Full documentation for the project.
- Join our Substack for updates and be part of our community.
- Join our Discord community.
The kalavai-client is the main tool for interacting with the Kalavai platform: it creates and manages both local and public pools, and operates on them (e.g. to deploy models).
Requirements
For seed nodes:
- A 64-bit x86-based Linux machine (laptop, desktop or VM).
- Docker engine installed, with privileged access.
For workers sharing resources with the pool:
- A laptop, desktop or virtual machine. Full support: Linux and Windows on x86 architecture. Limited support: Mac and ARM architecture.
- If self-hosting, workers should be on the same network as the seed node. Looking for over-the-internet connectivity? Check out our managed seeds.
- Docker engine installed (for Linux, Windows and macOS), with privileged access.
If your system is not currently supported, open an issue and request it. We are expanding this list constantly.
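Before installing, you can sanity-check the Docker requirement on a prospective node; a quick sketch using standard Docker commands:

```bash
# Confirm the Docker engine is installed and the daemon is reachable
docker info

# Smoke-test privileged access by running a privileged container
docker run --rm --privileged hello-world
```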
The client is a python package and can be installed with one command:
```bash
pip install kalavai-client
```

You can create and manage your pools with the kalavai GUI or the Command Line Interface (CLI). For a quick start, get a pool going with:

```bash
kalavai pool start
```

And then start the GUI:

```bash
kalavai gui start
```

This will expose the GUI and the backend services on localhost. By default, the GUI is accessible at http://localhost:49153.
Check out our getting started guide for next steps on how to add more workers to your pool, or use our managed seeds service for over-the-internet AI pools.
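For self-hosted pools, adding a worker amounts to generating a joining token on the seed node and using it on each worker. A minimal sketch, assuming the current client's token/join subcommands (flags may vary between versions; the getting started guide is the source of truth):

```bash
# On the seed node: generate a joining token for new workers
# (the --user flag is an assumption; see `kalavai pool token --help`)
kalavai pool token --user

# On each worker (same network as the seed, unless using managed seeds):
kalavai pool join <token>
```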
Check out our use cases documentation for inspiration on what you can do with Kalavai:
- Multi-GPU LLM
- Fine tune
- Autoscaling deployments
- BYO Model Gateway
- Easy LLMs with GPUstack
- Production GPU fleets
Anything missing here? Give us a shout in the discussion board. We welcome discussions, feature requests, issues and PRs!
- Join the community and share ideas!
- Report bugs, issues and new features.
- Help improve our compatibility matrix by testing on different operating systems.
- Follow our Substack channel for news, guides and more.
- Community integrations are template jobs built by Kalavai and the community that make deploying distributed workflows easy for users. Anyone can extend them and contribute to the repo.
You must store your Docker Hub username and the token you just created as secrets in your GitHub repository:
1. Go to your GitHub repository.
2. Navigate to Settings > Security > Secrets and variables > Actions.
3. Click New repository secret.
4. Create the following two secrets:
   - Name: DOCKER_HUB_USERNAME. Value: your Docker Hub username or organization name.
   - Name: DOCKER_HUB_TOKEN. Value: the Personal Access Token you copied from Docker Hub.
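If you prefer the terminal, the same two secrets can be created with the GitHub CLI (assuming `gh` is installed and authenticated against your repository):

```bash
# Create the repository secrets from the command line instead of the web UI
gh secret set DOCKER_HUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKER_HUB_TOKEN --body "your-dockerhub-access-token"
```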
Python version >= 3.10.

```bash
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-dev python3-virtualenv python3-venv
virtualenv -p python3.10 env
source env/bin/activate
sudo apt install python3.10-venv python3.10-dev -y
pip install -U setuptools
pip install -e .[dev]
```

Build python wheels:

```bash
bash publish.sh build
```

To run the unit tests, use:

```bash
python -m unittest
```