Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPUs.
Startups such as Styrk AI, research teams like Hazy Research at Stanford, and large companies like AMD use Lemonade to run LLMs.
- Install: Windows · Ubuntu · Source
- Get Models: Browse and download with the Model Manager
- Chat: Try models with the built-in chat interface
- Connect: Use Lemonade with your favorite apps
Want your app featured here? Discord · GitHub Issue · Email
To run and chat with Gemma 3:

```bash
lemonade-server run Gemma-3-4b-it-GGUF
```

To install models ahead of time, use the pull command:

```bash
lemonade-server pull Gemma-3-4b-it-GGUF
```

To list all available models, use the list command:

```bash
lemonade-server list
```
Tip: You can pass `--llamacpp vulkan` or `--llamacpp rocm` to select a backend when running GGUF models, for example `lemonade-server run Gemma-3-4b-it-GGUF --llamacpp vulkan`.
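You can also enumerate the available models programmatically. A minimal sketch, assuming Lemonade Server is running on its default port and exposes the standard OpenAI `/models` endpoint (client setup is covered later in this README):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Lemonade Server
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

# Print the ID of every model the server currently knows about
for model in client.models.list():
    print(model.id)
```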
Lemonade supports GGUF, FLM, and ONNX models across CPU, GPU, and NPU (see supported configurations).
Use lemonade-server pull or the built-in Model Manager to download models. You can also import custom GGUF/ONNX models from Hugging Face.
Lemonade supports the following configurations and makes it easy to switch between them at runtime. More information is available here.
| Hardware | Engine: OGA | Engine: llamacpp | Engine: FLM | Windows | Linux |
|---|---|---|---|---|---|
| 🧠 CPU | All platforms | All platforms | — | ✅ | ✅ |
| 🎮 GPU | — | Vulkan: All platforms<br>ROCm: Selected AMD platforms*<br>Metal: Apple Silicon | — | ✅ | ✅ |
| 🤖 NPU | AMD Ryzen™ AI 300 series | — | Ryzen™ AI 300 series | ✅ | — |
* See the supported AMD ROCm platforms below:
| Architecture | Platform Support | GPU Models |
|---|---|---|
| gfx1151 (STX Halo) | Windows, Ubuntu | Ryzen AI MAX+ Pro 395 |
| gfx120X (RDNA4) | Windows, Ubuntu | Radeon AI PRO R9700, RX 9070 XT/GRE/9070, RX 9060 XT |
| gfx110X (RDNA3) | Windows, Ubuntu | Radeon PRO W7900/W7800/W7700/V710, RX 7900 XTX/XT/GRE, RX 7800 XT, RX 7700 XT |
| Under Development | Under Consideration | Recently Completed |
|---|---|---|
| Image Generation | vLLM support | General speech-to-text support (whisper.cpp) |
| Add imagegen and transcription to app | Handheld devices: Ryzen AI Z2 Extreme APUs | Multiple models loaded at the same time |
| ROCm support for Ryzen AI 360-375 (Strix) APUs | Text to speech | Lemonade desktop app |
You can use any OpenAI-compatible client library by configuring it to use http://localhost:8000/api/v1 as the base URL. The table below lists official and popular OpenAI clients for a range of languages; pick whichever fits your stack.
| Python | C++ | Java | C# | Node.js | Go | Ruby | Rust | PHP |
|---|---|---|---|---|---|---|---|---|
| openai-python | openai-cpp | openai-java | openai-dotnet | openai-node | go-openai | ruby-openai | async-openai | openai-php |
```python
from openai import OpenAI

# Initialize the client to use Lemonade Server
client = OpenAI(
    base_url="http://localhost:8000/api/v1",
    api_key="lemonade",  # required by the client but unused by the server
)

# Create a chat completion
completion = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct-Hybrid",  # or any other available model
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)

# Print the response
print(completion.choices[0].message.content)
```

For more detailed integration instructions, see the Integration Guide.
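Because the server speaks the standard OpenAI protocol, streaming should work the same way it does against any OpenAI endpoint. A minimal sketch reusing the client from above (the model name is just an example, use any model you have pulled):

```python
# Stream the reply token by token instead of waiting for the full completion
stream = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct-Hybrid",
    messages=[{"role": "user", "content": "Write a haiku about lemons."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta; content may be None on the final chunk
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        print(delta, end="", flush=True)
print()
```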
The Lemonade Python SDK is also available; it includes the following components:
- 🐍 Lemonade Python API: High-level Python API to directly integrate Lemonade LLMs into Python applications (see the sketch after this list).
- 🖥️ Lemonade CLI: The `lemonade` CLI lets you mix and match LLMs (ONNX, GGUF, SafeTensors) with prompting templates, accuracy testing, performance benchmarking, and memory profiling to characterize your models on your hardware.
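As an illustration, loading and prompting a model through the Python API looks roughly like this. This is a minimal sketch, assuming the `from_pretrained` loader and `recipe` parameter from the Lemonade SDK docs; the checkpoint and recipe values below are illustrative placeholders, so check the docs for the options available on your hardware:

```python
from lemonade.api import from_pretrained

# Load a model plus tokenizer; the recipe picks the device/engine combination
# (checkpoint and recipe here are illustrative placeholders)
model, tokenizer = from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", recipe="hf-cpu")

# Standard Hugging Face-style generation loop
input_ids = tokenizer("What is the capital of France?", return_tensors="pt").input_ids
response = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(response[0]))
```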
To read our frequently asked questions, see our FAQ Guide.
We are actively seeking collaborators from across the industry. If you would like to contribute to this project, please check out our contribution guide.
New contributors can find beginner-friendly issues tagged with "Good First Issue" to get started.
This project is sponsored by AMD. It is maintained by @danielholanda @jeremyfowers @ramkrishna @vgodsoe in equal measure. You can reach us by filing an issue, emailing lemonade@amd.com, or joining our Discord.
This project is:
- Built with C++ (server) and Python (SDK) with ❤️ for the open source community,
- Standing on the shoulders of great open source tools, including OnnxRuntime GenAI (OGA), llama.cpp, and FastFlowLM (FLM),
- Accelerated by mentorship from the OCV Catalyst program.
- Licensed under the Apache 2.0 License.
- Portions of the project are licensed as described in NOTICE.md.