Lightweight AI Inference Engine — All-in-one Solution for Text, Vision, Speech, and OCR
aha is a high-performance, cross-platform AI inference engine built with Rust and the Candle framework. It brings state-of-the-art AI models to your local machine—no API keys, no cloud dependencies, just pure, fast AI running directly on your hardware.
- Added Qwen3-ASR speech recognition model
- Added CLI `list` subcommand to show supported models
- Added CLI subcommand structure (`cli`, `serv`, `download`, `run`)
- Fixed Qwen3VL thinking `startswith` bug
- Fixed `aha run` multiple-inputs bug
- Added Qwen3 text model support
- Added Fun-ASR-Nano-2512 speech recognition model
- Fixed ModelScope Fun-ASR-Nano model load error
- Updated audio resampling with rubato
- Added GLM-ASR-Nano-2512 speech recognition model
- Merged Metal (GPU) support for Apple Silicon
- Added dynamic home directory and model download script
```bash
git clone https://github.com/jhqxxx/aha.git
cd aha
cargo build --release
```

Optional features:

```bash
# CUDA (NVIDIA GPU acceleration)
cargo build --release --features cuda

# Metal (Apple GPU acceleration for macOS)
cargo build --release --features metal

# Flash Attention (faster inference)
cargo build --release --features cuda,flash-attn

# FFmpeg (multimedia processing)
cargo build --release --features ffmpeg
```

CLI usage:

```bash
# List all supported models
aha list
# Download model only
aha download -m qwen3asr-0.6b
# Download model and start service
aha -m qwen3asr-0.6b
# Run inference directly (without starting service)
aha run -m qwen3asr-0.6b -i "audio.wav"
# Start service only (model already downloaded)
aha serv -m qwen3asr-0.6b -p 10100
aha serv -m qwen3-0.6b -p 10100
```

Then use the unified (OpenAI-compatible) API:

```bash
curl http://localhost:10100/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-0.6b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```
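The same endpoint can also be called from code. Below is a minimal client sketch in Rust, assuming the `reqwest` crate (with the `blocking` and `json` features) and `serde_json` have been added as dependencies; the model name and port match the `aha serv` invocation above:

```rust
// Hypothetical client sketch, not part of the aha API:
// calls the OpenAI-compatible endpoint started by `aha serv`.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .post("http://localhost:10100/chat/completions")
        .json(&json!({
            "model": "qwen3-0.6b",
            "messages": [{"role": "user", "content": "Hello!"}],
            "stream": false
        }))
        .send()?
        .error_for_status()?;
    // The response body follows the OpenAI chat-completions shape.
    println!("{}", resp.text()?);
    Ok(())
}
```

With `"stream": true` the endpoint should instead emit incremental chunks, assuming the server implements OpenAI-style streaming for the selected model.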

| Category | Models |
|---|---|
| Text | Qwen3, MiniCPM4 |
| Vision | Qwen2.5-VL, Qwen3-VL |
| OCR | DeepSeek-OCR, Hunyuan-OCR, PaddleOCR-VL |
| ASR | GLM-ASR-Nano, Fun-ASR-Nano, Qwen3-ASR |
| Audio | VoxCPM, VoxCPM1.5 |
| Image | RMBG-2.0 (background removal) |

| Document | Description |
|---|---|
| Getting Started | First steps with aha |
| Installation | Detailed installation guide |
| CLI Reference | Command-line interface |
| API Documentation | Library & REST API |
| Supported Models | Available AI models |
| Concepts | Architecture & design |
| Development | Contributing guide |
| Changelog | Version history |

- 🚀 High-Performance Inference — Powered by the Candle framework for efficient tensor computation and model inference
- 🔧 Unified Interface — One tool for text, vision, speech, and OCR
- 📦 Local-First — All processing runs locally, no data leaves your machine
- 🎯 Cross-Platform — Works on Linux, macOS, and Windows
- ⚡ GPU Accelerated — Optional CUDA and Metal support for faster inference
- 🛡️ Memory Safe — Built with Rust for reliability
- 🧠 Attention Optimization — Optional Flash Attention support for faster long-sequence processing
Use aha as a library:

```bash
cargo add aha
```
```rust
// VoxCPM text-to-speech example
use aha::models::voxcpm::generate::VoxCPMGenerate;
use aha::utils::audio_utils::save_wav;
use anyhow::Result;

fn main() -> Result<()> {
    // Path to the locally downloaded model directory.
    let model_path = "xxx/openbmb/VoxCPM-0.5B/";
    let mut voxcpm_generate = VoxCPMGenerate::init(model_path, None, None)?;
    let generate = voxcpm_generate.generate(
        "The sun is shining bright, flowers smile at me, and the birds say good morning.".to_string(),
        None,
        None,
        // Generation parameters; see VoxCPMGenerate::generate for their meaning.
        2,
        100,
        10,
        2.0,
        false,
        6.0,
    )?;
    save_wav(&generate, "voxcpm.wav")?;
    Ok(())
}
```

To add a new model (a minimal skeleton is sketched after this list):

- Create a new model file in `src/models/`
- Export it in `src/models/mod.rs`
- Add CLI inference support in `src/exec/`
- Add tests and examples in `tests/`
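The exact structure varies by model, but a new module generally mirrors the existing ones under `src/models/`. Here is a purely illustrative sketch; the `MyModelGenerate` name and the method signatures are hypothetical, not part of the aha API:

```rust
// src/models/mymodel.rs (hypothetical skeleton; mirror an existing module)
use anyhow::Result;

pub struct MyModelGenerate {
    // weights, tokenizer, device handle, ...
}

impl MyModelGenerate {
    /// Load config, tokenizer, and weights from a local model directory.
    pub fn init(model_path: &str) -> Result<Self> {
        let _ = model_path; // parse config and load safetensors here
        Ok(Self {})
    }

    /// Run one inference pass and return the decoded output.
    pub fn generate(&mut self, input: String) -> Result<String> {
        Ok(input) // forward pass and decoding would go here
    }
}
```

Once the module compiles, re-export it from `src/models/mod.rs` (e.g. `pub mod mymodel;`) and wire it into the CLI dispatch in `src/exec/` so `aha run` can select it.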
Design goals:

- High-performance inference via the Candle framework
- Multi-modal model support (vision, language, speech)
- Clean, easy-to-use API design
- Minimal dependencies, compact binaries
- Flash Attention support for long sequences
- FFmpeg support for multimedia processing
Apache-2.0 — See LICENSE for details.
- Candle - Excellent Rust ML framework
- All model authors and contributors
Built with ❤️ by the aha team
We're continuously expanding our model support. Contributions are welcome!
If this project helps you, please consider giving us a ⭐ Star!