Run any AI model locally with enterprise-grade performance and privacy
Inferno is a production-ready AI inference server that runs entirely on your hardware. Think of it as your private ChatGPT that works offline, supports multiple model formats, and gives you complete control over your AI infrastructure.
- 100% Local: All processing happens on your hardware
- No Cloud Dependency: Works completely offline
- Your Data Stays Yours: Zero telemetry or external data transmission
- GGUF Models: Native support for Llama, Mistral, CodeLlama, and more
- ONNX Models: Run models from PyTorch, TensorFlow, scikit-learn
- Format Conversion: Convert between GGUF ↔ ONNX ↔ PyTorch ↔ SafeTensors
- Auto-Optimization: Automatic quantization and hardware optimization
- GPU Acceleration: NVIDIA, AMD, Apple Silicon, Intel support
- Smart Caching: Remember previous responses for instant results
- Batch Processing: Handle thousands of requests efficiently
- Load Balancing: Distribute work across multiple models/GPUs
- OpenAI-Compatible API: Drop-in replacement for the ChatGPT API (see the example after this list)
- REST & WebSocket: Standard APIs plus real-time streaming
- Multiple Languages: Python, JavaScript, Rust, cURL examples
- Docker Ready: One-command deployment
- Smart CLI: Typo detection, helpful error messages, setup guidance
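Because the HTTP server speaks the OpenAI wire format, existing client libraries can be pointed at it unchanged. The sketch below uses the official Python `openai` package and assumes `inferno serve` is listening on `localhost:8080` with the standard `/v1` routes; the model name and API key are placeholders, not values shipped with Inferno.

```python
# Minimal sketch: reuse the official OpenAI Python client against a local
# Inferno server. Assumes `inferno serve` is running on localhost:8080 and
# exposes the standard /v1 chat completions route.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local Inferno server instead of api.openai.com
    api_key="not-needed-locally",         # placeholder; supply a real key if auth is enabled
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder; use a name from `inferno models list`
    messages=[{"role": "user", "content": "Summarize what local inference buys me."}],
)
print(response.choices[0].message.content)
```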
Choose your preferred installation method:
- Visit Releases
- Download `inferno-universal-vX.X.X.dmg` (universal binary for Intel & Apple Silicon)
- Open the DMG file and drag Inferno to Applications
- Launch from Applications or use the `inferno` command in Terminal
```bash
# Add tap and install
brew tap ringo380/tap
brew install inferno

# Or directly
brew install ringo380/tap/inferno

# Start as service
brew services start inferno
```

```bash
curl -sSL https://github.com/ringo380/inferno/releases/latest/download/install-inferno.sh | bash
```

```bash
# Pull the latest image
docker pull ghcr.io/ringo380/inferno:latest
# Run with GPU support
docker run --gpus all -p 8080:8080 ghcr.io/ringo380/inferno:latest
# With custom models directory
docker run -v /path/to/models:/home/inferno/.inferno/models \
  -p 8080:8080 ghcr.io/ringo380/inferno:latest
```

```yaml
version: '3.8'
services:
  inferno:
    image: ghcr.io/ringo380/inferno:latest
    ports:
      - "8080:8080"
    volumes:
      - ./models:/home/inferno/.inferno/models
      - ./config:/home/inferno/.inferno/config
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

```bash
# From crates.io
cargo install inferno
# From GitHub Packages
cargo install --registry github inferno
```

```bash
# From GitHub Packages
npm install @ringo380/inferno-desktop
# From npm registry
npm install inferno-desktop
```

```bash
# Download for your architecture
wget https://github.com/ringo380/inferno/releases/latest/download/inferno-linux-x86_64
# or
wget https://github.com/ringo380/inferno/releases/latest/download/inferno-linux-aarch64
# Make executable and move to PATH
chmod +x inferno-linux-*
sudo mv inferno-linux-* /usr/local/bin/inferno
```

- Download `inferno-windows-x86_64.exe` from Releases
- Add to your PATH or run directly
```bash
# Or install with Cargo
cargo install inferno
```

```bash
# Clone the repository
git clone https://github.com/ringo380/inferno.git
cd inferno
# Build release binary
cargo build --release
# Install globally (optional)
cargo install --path .
# Build desktop app (optional)
cd desktop-app && npm install && npm run build
```

To upgrade an existing installation:

```bash
inferno upgrade check    # Check for updates
inferno upgrade install  # Install latest version
```

```bash
# Homebrew
brew upgrade inferno
# Docker
docker pull ghcr.io/ringo380/inferno:latest
# Cargo
cargo install inferno --force
# NPM
npm update @ringo380/inferno-desktop
```

Note: DMG and installer packages automatically detect existing installations and preserve your settings during upgrade.

Verify the installation:

```bash
# Check version
inferno --version
# Verify GPU support
inferno gpu status
# Run health check
inferno doctor
```

Quick start:

```bash
# List available models
inferno models list
# Run inference
inferno run --model MODEL_NAME --prompt "Your prompt here"
# Start HTTP API server
inferno serve
# Launch terminal UI
inferno tui
# Launch desktop app (if installed from DMG)
open /Applications/Inferno.app
```

Current capabilities:

- ✅ Real GGUF Support: Full llama.cpp integration
- ✅ Real ONNX Support: Production ONNX Runtime with GPU acceleration
- ✅ Model Conversion: Real-time format conversion with optimization
- ✅ Quantization: Q4_0, Q4_1, Q5_0, Q5_1, Q8_0, F16, F32 support
- ✅ Authentication: JWT tokens, API keys, role-based access
- ✅ Monitoring: Prometheus metrics, OpenTelemetry tracing
- ✅ Audit Logging: Encrypted logs with multi-channel alerting
- ✅ Batch Processing: Cron scheduling, retry logic, job dependencies
- ✅ Caching: Multi-tier caching with compression and persistence
- ✅ Load Balancing: Distribute inference across multiple backends
- ✅ OpenAI Compatible: Use existing ChatGPT client libraries (see the streaming example after this list)
- ✅ REST API: Standard HTTP endpoints for all operations
- ✅ WebSocket: Real-time streaming and bidirectional communication
- ✅ CLI Interface: 40+ commands for all AI/ML operations
- ✅ Desktop App: Cross-platform Tauri application
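As a rough illustration of the streaming capability, the sketch below requests a streamed completion through the same OpenAI-compatible endpoint. It assumes the compatibility layer honours the standard `stream` flag; the model name is again a placeholder.

```python
# Hedged sketch: stream tokens through the OpenAI-compatible endpoint,
# assuming Inferno honours the standard `stream=True` flag.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

stream = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder; use a name from `inferno models list`
    messages=[{"role": "user", "content": "Write a haiku about local inference."}],
    stream=True,                    # receive incremental chunks instead of one response
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```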
Built with a modular, trait-based architecture supporting pluggable backends:
```
src/
├── main.rs      # CLI entry point
├── lib.rs       # Library exports
├── config.rs    # Configuration management
├── backends/    # AI model execution backends
├── cli/         # 40+ CLI command modules
├── api/         # HTTP/WebSocket APIs
├── batch/       # Batch processing system
├── models/      # Model discovery and metadata
└── [Enterprise] # Advanced production features
```
Create `inferno.toml`:

```toml
# Basic settings
models_dir = "/path/to/models"
log_level = "info"
[server]
bind_address = "0.0.0.0"
port = 8080
[backend_config]
gpu_enabled = true
context_size = 4096
batch_size = 64
[cache]
enabled = true
compression = "zstd"
max_size_gb = 10
```

See CLAUDE.md for comprehensive development documentation.

```bash
# Run tests
cargo test
# Format code
cargo fmt
# Run linter
cargo clippy
# Full verification
./verify.sh
```

Licensed under either of:
- Apache License, Version 2.0
- MIT License
🔥 Ready to take control of your AI infrastructure? 🔥
Built with ❤️ by the open source community