🤖 Docker-based tool for managing Ollama models with backup and compose generation.
OMO (Oh-My-Ollama / Ollama Model Organizer) is a Docker-based shell script for managing Ollama models. It provides essential model lifecycle operations including download, backup, restore, and removal, plus generates Docker Compose configurations for production deployments.
- Download Models: Install models from the Ollama registry or HuggingFace repositories (GGUF format only)
- List Models: Display installed models with detailed information
- Remove Models: Delete models individually or in batch
- Model Verification: Check model integrity after operations
- Complete Backup: Full model backup including manifests and blob files
- Integrity Verification: MD5 checksum validation for backups
- Auto-Restore: `--install` automatically checks and restores from backup before downloading
- Manual Restore: Direct restore from backup (use only when needed)
- Batch Operations: Backup or restore multiple models
- Compose Generation: Create Docker Compose configurations
- Multi-Service Stack: Integrated Ollama, One-API, and Open-WebUI services
- GPU Support: Automatic NVIDIA GPU detection and configuration
- Production Ready: Proper volume management and networking
- Docker: Required for all operations
- NVIDIA GPU: Optional, for GPU acceleration (a quick prerequisite check is sketched below)
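A quick way to confirm both prerequisites before running the script (illustrative; omo.sh also verifies the Docker daemon itself):

```bash
# Verify the Docker daemon is reachable
docker info >/dev/null 2>&1 && echo "Docker OK" || echo "Docker not available" >&2

# Optional: confirm an NVIDIA GPU is visible to the driver
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi --query-gpu=name --format=csv,noheader
```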
- Clone the repository:

```bash
git clone https://github.com/LaiQE/omo.git
cd omo
```

- Make the script executable:

```bash
chmod +x omo.sh
```

- Edit the models list file:

```bash
# Edit models.list to add your desired models
# The repository includes a template with examples
vim models.list
```

Basic usage:

```bash
# Show help
./omo.sh --help
# Install missing models from models.list
./omo.sh --install
# List installed models
./omo.sh --list
# Backup a specific model
./omo.sh --backup ollama:qwen2.5:7b-instruct
# Backup all models
./omo.sh --backup-all
# Restore from backup (manual, not recommended)
# Note: --install automatically restores from backup if available
./omo.sh --restore qwen2.5_7b-instruct
# Remove a model
./omo.sh --remove deepseek-r1:1.5b
# Remove all models
./omo.sh --remove-all
# Generate Docker Compose
./omo.sh --generate-compose
```

Create a `models.list` file with one model per line:

```
# Ollama registry models
ollama:deepseek-r1:1.5b
ollama:llama3.2:3b
ollama:qwen2.5:7b-instruct
# HuggingFace GGUF models
hf-gguf:hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:latest
hf-gguf:hf.co/MaziyarPanahi/gemma-3-1b-it-GGUF:q4_0
# Comments and empty lines are ignored
# Add your own models here
```

| Format | Description | Example |
|---|---|---|
| `ollama:model:tag` | Official Ollama models | `ollama:llama3.2:3b` |
| `hf-gguf:repo:file` | HuggingFace GGUF models | `hf-gguf:hf.co/author/model:file` |
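Each entry can be split on the first colon to separate the source type from the model reference. The sketch below is illustrative only and not necessarily how omo.sh parses the file:

```bash
# Minimal models.list parser sketch (hypothetical; omo.sh's real logic may differ)
while IFS= read -r line; do
  # Comments and empty lines are ignored, per the format above
  case "$line" in ""|"#"*) continue ;; esac
  source="${line%%:*}"   # e.g. "ollama" or "hf-gguf"
  model="${line#*:}"     # e.g. "deepseek-r1:1.5b" or "hf.co/author/model:file"
  echo "source=$source  model=$model"
done < models.list
```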
```
omo/
├── omo.sh                # Main script
├── models.list           # Model configuration
├── ollama/               # Ollama data directory
│   └── models/           # Model storage
├── backups/              # Model backups
└── docker-compose.yml    # Generated compose file (optional)
```
| Option | Description |
|---|---|
| `--install` | Download missing models |
| `--check-only` | Check status only (default) |
| `--force-download` | Force re-download all models |
| `--list` | List installed models |
| `--backup MODEL` | Backup specific model |
| `--backup-all` | Backup all models |
| `--restore FILE` | Restore from backup |
| `--remove MODEL` | Remove specific model |
| `--remove-all` | Remove all models |
| `--generate-compose` | Generate docker-compose.yml |
| Option | Description | Default |
|---|---|---|
| `--models-file FILE` | Model list file | `./models.list` |
| `--ollama-dir DIR` | Ollama data directory | `./ollama` |
| `--backup-dir DIR` | Backup directory | `./backups` |
| `--verbose` | Enable verbose logging | - |
| `--force` | Skip confirmations | - |
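General options can be combined with any of the main options. For example (combinations assembled from the tables above; adjust paths to your setup):

```bash
# Install from a custom model list, skipping confirmation prompts
./omo.sh --models-file ./production.list --force --install

# Back up all models to a custom directory with verbose logging
./omo.sh --backup-dir /mnt/storage/ollama-backups --verbose --backup-all
```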
Generate a complete Docker stack:
```bash
./omo.sh --generate-compose
```

This creates a `docker-compose.yml` with:
| Service | Port | Description |
|---|---|---|
| Ollama | 11434 | Model runtime with GPU support |
| One-API | 3000 | API gateway and management |
| Open-WebUI | 3001 | Web interface for chat |
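Assuming the default ports above, you can bring the stack up and confirm the Ollama API is responding (`/api/tags` lists installed models):

```bash
# Start the generated stack
docker-compose up -d

# Ollama should answer on port 11434
curl http://localhost:11434/api/tags

# Open-WebUI is then available in the browser at http://localhost:3001
```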
External References:
- Ollama - AI model runtime
- One-API - OpenAI API gateway
- Open-WebUI - Web interface
Environment variables and custom directories:

```bash
# Enable verbose logging
export VERBOSE="true"

# Use custom directories
./omo.sh --ollama-dir /custom/ollama --backup-dir /custom/backups
```

A complete workflow, from checking models to running the stack:

```bash
# 1. Check what models need to be downloaded
./omo.sh --check-only
# 2. Install missing models
./omo.sh --install
# 3. List all installed models
./omo.sh --list
# 4. Backup important models
./omo.sh --backup ollama:qwen2.5:7b-instruct
# 5. Generate Docker Compose for deployment
./omo.sh --generate-compose
# 6. Start the stack
docker-compose up -d
```

Backup and restore:

```bash
# Recommended workflow: Backup all models
./omo.sh --backup-all
# Install will automatically restore from backup if available
# This is the recommended way - no manual restore needed
./omo.sh --install
# Manual restore (only use when auto-restore doesn't work)
./omo.sh --restore qwen2.5_7b-instruct
# Force restore (overwrite existing)
./omo.sh --force --restore qwen2.5_7b-instruct
```

OMO includes basic error handling for:
- Network Issues: Clear error messages for download failures
- Docker Problems: Docker daemon status verification
- File Integrity: MD5 checksum validation for backups (a sketch of this check follows the list)
- Model Conflicts: Confirmation prompts for destructive operations
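As an illustration of the integrity check, a backup archive can be validated against a stored MD5 checksum roughly as follows (the file names and backup layout here are hypothetical; see omo.sh for the actual implementation):

```bash
# Hypothetical paths; the actual backup layout may differ
backup="backups/qwen2.5_7b-instruct.tar.gz"

# Compare the stored checksum with a freshly computed one
expected=$(cut -d' ' -f1 "${backup}.md5")
actual=$(md5sum "$backup" | cut -d' ' -f1)

if [ "$expected" = "$actual" ]; then
  echo "Backup integrity OK"
else
  echo "Checksum mismatch - backup may be corrupted" >&2
  exit 1
fi
```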
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Author: Chain Lai
Repository: https://github.com/LaiQE/omo