Simplify your AI media generation workflows with Visionatrix, an intuitive interface built on top of ComfyUI.
- Easy Setup & Updates: Quick setup with simple installation and seamless version updates.
- Minimalistic UI: Clean, user-friendly interface designed for daily workflow usage.
- Prompt Translation Support: Automatically translate prompts for media generation.
- Stable Workflows: Versioned and upgradable workflows.
- Scalability: Run multiple instances with simultaneous task workers for increased productivity.
- Multi-User Support: Configure for multiple users with ease and integrate different user backends.
- LLM Integration: Effortlessly incorporate Ollama/Gemini as your LLM for ComfyUI workflows.
- Seamless Integration: Run as a service with backend endpoints for smooth project integration.
- LoRA Integration: Easily integrate LoRAs from CivitAI into your flows.
- Docker Compose: Official Docker images and a pre-configured Docker Compose file.
Access the Visionatrix UI at http://localhost:8288 (default).
Note: Starting from version 1.10, Visionatrix launches the ComfyUI web server at http://127.0.0.1:8188.
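If you want to confirm that the services are reachable, a small TCP probe can help. This helper is not part of Visionatrix; the ports are the defaults mentioned above:

```python
import socket

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Only meaningful while Visionatrix is running:
# is_up("localhost", 8288)   # Visionatrix UI (default)
# is_up("127.0.0.1", 8188)   # ComfyUI web server (version 1.10+)
```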
We provide a public template for RunPod to help you quickly evaluate whether this project fits your needs.
- Python 3.10 or higher (3.12 recommended)
- GPU with at least 8 GB of memory (12 GB recommended)
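As a quick sanity check, you can verify your interpreter against the minimum above. This is a minimal sketch; the version bounds come from the requirements listed here:

```python
import sys

MIN_VERSION = (3, 10)   # minimum supported
RECOMMENDED = (3, 12)   # recommended

def python_ok(version=None) -> bool:
    """Check a (major, minor) version tuple against the 3.10 minimum."""
    version = version or sys.version_info[:2]
    return tuple(version) >= MIN_VERSION

print(python_ok())
```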
Install prerequisites (Python, Git, etc.).

For Ubuntu 22.04:

```shell
sudo apt install wget curl python3-venv python3-pip build-essential git
```

It is also recommended to install the FFmpeg dependencies with:

```shell
sudo apt install ffmpeg libsm6 libxext6
```

Download and run the easy_install.py script:
Note: This script will clone the Visionatrix repository into your current folder and perform the installation. After installation, you can always run `easy_install` from the "scripts" folder.
Using wget:

```shell
wget -O easy_install.py https://raw.githubusercontent.com/Visionatrix/Visionatrix/main/scripts/easy_install.py && python3 easy_install.py
```

Using curl:

```shell
curl -o easy_install.py https://raw.githubusercontent.com/Visionatrix/Visionatrix/main/scripts/easy_install.py && python3 easy_install.py
```

Follow the prompts during installation. In most cases, everything should work smoothly.
To launch Visionatrix from the activated virtual environment:

```shell
python -m visionatrix run --ui
```

We offer a portable version to simplify installation (no need for Git or Visual Studio compilers).
Currently, we provide versions for CUDA/CPU. If there's demand, we can add a DirectML version.
- Install VC++ Redistributable: vc_redist.x64.exe from this Microsoft page.
- Download: Visit our Releases page.
- Get the Portable Archive: Download `vix_portable_cuda.7z`.
- Unpack and Run: Extract the archive and run `run_nvidia_gpu.bat` or `run_cpu.bat`.
For manual installation steps, please refer to our detailed documentation.
The easiest way to set up paths is through the user interface, by going to Settings -> ComfyUI.
In most cases, it is simplest to set the ComfyUI base data folder to an absolute path where you want to store models, task results, and settings.
This will allow you to freely reinstall everything from scratch without losing data or models.
Note: For easy Windows portable upgrades, we assume you have the `ComfyUI base data folder` parameter set.
We highly recommend filling in both the CivitAI token and the HuggingFace token in the settings.
Many models cannot be downloaded by public users without a token.
Run the easy_install script and select the "Update" option.
```shell
python3 easy_install.py
```

Updating the portable version involves:
- Unpacking the new portable version.
- Moving `visionatrix.db` from the old version to the new one.
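The second step can also be scripted. Here is a minimal sketch; the helper name and folder paths are illustrative placeholders, not part of Visionatrix:

```python
import shutil
from pathlib import Path

def carry_over_db(old_dir: str, new_dir: str) -> bool:
    """Copy visionatrix.db from the old portable folder into the new one.

    Returns True if the database file was found and copied.
    """
    src = Path(old_dir) / "visionatrix.db"
    if not src.exists():
        return False
    shutil.copy2(src, Path(new_dir) / "visionatrix.db")
    return True
```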
Hint
Alternatively, you can specify a custom path for visionatrix.db using the DATABASE_URI environment variable. This allows you to keep the database file outside the portable archive and skip step 2.
For example, setting DATABASE_URI to:
`sqlite+aiosqlite:///C:/Users/alex/visionatrix.db`
will direct Visionatrix to use the C:\Users\alex\visionatrix.db file.
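If you build the URI programmatically, a small helper like the following keeps Windows backslashes out of the value. This is an illustrative sketch, not part of Visionatrix; only the `sqlite+aiosqlite` scheme comes from the example above:

```python
def sqlite_uri(path: str) -> str:
    """Build a DATABASE_URI value in the sqlite+aiosqlite scheme shown above."""
    # SQLAlchemy-style SQLite URIs use forward slashes even on Windows
    return "sqlite+aiosqlite:///" + path.replace("\\", "/")

print(sqlite_uri(r"C:\Users\alex\visionatrix.db"))
# sqlite+aiosqlite:///C:/Users/alex/visionatrix.db
```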
We provide official Docker images along with a pre-configured docker-compose.yml file, making deployment faster and easier. The file is located at the root of the Visionatrix repository.
Our Docker images are primarily hosted on GitHub Container Registry (GHCR): ghcr.io/visionatrix/visionatrix. This is the default used by the docker-compose.yml file.
For users who experience slow download speeds from GHCR (e.g., on certain cloud providers), we also provide a mirror on Docker Hub: docker.io/bigcat88/visionatrix.
- `visionatrix_nvidia`: Visionatrix with NVIDIA GPU support.
- `visionatrix_amd`: Visionatrix with AMD GPU support.
- `visionatrix_cpu`: Visionatrix running on CPU only.
- `pgsql`: A PostgreSQL 17 container for the database.
Choose the service appropriate for your hardware:
- For NVIDIA GPU support:

  ```shell
  docker compose up -d visionatrix_nvidia
  ```

- For AMD GPU support:

  ```shell
  docker compose up -d visionatrix_amd
  ```

- For CPU mode:

  ```shell
  docker compose up -d visionatrix_cpu
  ```
By default, these commands pull images from GHCR. A visionatrix-data directory will be created in the current directory on the host and used for models, user data, input, and output files.
You can easily customize the configuration by modifying environment variables or volume mounts in the docker-compose.yml file.
If you prefer to pull images from Docker Hub instead of GHCR, you can set the VIX_IMAGE_BASE environment variable before running docker compose up.
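The effect of VIX_IMAGE_BASE can be illustrated with a short sketch. The helper and the "latest" tag are assumptions for the example; Compose performs the equivalent substitution internally:

```python
import os

# Default registry base used by the docker-compose.yml (from the text above)
DEFAULT_BASE = "ghcr.io/visionatrix/visionatrix"

def image_ref(tag: str = "latest") -> str:
    """Resolve the image reference, honoring a VIX_IMAGE_BASE override."""
    base = os.environ.get("VIX_IMAGE_BASE", DEFAULT_BASE)
    return f"{base}:{tag}"

print(image_ref())  # GHCR unless VIX_IMAGE_BASE points at Docker Hub
```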
Method 1: Using a .env file

- Create a file named `.env` in the same directory as your `docker-compose.yml` file.
- Add the following line to the `.env` file:

  ```shell
  VIX_IMAGE_BASE=docker.io/bigcat88/visionatrix
  ```

- Now, run `docker compose up` as usual. Compose will automatically read the `.env` file and use the Docker Hub images.

  ```shell
  # Example: Start the NVIDIA service using images from Docker Hub defined in .env
  docker compose up -d visionatrix_nvidia
  ```
Method 2: Setting the variable temporarily
You can set the environment variable directly on the command line for a single command execution:
```shell
VIX_IMAGE_BASE=docker.io/bigcat88/visionatrix docker compose up -d visionatrix_nvidia
```

If you have any questions or need assistance, we're here to help! Feel free to start a discussion or explore our resources:
- Documentation
- Available Flows
- Admin Manual
- Flows Developing
- Common Information