A Nix flake for installing and running ComfyUI with Python 3.12. Supports both macOS (Intel/Apple Silicon) and Linux with automatic GPU detection.
Note: Pull requests are more than welcome! Contributions to this open project are appreciated.
```
nix run github:utensils/comfyui-nix -- --open
```

- Provides ComfyUI packaged with Python 3.12
- Reproducible environment through Nix flakes
- Hybrid approach: Nix for environment management, pip for Python dependencies
- Cross-platform support: macOS (Intel/Apple Silicon) and Linux
- Automatic GPU detection: CUDA on Linux, MPS on Apple Silicon
- Configurable CUDA version via the `CUDA_VERSION` environment variable
- Persistent user data directory with automatic version upgrades
- Includes ComfyUI-Manager for easy extension installation
- Improved model download experience with automatic backend downloads
- Code quality tooling: ruff (linter/formatter), pyright (type checker), shellcheck
- CI validation with `nix flake check` (build, lint, type-check, shellcheck, nixfmt)
- Built-in formatters: `nix fmt` (Nix files), `nix run .#format` (Python files)
- Overlay for easy integration with other flakes
```
# Run a specific version using a commit hash
nix run github:utensils/comfyui-nix/[commit-hash] -- --open
```

- `--open`: Automatically opens ComfyUI in your browser when the server is ready
- `--port=XXXX`: Run ComfyUI on a specific port (default: 8188)
- `--base-directory PATH`: Set the data directory for models, input, output, and custom_nodes (default: `~/.config/comfy-ui`). Quote paths with spaces: `--base-directory "/path/with spaces"`
- `--debug` or `--verbose`: Enable detailed debug logging
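For example, combining the flags above to serve on a custom port with debug logging:

```
# Flags for ComfyUI go after the `--` separator
nix run github:utensils/comfyui-nix -- --port=8080 --debug --open
```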
- `CUDA_VERSION`: CUDA version for PyTorch (default: `cu124`; options: `cu118`, `cu121`, `cu124`, `cpu`)
```
# Example: Use CUDA 12.1
CUDA_VERSION=cu121 nix run github:utensils/comfyui-nix

# Example: Use a custom data directory (e.g., on a separate drive)
nix run github:utensils/comfyui-nix -- --base-directory ~/AI
```

```
# Enter a development shell with all dependencies
nix develop
```

The development shell includes: Python 3.12, git, shellcheck, shfmt, nixfmt, ruff, pyright, jq, and curl.
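Inside the shell, those tools are on `PATH` and can be run directly against the repository, for example:

```
# Run the bundled tooling from inside `nix develop`
ruff check .            # lint Python sources
pyright                 # type-check Python sources
shellcheck scripts/*.sh # lint the launcher scripts
```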
```
# Format Nix files
nix fmt

# Format Python code
nix run .#format

# Lint Python code
nix run .#lint

# Type check Python code
nix run .#type-check

# Run all checks (build, lint, type-check, shellcheck, nixfmt)
nix flake check

# Check for ComfyUI updates
nix run .#update
```

You can install ComfyUI to your profile:

```
nix profile install github:utensils/comfyui-nix
```

The flake is designed to be simple and extensible. You can customize it by:
- Adding Python packages in the `pythonEnv` definition
- Modifying the launcher script in `scripts/launcher.sh`
- Pinning to a specific ComfyUI version by changing the version variables at the top of `flake.nix`
You can integrate this flake into your own Nix configuration using the overlay:
```
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
comfyui-nix.url = "github:utensils/comfyui-nix";
};
outputs = { self, nixpkgs, comfyui-nix }: {
# Use in NixOS configuration
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
modules = [
({ pkgs, ... }: {
nixpkgs.overlays = [ comfyui-nix.overlays.default ];
environment.systemPackages = [ pkgs.comfy-ui ];
})
];
};
};
}
```

This flake uses a modular, multi-file approach for better maintainability:
- `flake.nix` - Main flake definition and package configuration
- `scripts/` - Modular launcher scripts:
  - `launcher.sh` - Main entry point that orchestrates the launching process
  - `config.sh` - Configuration variables and settings
  - `logger.sh` - Logging utilities with support for different verbosity levels
  - `install.sh` - Installation and setup procedures
  - `persistence.sh` - Symlink creation and data persistence management
  - `runtime.sh` - Runtime execution and process management
This modular structure makes the codebase much easier to maintain, debug, and extend as features are added. Each script has a single responsibility, improving code organization and readability.
User data is stored in `~/.config/comfy-ui` with the following structure:
- `app/` - ComfyUI application code (auto-updated when the flake changes)
- `models/` - Stable Diffusion models and other model files
- `output/` - Generated images and other outputs
- `user/` - User configuration and custom nodes
- `input/` - Input files for processing
This structure ensures your models, outputs, and custom nodes persist between application updates.
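For example, to add a manually downloaded checkpoint to the persistent store (the filename below is a placeholder):

```
# Model files dropped here survive application updates
mkdir -p ~/.config/comfy-ui/models/checkpoints
mv ~/Downloads/some-model.safetensors ~/.config/comfy-ui/models/checkpoints/
```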
macOS:

- macOS 10.15+ (Intel or Apple Silicon)
- Nix package manager

Linux:

- x86_64 Linux distribution
- Nix package manager
- NVIDIA GPU with drivers (optional, for CUDA acceleration)
- glibc 2.27+
On macOS:

- Uses stable PyTorch releases with MPS (Metal Performance Shaders) support
- Enables FP16 precision mode for better performance
- Sets optimal memory management parameters for macOS

On Linux:

- Automatic NVIDIA GPU detection and CUDA setup
- Configurable CUDA version (default: 12.4; supports 11.8, 12.1, 12.4)
- Automatic library path configuration for system libraries
- Falls back to CPU-only mode if no GPU is detected
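CPU-only mode can also be forced explicitly via the documented `cpu` value of `CUDA_VERSION`:

```
# Force the CPU-only PyTorch build regardless of detected hardware
CUDA_VERSION=cpu nix run github:utensils/comfyui-nix
```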
The flake automatically detects your hardware and installs the appropriate PyTorch version:
- Linux with NVIDIA GPU: PyTorch with CUDA support (configurable via `CUDA_VERSION`)
- macOS with Apple Silicon: PyTorch with MPS acceleration
- Other systems: CPU-only PyTorch
This flake currently provides:
- ComfyUI v0.4.0
- Python 3.12
- PyTorch stable releases (with MPS support on Apple Silicon, CUDA on Linux)
- ComfyUI-Manager for extension management
- Frontend managed via ComfyUI's `requirements.txt`
To check for updates:
```
nix run .#update
```

This flake includes a custom patch for the model downloading experience. Unlike the default ComfyUI implementation, this patch ensures that models selected in the UI are downloaded automatically in the background, without manual intervention. This significantly improves the user experience, especially for new users who may not be familiar with obtaining and placing model files.
The codebase follows a modular structure under the `src` directory to improve maintainability and organization:
```
src/
├── custom_nodes/            # Custom node implementations
│   ├── model_downloader/    # Automatic model downloading functionality
│   │   ├── js/              # Frontend JavaScript components
│   │   └── ...              # Backend implementation files
│   └── main.py              # Entry point for custom nodes
├── patches/                 # Runtime patches for ComfyUI
│   ├── custom_node_init.py  # Custom node initialization
│   └── main.py              # Entry point for patches
└── persistence/             # Data persistence implementation
    ├── persistence.py       # Core persistence logic
    └── main.py              # Persistence entry point
```
- `custom_nodes`: Contains custom node implementations that extend ComfyUI's functionality
  - `model_downloader`: Provides automatic downloading of models when selected in the UI
    - `js`: Frontend components for download status and progress reporting
    - `model_downloader_patch.py`: Backend API endpoints for model downloading
- `patches`: Contains runtime patches that modify ComfyUI's behavior
  - `custom_node_init.py`: Initializes custom nodes and registers their API endpoints
  - `main.py`: Coordinates the loading and application of patches
- `persistence`: Manages data persistence across ComfyUI runs
  - `persistence.py`: Creates and maintains the directory structure and symlinks
  - `main.py`: Handles the persistence setup before launching ComfyUI
This structure ensures clear separation of concerns and makes the codebase easier to maintain and extend.
This flake includes Docker support for running ComfyUI in a containerized environment while preserving all functionality. Multi-architecture images are available for both x86_64 (amd64) and ARM64 (aarch64) platforms.
Pre-built Docker images are automatically published to GitHub Container Registry on every release. This is the easiest way to get started:
```
# Pull the latest CPU version (automatically selects the correct architecture)
docker pull ghcr.io/utensils/comfyui-nix:latest

# Run the container
docker run -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest
```

```
# Pull the latest CUDA version
docker pull ghcr.io/utensils/comfyui-nix:latest-cuda

# Run with GPU support
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest-cuda
```

Available tags:

- `latest` - Latest CPU version, multi-arch (amd64 + arm64)
- `latest-cuda` - Latest CUDA version (x86_64/amd64 only)
- `latest-amd64` - Latest CPU version for x86_64
- `latest-arm64` - Latest CPU version for ARM64
- `X.Y.Z` - Specific version (CPU, multi-arch)
- `X.Y.Z-cuda` - Specific version (CUDA, x86_64 only)
Visit the packages page to see all available versions.
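To pin a specific published release instead of a moving tag (replace `X.Y.Z` with a real version from the packages page):

```
# Pull and run a fixed release
docker pull ghcr.io/utensils/comfyui-nix:X.Y.Z
docker run -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:X.Y.Z
```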
The `latest` and `latest-arm64` tags work on Apple Silicon Macs via Docker Desktop:

```
# Works on Apple Silicon Macs
docker run -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest
```

Important: Docker containers on macOS cannot access the Metal GPU (MPS), so the Docker image runs CPU-only on Apple Silicon. For GPU acceleration on Apple Silicon, use `nix run` directly instead of Docker:

```
# For GPU acceleration on Apple Silicon, use nix directly (not Docker)
nix run github:utensils/comfyui-nix
```

Use the included `buildDocker` command to create a Docker image:
```
# Build the Docker image
nix run .#buildDocker

# Or from remote
nix run github:utensils/comfyui-nix#buildDocker
```

This creates a Docker image named `comfy-ui:latest` in your local Docker daemon.
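You can confirm the image landed in the local daemon:

```
# List locally built comfy-ui images
docker images comfy-ui
```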
For Linux systems with NVIDIA GPUs, build the CUDA-enabled image:
```
# Build the CUDA-enabled Docker image
nix run .#buildDockerCuda

# Or from remote
nix run github:utensils/comfyui-nix#buildDockerCuda
```

This creates a Docker image named `comfy-ui:cuda` with GPU acceleration support.
Run the container with either the pre-built or locally-built image:
```
# Create a data directory for persistence
mkdir -p ./data

# Run the pre-built image from GitHub Container Registry
docker run -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest

# Or run the locally-built image
docker run -p 8188:8188 -v "$PWD/data:/data" comfy-ui:latest
```

For GPU-accelerated execution:
```
# Create a data directory for persistence
mkdir -p ./data

# Run the pre-built CUDA image from GitHub Container Registry
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest-cuda

# Or run the locally-built CUDA image
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" comfy-ui:cuda
```

Requirements for CUDA support:
- NVIDIA GPU with CUDA support
- NVIDIA drivers installed on the host system
- `nvidia-container-toolkit` package installed
- Docker configured for GPU support
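A quick way to verify that Docker can reach the GPU before starting ComfyUI (the CUDA base image tag here is just an example):

```
# Should print the same device table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```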
To install `nvidia-container-toolkit` on Ubuntu/Debian:
```
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

- Full functionality: Includes all the features of the regular ComfyUI installation
- Persistence: Data is stored in a mounted volume at `/data`
- Port exposure: Web UI available on port 8188
- Essential utilities: Includes bash, coreutils, git, and other necessary tools
- Proper environment: All environment variables set correctly for containerized operation
- GPU support: CUDA version includes proper environment variables for NVIDIA GPU access
The Docker image follows the same modular structure as the regular installation, ensuring consistency across deployment methods.
Docker images are automatically built and published to GitHub Container Registry via GitHub Actions:
- Trigger events: Push to main branch, version tags (v*), and pull requests
- Multi-architecture: CPU images built for both amd64 and arm64 (via QEMU emulation)
- Build matrix: CPU (multi-arch) and CUDA (x86_64 only) variants built in parallel
- Tagging strategy:
  - Main branch pushes: `latest` and `X.Y.Z` (version from `flake.nix`)
  - Version tags: `vX.Y.Z` and `latest`
  - Pull requests: `pr-N` (for testing, not pushed to the registry)
  - Architecture-specific: `latest-amd64`, `latest-arm64`
- Registry: All images are publicly accessible at `ghcr.io/utensils/comfyui-nix`
- Build cache: Nix builds are cached using Cachix for faster CI runs
The workflow uses Nix to ensure reproducible builds and leverages the same build configuration as local builds, guaranteeing consistency between development and production environments.
Use `--base-directory` to store all data (models, input, output, custom_nodes) on a separate drive:

```
nix run github:utensils/comfyui-nix -- --base-directory ~/AI
```

Expected structure in your base directory:
```
~/AI/
├── models/       # checkpoints, loras, vae, text_encoders, etc.
├── input/        # input files
├── output/       # generated outputs
├── custom_nodes/ # extensions
└── user/         # workflows and settings
```
For advanced setups with non-standard directory structures, use `--extra-model-paths-config` with a YAML file to map custom paths.
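A minimal sketch of such a config, assuming ComfyUI's standard `extra_model_paths.yaml` format (the paths below are placeholders, not part of this flake):

```
# Write a hypothetical mapping file and point ComfyUI at it
cat > ~/extra_model_paths.yaml <<'EOF'
comfyui:
  base_path: /mnt/storage/ai
  checkpoints: sd/checkpoints
  loras: sd/loras
EOF

nix run github:utensils/comfyui-nix -- --extra-model-paths-config ~/extra_model_paths.yaml
```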
Run Flux 2 Dev without offloading using GGUF quantization:

```
nix run github:utensils/comfyui-nix -- --base-directory ~/AI --listen 0.0.0.0 --use-pytorch-cross-attention --cuda-malloc --lowvram
```

Models (install ComfyUI-GGUF via Manager first):
```
# GGUF model → unet/ or diffusion_models/
curl -LO https://huggingface.co/orabazes/FLUX.2-dev-GGUF/resolve/main/flux2_dev_Q4_K_M.gguf

# Text encoder → text_encoders/
curl -LO https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/text_encoders/mistral_3_small_flux2_fp8.safetensors
```

Performance: ~50s/generation (20 steps), ~18.5GB VRAM, no offloading required.
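Once downloaded, move the files into the matching folders under the base directory (this assumes the `~/AI` layout shown above; whether the GGUF goes in `unet/` or `diffusion_models/` depends on your ComfyUI-GGUF setup):

```
# Place the downloads where ComfyUI will look for them
mkdir -p ~/AI/models/unet ~/AI/models/text_encoders
mv flux2_dev_Q4_K_M.gguf ~/AI/models/unet/
mv mistral_3_small_flux2_fp8.safetensors ~/AI/models/text_encoders/
```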
This flake is provided under the MIT license. ComfyUI itself is licensed under GPL-3.0.