A pure Nix flake for ComfyUI with Python 3.12. Supports macOS (Intel/Apple Silicon) and Linux.
```
nix run github:utensils/comfyui-nix -- --open
```

For CUDA (Linux/NVIDIA):

```
nix run github:utensils/comfyui-nix#cuda
```

All ComfyUI CLI options are supported. Common examples:
| Flag | Description |
|---|---|
| `--open` | Open browser when ready |
| `--port=XXXX` | Custom port (default: 8188) |
| `--base-directory PATH` | Data directory for models, outputs, custom nodes |
| `--listen 0.0.0.0` | Allow network access |
| `--enable-manager` | Enable built-in ComfyUI Manager |
| `--lowvram` | Reduce VRAM usage for limited GPUs |
| `--disable-api-nodes` | Disable built-in API nodes |
Default data locations:
- Linux: `~/.config/comfy-ui`
- macOS: `~/Library/Application Support/comfy-ui`
The built-in ComfyUI Manager is included and can be enabled with `--enable-manager`:

```
nix run github:utensils/comfyui-nix#cuda -- --enable-manager
```

How it stays pure: The Nix store remains read-only. When custom nodes require additional Python dependencies, they install to `<data-directory>/.pip-packages/` instead of the Nix store. Python finds both Nix-provided packages and runtime-installed packages via `PYTHONPATH`.

- Nix packages (read-only): torch, pillow, numpy, transformers, etc.
- Runtime packages (mutable): `<data-directory>/.pip-packages/`
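The `PYTHONPATH` layering described above can be sketched as follows. All paths here are illustrative placeholders, not the flake's actual store paths:

```python
import os

# Sketch: the launcher exports PYTHONPATH so that the mutable runtime
# directory and the read-only Nix site-packages are both importable.
nix_site = "/nix/store/example-python-env/lib/python3.12/site-packages"  # read-only
pip_packages = "/home/user/.config/comfy-ui/.pip-packages"               # mutable

# Equivalent of: export PYTHONPATH="$pip_packages:$nix_site"
# Entries are searched in order, so runtime installs are found first.
pythonpath = os.pathsep.join([pip_packages, nix_site])
print(pythonpath)
```

Because the mutable directory comes first on the path, a `pip install` performed by a custom node never needs to write into the store.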
A default manager config is created on first run with sensible defaults for personal use (security_level=normal, network_mode=personal_cloud).
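The two option values come from the flake's defaults; the file name and section layout below are an assumption for illustration:

```python
import configparser
import io

# Hypothetical sketch of the manager config written on first run.
config = configparser.ConfigParser()
config["default"] = {
    "security_level": "normal",        # default from this flake
    "network_mode": "personal_cloud",  # default from this flake
}

buf = io.StringIO()
config.write(buf)
print(buf.getvalue())
```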
The following custom nodes are bundled and automatically linked on first run:
A non-blocking async download node with WebSocket progress updates. Download models directly within ComfyUI without blocking the UI.
API Endpoints:
- `POST /api/download_model` - Start a download
- `GET /api/download_progress/{id}` - Check progress
- `GET /api/list_downloads` - List all downloads
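A minimal client sketch against these endpoints. The endpoint paths are from the list above; the JSON field names (`url`, `directory`) and the response shape are assumptions:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8188"  # default ComfyUI port

def start_download(url: str, directory: str) -> urllib.request.Request:
    # Builds (but does not send) the POST request to begin a download.
    body = json.dumps({"url": url, "directory": directory}).encode()
    return urllib.request.Request(
        f"{BASE}/api/download_model",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def progress_url(download_id: str) -> str:
    # URL to poll for progress on a previously started download.
    return f"{BASE}/api/download_progress/{download_id}"

req = start_download("https://example.com/model.safetensors", "checkpoints")
print(req.full_url, req.method)
print(progress_url("abc123"))
```

Send the request with `urllib.request.urlopen(req)` while a server is running; since the node pushes progress over WebSocket too, polling is only needed for headless scripts.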
Impact Pack (v8.28) - Detection, segmentation, and more. License: GPL-3.0
- SAM (Segment Anything Model) - Meta AI's segmentation models
- SAM2 - Next-generation segmentation
- Detection nodes - Face detection, object detection
- Masking tools - Advanced mask manipulation
rgthree-comfy (v1.0.0) - Quality of life nodes. License: MIT
- Reroute nodes - Better workflow organization
- Context nodes - Pass multiple values through a single connection
- Power Lora Loader - Advanced LoRA management
- Bookmark nodes - Quick navigation in large workflows
KJNodes - Utility nodes for advanced workflows. License: GPL-3.0
- Batch processing - Efficient batch image handling
- Conditioning tools - Advanced prompt manipulation
- Image utilities - Resize, crop, color matching
- Mask operations - Create and manipulate masks
ComfyUI-GGUF - GGUF quantization support for native ComfyUI models. License: Apache-2.0
- GGUF model loading - Load quantized GGUF models directly in ComfyUI
- Low VRAM support - Run large models on GPUs with limited memory
- Flux compatibility - Optimized for transformer/DiT models like Flux
- T5 quantization - Load quantized T5 text encoders for additional VRAM savings
ComfyUI-LTXVideo - LTX-Video support for ComfyUI. License: Apache-2.0
- Video generation - Generate videos with LTX-Video models
- Frame conditioning - Interpolation between given frames
- Sequence conditioning - Motion interpolation for video extension
- Prompt enhancer - Optimized prompts for best model performance
ComfyUI-Florence2 - Microsoft Florence2 vision-language model. License: MIT
- Image captioning - Generate detailed captions from images
- Object detection - Detect and locate objects in images
- OCR - Extract text from images
- Visual QA - Answer questions about image content
ComfyUI_bitsandbytes_NF4 - NF4 quantization for Flux models. License: AGPL-3.0
- NF4 checkpoint loading - Load NF4 quantized Flux checkpoints
- Memory efficiency - Run Flux models with reduced VRAM usage
- Flux Dev/Schnell support - Compatible with both Flux variants
x-flux-comfyui - XLabs Flux LoRA and ControlNet. License: Apache-2.0
- Flux LoRA support - Load and apply LoRA models for Flux
- ControlNet integration - Canny, Depth, HED ControlNets for Flux
- IP Adapter - Image-prompt adaptation for Flux
- 12GB VRAM support - Optimized for consumer GPUs
ComfyUI-MMAudio - Synchronized audio generation from video. License: MIT
- Video-to-audio - Generate audio that matches video content
- Text-to-audio - Create audio from text descriptions
- Multi-modal training - Trained on audio-visual and audio-text data
- High-quality output - 44kHz audio generation
PuLID_ComfyUI - PuLID face ID for identity preservation. License: Apache-2.0
- Face ID transfer - Transfer identity from reference images
- Fidelity control - Adjust resemblance to reference
- Style options - Multiple projection methods available
- Flux compatible - Works with Flux models via PuLID-Flux
ComfyUI-WanVideoWrapper - WanVideo and related video models. License: Apache-2.0
- WanVideo support - Wrapper for WanVideo model family
- SkyReels support - Compatible with SkyReels models
- Video generation - Text-to-video and image-to-video
- Story mode - Generate coherent video sequences
All Python dependencies (segment-anything, sam2, scikit-image, opencv, color-matcher, gguf, diffusers, librosa, bitsandbytes, etc.) are pre-built and included in the Nix environment.
```
# Install to profile
nix profile install github:utensils/comfyui-nix
```

Or use the overlay in your own flake:

```nix
{
  inputs.comfyui-nix.url = "github:utensils/comfyui-nix";
  # Then: nixpkgs.overlays = [ comfyui-nix.overlays.default ];
  # Provides: pkgs.comfy-ui
}
```

On NixOS, run ComfyUI as a service via the bundled module:

```nix
{
  imports = [ comfyui-nix.nixosModules.default ];
  nixpkgs.overlays = [ comfyui-nix.overlays.default ];

  services.comfyui = {
    enable = true;
    port = 8188;
    listenAddress = "127.0.0.1";
    dataDir = "/var/lib/comfyui";
    openFirewall = false;
    # extraArgs = [ "--lowvram" ];
    # environment = { };
  };
}
```

Install custom nodes reproducibly using `customNodes`:
```nix
services.comfyui = {
  enable = true;
  customNodes = {
    # Fetch from GitHub (pinned version)
    ComfyUI-Impact-Pack = pkgs.fetchFromGitHub {
      owner = "ltdrdata";
      repo = "ComfyUI-Impact-Pack";
      rev = "v1.0.0";
      hash = "sha256-..."; # nix-prefetch-github ltdrdata ComfyUI-Impact-Pack --rev v1.0.0
    };
    # Local path (for development)
    my-node = /path/to/node;
  };
};
```

Nodes are symlinked at service start. This is the pure Nix approach: fully reproducible and version-pinned.
Pre-built images on GitHub Container Registry:

Docker:

```
# CPU (multi-arch: amd64 + arm64)
docker run -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest

# CUDA (x86_64 only, requires nvidia-container-toolkit)
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" ghcr.io/utensils/comfyui-nix:latest-cuda
```

Podman:

```
# CPU
podman run -p 8188:8188 -v "$PWD/data:/data:Z" ghcr.io/utensils/comfyui-nix:latest

# CUDA (requires nvidia-container-toolkit and CDI configured)
podman run --device nvidia.com/gpu=all -p 8188:8188 -v "$PWD/data:/data:Z" ghcr.io/utensils/comfyui-nix:latest-cuda
```

Passing additional arguments:
When passing custom arguments, include `--listen 0.0.0.0` to maintain network access:

```
# Docker with manager enabled
docker run --gpus all -p 8188:8188 -v "$PWD/data:/data" \
  ghcr.io/utensils/comfyui-nix:latest-cuda --listen 0.0.0.0 --enable-manager

# Podman with manager enabled
podman run --device nvidia.com/gpu=all -p 8188:8188 -v "$PWD/data:/data:Z" \
  ghcr.io/utensils/comfyui-nix:latest-cuda --listen 0.0.0.0 --enable-manager
```

Build locally:

```
nix run .#buildDocker      # CPU
nix run .#buildDockerCuda  # CUDA

# Load into Docker/Podman
docker load < result
podman load < result
```

Note: Docker/Podman on macOS runs CPU-only. For GPU acceleration on Apple Silicon, use `nix run` directly.
```
nix develop       # Dev shell with Python 3.12, ruff, pyright
nix flake check   # Run all checks (build, lint, type-check, nixfmt)
nix run .#update  # Check for ComfyUI updates
```

Data directory layout:

```
<data-directory>/
├── models/          # checkpoints, loras, vae, controlnet, etc.
├── output/          # Generated images
├── input/           # Input files
├── user/            # Workflows, settings, manager config
├── custom_nodes/    # Extensions (bundled nodes auto-linked)
├── .pip-packages/   # Runtime-installed Python packages
└── temp/
```
ComfyUI runs from the Nix store; only user data lives in your data directory.
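The first-run layout above can be sketched as a short script; the subdirectory names are from the tree, while the temp-dir scaffolding exists only to keep the example self-contained:

```python
import tempfile
from pathlib import Path

# Subdirectories from the data-directory layout shown above.
SUBDIRS = ["models", "output", "input", "user",
           "custom_nodes", ".pip-packages", "temp"]

# Create the layout under a throwaway directory for demonstration.
data_dir = Path(tempfile.mkdtemp()) / "comfy-ui"
for name in SUBDIRS:
    (data_dir / name).mkdir(parents=True, exist_ok=True)

created = sorted(p.name for p in data_dir.iterdir())
print(created)
```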
Pre-built binaries are available via Cachix to avoid lengthy compilation times (especially for PyTorch/CUDA).
Quick setup (recommended):

```
# Install cachix if you don't have it
nix-env -iA cachix -f https://cachix.org/api/v1/install

# Add the ComfyUI cache
cachix use comfyui

# For CUDA builds, also add the CUDA maintainers cache
cachix use cuda-maintainers
```

Manual NixOS configuration:
```nix
{
  nix.settings = {
    substituters = [
      "https://cache.nixos.org"
      "https://comfyui.cachix.org"
      "https://cuda-maintainers.cachix.org"
    ];
    trusted-public-keys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "comfyui.cachix.org-1:33mf9VzoIjzVbp0zwj+fT51HG0y31ZTK3nzYZAX0rec="
      "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
    ];
  };
}
```

Non-NixOS systems (`~/.config/nix/nix.conf`):

```
substituters = https://cache.nixos.org https://comfyui.cachix.org https://cuda-maintainers.cachix.org
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= comfyui.cachix.org-1:33mf9VzoIjzVbp0zwj+fT51HG0y31ZTK3nzYZAX0rec= cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E=
```
The flake automatically configures these caches, but your Nix daemon must trust them. If you see packages building from source instead of downloading, check that your keys match exactly.
MIT (this flake). ComfyUI is GPL-3.0.