Run the latest state-of-the-art generative image models locally on your Mac in native MLX!
- 💡 Philosophy
- 💿 Installation
- 🎨 Models
- ✨ Features
- 📱 Related projects
- 🙏 Acknowledgements
- ⚖️ License
MFLUX is a line-by-line MLX port of several state-of-the-art generative image models from the Hugging Face Diffusers and Transformers libraries. All models are implemented from scratch in MLX, using only the tokenizers from Hugging Face Transformers. MFLUX is purposefully kept minimal and explicit, @karpathy style.
If you haven't already, install uv, then run:
```sh
uv tool install --upgrade mflux
```

After installation, the following command shows all available MFLUX CLI commands:

```sh
uv tool list
```

To generate your first image using, for example, the z-image-turbo model, run:
```sh
mflux-generate-z-image-turbo \
  --prompt "A puffin standing on a cliff" \
  --width 1280 \
  --height 500 \
  --seed 42 \
  --steps 9 \
  -q 8
```
The first time you run this, the model weights are downloaded automatically, which can take some time. See the Models section for the different options and features, and the common README for shared CLI patterns and examples.
Python API
Create a standalone generate.py script with inline uv dependencies:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "mflux",
# ]
# ///
from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)
image = model.generate_image(
    prompt="A puffin standing on a cliff",
    seed=42,
    num_inference_steps=9,
    width=1280,
    height=500,
)
image.save("puffin.png")
```

Run it with:
```sh
uv run generate.py
```

For more Python API inspiration, look at the CLI entry points for the respective models.
⚠️ Troubleshooting: hf_transfer error
If you encounter `ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available`, you can install MFLUX with the `hf_transfer` package included:
```sh
uv tool install --upgrade mflux --with hf_transfer
```

This will enable faster model downloads from Hugging Face.
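Alternatively, if you prefer not to install the extra package, you can turn the fast-download path off instead. `HF_HUB_ENABLE_HF_TRANSFER` is a standard Hugging Face Hub environment variable (not an MFLUX-specific setting), and setting it to `0` makes the Hub fall back to its regular Python downloader:

```sh
# Disable the hf_transfer fast-download path; huggingface_hub then uses
# its regular downloader for subsequent mflux commands in this shell.
export HF_HUB_ENABLE_HF_TRANSFER=0
```

Then re-run your `mflux-generate-...` command in the same shell. Downloads will be slower but will no longer require the `hf_transfer` package.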
DGX / NVIDIA (uv tool install)
```sh
uv tool install --python 3.13 mflux
```

MFLUX supports the following model families. They have different strengths and weaknesses; see each model's README for full usage details.
| Model | Release date | Size | Type | Description |
|---|---|---|---|---|
| Z-Image | Nov 2025 | 6B | Distilled | Best all-rounder: fast, small, very good quality and realism. |
| FLUX.2 | Jan 2026 | 4B & 9B | Distilled | Fastest + smallest, with very good quality and edit capabilities. |
| FIBO | Oct 2025 | 8B | Base | Very good JSON-based prompt understanding and editability; medium speed. |
| SeedVR2 | Jun 2025 | 3B | β | Best upscaling model. |
| Qwen Image | Aug 2025+ | 20B | Base | Large model (slower); strong prompt understanding and world knowledge. Has edit capabilities. |
| Depth Pro | Oct 2024 | β | β | Very fast and accurate depth estimation model from Apple. |
| FLUX.1 | Aug 2024 | 12B | Distilled & Base | Legacy option with decent quality. Has edit capabilities via the 'Kontext' model and upscaling support via ControlNet. |
General
- Quantization and local model loading
- LoRA support (multi-LoRA, scales, library lookup)
- Metadata export + reuse, plus prompt file support
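As an illustration of combining the quantization and LoRA options listed above, a single invocation might look like the sketch below. This is a hypothetical example: the LoRA file path is made up, and the `--lora-paths` / `--lora-scales` flag names are recalled from memory rather than taken from this README, so verify the exact spelling with `mflux-generate --help` on your installed version.

```sh
# Hypothetical sketch: 8-bit quantization plus one LoRA at 70% strength.
# The LoRA flag names and file path are assumptions; check --help.
mflux-generate \
  --model schnell \
  --prompt "A puffin standing on a cliff, watercolor style" \
  --seed 42 \
  --steps 2 \
  -q 8 \
  --lora-paths ~/loras/watercolor.safetensors \
  --lora-scales 0.7
```

Multiple LoRAs can typically be stacked by passing several paths and a matching number of scales.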
Model-specific highlights
- Text-to-image and image-to-image generation
- In-context editing, multi-image editing, and virtual try-on
- ControlNet (Canny), depth conditioning, fill/inpainting, and Redux
- Upscaling (SeedVR2 and Flux ControlNet)
- LoRA finetuning using the Dreambooth technique
- Depth map extraction and FIBO prompt tooling (VLM inspire/refine)
See the common README for detailed usage and examples, and use the model section above to browse specific models and capabilities.
Note
As MFLUX supports a wide variety of CLI tools and options, the easiest way to navigate the CLI in 2026 is to use a coding agent (like Cursor, Claude Code, or similar). Ask questions like: "Can you help me generate an image using z-image?"
- MindCraft Studio by @shaoju
- Mflux-ComfyUI by @raysers
- MFLUX-WEBUI by @CharafChnioune
- mflux-fasthtml by @anthonywu
- mflux-streamlit by @elitexp
MFLUX would not be possible without the great work of:
- The MLX Team for MLX and MLX examples
- Black Forest Labs for the FLUX project
- Tongyi Lab for the Z-Image project
- Qwen Team for the Qwen Image project
- ByteDance, @numz and @adrientoupet for the SeedVR2 project
- Hugging Face for the Diffusers library implementations
- Depth Pro authors for the Depth Pro model
- The MLX community and all contributors and testers
This project is licensed under the MIT License.