[2025.10.07] Thanks to smthemex for developing ComfyUI_LucidFlux, which enables LucidFlux to run with as little as 8 GB–12 GB of memory through the ComfyUI integration.
[2025.10.06] LucidFlux now supports offload and precomputed prompt embeddings, eliminating the need to load T5 or CLIP during inference. These improvements significantly reduce memory usage: inference can now run with as little as 28 GB of VRAM, greatly improving deployment efficiency.
[2025.10.05] LucidFlux has been officially added to the Fal AI Playground! You can now try the online demo and access the Fal API directly here: LucidFlux on Fal AI. Let us know if this works!
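For programmatic access, here is a minimal sketch using the fal-client Python package (`pip install fal-client`, with a `FAL_KEY` environment variable set). The application id and argument name below are guesses for illustration only; consult the Fal model page for the actual endpoint:

```python
import fal_client  # reads the FAL_KEY environment variable

result = fal_client.subscribe(
    "fal-ai/lucidflux",  # hypothetical application id -- check the Fal page
    arguments={"image_url": "https://example.com/low-quality.jpg"},
)
print(result)  # typically a dict containing URL(s) of the restored image
```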
Song Fei¹*, Tian Ye¹*‡, Lujia Wang¹, Lei Zhu¹,²†
¹The Hong Kong University of Science and Technology (Guangzhou)
²The Hong Kong University of Science and Technology
*Equal Contribution, ‡Project Leader, †Corresponding Author
LucidFlux is a caption-free universal image restoration framework that leverages a lightweight dual-branch conditioner and adaptive modulation to guide a large diffusion transformer (Flux.1) with minimal overhead, achieving robust, high-fidelity restoration without relying on text prompts or MLLM captions.
Visual comparison galleries: LQ vs. SinSR, SeeSR, SUPIR, DreamClear, and Ours; LQ vs. HYPIR-FLUX, Topaz, Seedream 4.0, MeiTu SR, Gemini-NanoBanana, and Ours.
Our unified framework consists of four critical components in the training workflow:
- Dual-Branch Conditioner for Low-Quality Image Conditioning
- Timestep- and Layer-Adaptive Condition Injection (see the sketch after this list)
- Semantic Priors from SigLIP for Caption-Free Semantic Alignment
- Scaling Up Real-World High-Quality Data for Universal Image Restoration
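As a rough, unofficial illustration of the timestep- and layer-adaptive injection component: a FiLM-style scale/shift applied per transformer layer, conditioned on the conditioner features and the timestep embedding. All names and shapes below are assumptions for exposition, not the paper's implementation:

```python
import torch
import torch.nn as nn

class AdaptiveInjection(nn.Module):
    """Toy layer- and timestep-adaptive injection: each layer owns a small
    head that maps condition features (+ timestep embedding) to a per-token
    scale/shift, so injection strength varies with depth and timestep."""

    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(dim, 2 * dim) for _ in range(num_layers)
        )

    def forward(self, hidden: torch.Tensor, cond: torch.Tensor,
                t_emb: torch.Tensor, layer_idx: int) -> torch.Tensor:
        # hidden, cond: (B, N, D) token features; t_emb: (B, D), broadcast over tokens
        scale, shift = self.heads[layer_idx](cond + t_emb.unsqueeze(1)).chunk(2, dim=-1)
        return hidden * (1 + scale) + shift

# usage inside each DiT block: h = inj(h, cond_feats, t_emb, layer_idx=i)
```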
⚠️ The default setup requires roughly 28 GB of GPU VRAM.
# Clone the repository
git clone https://github.com/W2GenAI-Lab/LucidFlux.git
cd LucidFlux
# Create conda environment
conda create -n lucidflux python=3.11
conda activate lucidflux
# Install PyTorch (CUDA 12.8 wheels)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# Install remaining dependencies
pip install -r requirements.txt
pip install --upgrade timm
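After installing, you can run a quick sanity check (an illustrative snippet, not part of the repo) that PyTorch sees a CUDA GPU with enough memory for the default ~28 GB setup:

```python
# check_env.py -- illustrative helper, not shipped with LucidFlux
import torch

assert torch.cuda.is_available(), "A CUDA-capable GPU is required"
props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024**3
status = "OK" if vram_gb >= 28 else "below the ~28 GB default requirement"
print(f"{props.name}: {vram_gb:.1f} GB VRAM ({status})")
```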
Prepare models in 2 steps, then run a single command.
- Log in to Hugging Face (required for the gated FLUX.1-dev); skip if already logged in.
python -m tools.hf_login --token "$HF_TOKEN"
- Download required weights to fixed paths and export env vars
# FLUX.1-dev (flow+ae), SwinIR prior, T5, CLIP, SigLIP and LucidFlux checkpoint to ./weights
python -m tools.download_weights --dest weights
# Exports FLUX_DEV_FLOW/FLUX_DEV_AE to your shell (Linux/macOS)
source weights/env.sh
# Windows: open `weights\env.sh`, replace each leading `export` with `set`, then paste those commands into Command Prompt
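Before running inference, you can confirm the exports took effect. A minimal check (illustrative, using the variable names exported by weights/env.sh):

```python
# illustrative check, not part of the repo
import os

for var in ("FLUX_DEV_FLOW", "FLUX_DEV_AE"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```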
Run inference (uses fixed relative paths):
bash inference.sh
ℹ️ LucidFlux builds on Flux-based generative priors. Restored images can differ from the low-quality input because the model removes degradations and hallucinates realistic detail by design; visual discrepancies are expected and reflect the generative nature of the method.
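The 2025.10.06 update mentions precomputed prompt embeddings, so T5 and CLIP need not be loaded at inference. Here is a hedged sketch of that general idea (the model ids below match common FLUX text-encoder choices but are assumptions here, as is the output filename; the repo's own tooling may differ):

```python
# Encode a fixed (empty) prompt once, save the tensors, and reuse them at
# inference time instead of loading the text encoders on every run.
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer

prompt = ""  # LucidFlux is caption-free, so a fixed prompt suffices

t5_tok = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
t5 = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl")
t5_emb = t5(**t5_tok(prompt, return_tensors="pt")).last_hidden_state

clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
clip = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
clip_emb = clip(**clip_tok(prompt, return_tensors="pt")).pooler_output

torch.save({"t5": t5_emb, "clip": clip_emb}, "prompt_embeds.pt")
```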
Results of LucidFlux on RealSR and RealLQ250 are also available on Hugging Face: LucidFlux.
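To fetch those results programmatically, a sketch using huggingface_hub; the repo id is a guess based on the project name, so verify it on the Hugging Face page linked above:

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# "W2GenAI/LucidFlux" is a guessed repo id -- verify before use
local_dir = snapshot_download(repo_id="W2GenAI/LucidFlux")
print("Downloaded to", local_dir)
```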
To foster research and the open-source community, we plan to open-source the entire project, including training, inference, weights, and more. Thank you for your patience and support!
- Release github repo.
- Release inference code.
- Release model checkpoints.
- Release arXiv paper.
- Release training code.
- Release the data filtering pipeline.
If you find LucidFlux useful for your research, please cite our report:
@article{fei2025lucidflux,
title={LucidFlux: Caption-Free Universal Image Restoration via a Large-Scale Diffusion Transformer},
author={Fei, Song and Ye, Tian and Wang, Lujia and Zhu, Lei},
journal={arXiv preprint arXiv:2509.22414},
year={2025}
}
The provided code and pre-trained weights are licensed under the FLUX.1 [dev] Non-Commercial License.
- This code is based on FLUX. Some code is adapted from DreamClear and x-flux. We thank the authors for their awesome work.
- Thanks to our affiliated institutions for their support.
- Special thanks to the open-source community for inspiration.
For any questions or inquiries, please reach out to us:
- Song Fei: sfei285@connect.hkust-gz.edu.cn
- Tian Ye: tye610@connect.hkust-gz.edu.cn