1.58-bit FLUX

C Yang, C Liu, X Deng, D Kim, X Mei, X Shen… - arXiv preprint arXiv …, 2024 - arxiv.org
… Efficiency measurements on the vision transformer component of FLUX and 1.58-bit
FLUX. The measurements are based on generating a single image with 50 inference steps. (a) …
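
A note on the recurring "1.58-bit" label: it denotes ternary weights in {-1, 0, +1}, each carrying log2(3) ≈ 1.585 bits of information. The sketch below shows a generic absmean-style ternary quantizer in that spirit; the helper name and per-tensor scaling are illustrative assumptions, not the quantization scheme of 1.58-bit FLUX itself.

    import numpy as np

    def ternary_quantize(w, eps=1e-8):
        """Quantize a weight tensor to {-1, 0, +1} with one absmean scale.
        Illustrative sketch only; each code carries log2(3) ~= 1.585 bits."""
        scale = np.mean(np.abs(w)) + eps           # per-tensor absmean scale (assumption)
        q = np.clip(np.round(w / scale), -1, 1)    # ternary codes
        return q.astype(np.int8), scale            # reconstruct with q * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = ternary_quantize(w)
    print(q, f"~{np.log2(3):.3f} bits/weight")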

ParetoQ: Improving scaling laws in extremely low-bit LLM quantization

Z Liu, C Zhao, H Huang, S Chen… - Advances in …, 2026 - proceedings.neurips.cc
… Our empirical analysis indicates that quantization at 1.58-bit, 2-bit, and 3-bit offers a
superior trade-off between accuracy and effective quantized model size compared to 4-bit, …
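
For intuition on the "effective quantized model size" axis in the snippet, a rough accounting is parameters × bits-per-weight, ignoring scale and zero-point overhead. The sketch below compares how many parameters fit in a fixed (hypothetical) weight budget at the studied bit widths; ParetoQ's exact overhead accounting may differ.

    def effective_size_gib(n_params, bits_per_weight):
        """Approximate weight footprint in GiB, ignoring quantization metadata."""
        return n_params * bits_per_weight / 8 / 2**30

    budget_gib = 2.0                                   # hypothetical memory budget
    for bits in (1.58, 2, 3, 4):
        n = budget_gib * 2**30 * 8 / bits
        print(f"{bits:>4}-bit: ~{n / 1e9:.1f}B params, "
              f"{effective_size_gib(n, bits):.2f} GiB")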

Plug-and-play 1.x-bit KV cache quantization for video large language models

K Tao, H You, Y Sui, C Qin, H Wang - arXiv preprint arXiv:2503.16257, 2025 - arxiv.org
… • For value cache, we propose a 1.58-bit quantization scheme while selecting a few … bit
and 1.58-bit precision, with almost no accuracy drop compared to the FP16 counterparts. …
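
To make the value-cache idea concrete, below is a generic sketch of per-token ternary (1.58-bit) quantization with a handful of tokens kept in full precision. The outlier-selection rule and mixed-precision split here are placeholders, not the selection criterion this paper proposes.

    import numpy as np

    def quantize_value_cache(v, keep=4):
        """Ternary value-cache sketch: per-token absmean scales, plus a few
        large-norm tokens left unquantized (selection rule is a placeholder)."""
        scales = np.mean(np.abs(v), axis=-1, keepdims=True) + 1e-8
        q = np.clip(np.round(v / scales), -1, 1).astype(np.int8)
        keep_idx = np.argsort(np.linalg.norm(v, axis=-1))[-keep:]
        return q, scales, keep_idx, v[keep_idx].copy()

    v = np.random.randn(128, 64).astype(np.float32)     # [tokens, head_dim]
    q, scales, keep_idx, kept = quantize_value_cache(v)
    v_hat = q * scales; v_hat[keep_idx] = kept           # dequantized cache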

Memory-Efficient Generative Models via Product Quantization

J Shao, H Zhang, H Yu, J Wu - Proceedings of the IEEE …, 2025 - openaccess.thecvf.com
… Additionally, we exclude 1.58-bit FLUX [65] due to the lack of publicly available implementation
details. A key limitation of existing compression methods is that they fall short in reducing …
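
As background for the product-quantization approach: weights are split into sub-vectors, each replaced by the index of its nearest codebook centroid, so storage drops to one small code per sub-vector plus shared codebooks. The sketch below is a textbook PQ encode/decode pass, not the paper's specific construction or codebook sizes.

    import numpy as np
    from sklearn.cluster import KMeans

    def pq_encode(W, n_subvectors=4, n_codes=256):
        """Product-quantization sketch: per sub-vector k-means codebooks,
        one uint8 code per sub-vector. Assumes W.shape[0] >= n_codes."""
        d_sub = W.shape[1] // n_subvectors
        codebooks, codes = [], []
        for s in range(n_subvectors):
            block = W[:, s * d_sub:(s + 1) * d_sub]
            km = KMeans(n_clusters=n_codes, n_init=1, random_state=0).fit(block)
            codebooks.append(km.cluster_centers_)
            codes.append(km.labels_.astype(np.uint8))
        return codebooks, np.stack(codes, axis=1)       # codes: [rows, n_subvectors]

    def pq_decode(codebooks, codes):
        return np.concatenate([cb[c] for cb, c in zip(codebooks, codes.T)], axis=1)

    W = np.random.randn(1024, 64).astype(np.float32)
    codebooks, codes = pq_encode(W)
    W_hat = pq_decode(codebooks, codes)                 # lossy reconstruction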

Quantized DiT with Hadamard transformation: A technical report

Y Liu, W Yang, J Jiao - Pattern Recognition Letters, 2025 - Elsevier
Diffusion Transformers (DiTs) combine the scalability of transformers with the fidelity of
diffusion models, achieving state-of-the-art image generation performance. However, their high …
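
For context on the Hadamard part: an orthonormal Hadamard rotation spreads outliers across channels before uniform quantization, and the rotation can be undone (or folded into adjacent layers) after dequantization. The snippet below is a minimal rotate-quantize-rotate-back sketch, not the report's actual pipeline or bit allocation.

    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_quantize(x, bits=4):
        """Rotate activations with an orthonormal Hadamard matrix to spread
        outliers, then apply symmetric uniform quantization. Illustrative only."""
        n = x.shape[-1]                        # must be a power of two for hadamard()
        H = hadamard(n) / np.sqrt(n)           # orthonormal: H @ H.T == I
        xr = x @ H
        qmax = 2 ** (bits - 1) - 1
        scale = np.abs(xr).max() / qmax
        q = np.clip(np.round(xr / scale), -qmax, qmax)
        return q * scale @ H.T                 # dequantize and undo rotation

    x = np.random.randn(16, 64)
    err = np.linalg.norm(hadamard_quantize(x) - x) / np.linalg.norm(x)
    print(f"relative error: {err:.4f}")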

Architectural and Performance Analysis of Text-to-Image and Text-to-Video Generative Models

J Masiwal, P Sinha, S Choudhary, S Rawat - International Conference on …, 2025 - Springer
Flux is an advanced text-to-image generation model by Black … Considering the example
of 1.58-bit FLUX, a pivotal … The Flux model outperformed Big Sleep by achieving a lower FID score…
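
FID, the metric on which the comparison with Big Sleep rests, is the Fréchet distance between Gaussians fitted to Inception features of real and generated images; lower is better. A standard implementation sketch (feature extraction not shown):

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(mu_r, sigma_r, mu_g, sigma_g):
        """Frechet Inception Distance between real and generated feature
        statistics (means and covariances); lower means closer distributions."""
        covmean = sqrtm(sigma_r @ sigma_g)
        if np.iscomplexobj(covmean):           # drop tiny imaginary residue from sqrtm
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2 * covmean))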

Dense2MoE: Restructuring diffusion transformer to MoE for efficient text-to-image generation

Y Zheng, Y Ren, X Xia, X Xiao… - Proceedings of the IEEE …, 2025 - openaccess.thecvf.com
… Compared to FLUX.1-lite, our 5.2B activated-parameter FLUX.1-MoE-L model achieves
better … it is the 3.19B activated FLUX.1-MoE-S or the 2.64B activated FLUX.1-MoE-XS, signifi- …
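
The "activated parameters" counted in the snippet are the parameters actually used per token under top-k expert routing, as opposed to the total parameters across all experts. Below is a generic top-k MoE feed-forward sketch; Dense2MoE's specific restructuring and initialization from the dense FLUX FFN are not reproduced here.

    import numpy as np

    def moe_ffn(x, W_router, experts, k=2):
        """Top-k MoE feed-forward sketch: each token is routed to k of the
        experts that replace a single dense FFN. Illustrative only."""
        logits = x @ W_router                              # [tokens, n_experts]
        topk = np.argsort(logits, axis=-1)[:, -k:]         # chosen experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            sel = logits[t, topk[t]]
            gates = np.exp(sel - sel.max()); gates /= gates.sum()
            for g, e in zip(gates, topk[t]):
                W1, W2 = experts[e]
                out[t] += g * (np.maximum(x[t] @ W1, 0) @ W2)
        return out

    d, h, n_exp = 8, 16, 4
    rng = np.random.default_rng(0)
    experts = [(rng.normal(size=(d, h)), rng.normal(size=(h, d))) for _ in range(n_exp)]
    y = moe_ffn(rng.normal(size=(5, d)), rng.normal(size=(d, n_exp)), experts)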

Evaluating the FLUX.1 synthetic data on YOLOv9 for AI-powered poultry farming

S Cakic, T Popovic, S Krco, I Jovovic, D Babic - Applied Sciences, 2025 - mdpi.com
… A hybrid dataset was created by combining real images of chickens with 400 FLUX.1 [dev]
generated synthetic images, aiming to reduce reliance on extensive manual data collection. …