CVPR 2025 | FLAME

FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training
Anjia Cao, Xing Wei, Zhiheng Ma

📰 News

💡 Highlights

  • 🔥 Leveraging frozen LLMs to naturally process long text inputs (see the sketch after this list).
  • 🔥 Generalizing from monolingual training to multilingual evaluation.
  • 🔥 Strong improvements on long- and short-context image-text retrieval, image classification, and multilingual benchmarks.
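
As a rough illustration of the first highlight, the sketch below pairs a frozen decoder-only LLM (as the text encoder) with a trainable ViT image encoder and a CLIP-style contrastive loss. This is not the official implementation: the model names, the PromptEOL-style last-token pooling, and the 512-dimensional projection heads are illustrative assumptions; see the released code for the actual architecture.

```python
# Minimal sketch, NOT the official FLAME implementation.
# Assumptions: Mistral-7B stands in for the LLM, PromptEOL-style last-token
# pooling extracts the caption embedding, and both projection heads map into
# a 512-d shared space. Only the vision tower and the heads receive gradients.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
import timm

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen decoder-only LLM used as the text encoder.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tok.pad_token = tok.eos_token  # enable batch padding (right padding assumed)
llm = AutoModel.from_pretrained("mistralai/Mistral-7B-v0.1").to(device).eval()
for p in llm.parameters():
    p.requires_grad_(False)

# Trainable ViT-B/16 image encoder plus projection heads into the shared space.
vit = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0).to(device)
img_proj = torch.nn.Linear(vit.num_features, 512).to(device)
txt_proj = torch.nn.Linear(llm.config.hidden_size, 512).to(device)

def encode_text(captions):
    # Prompt the frozen LLM and pool the hidden state of the last real token
    # (PromptEOL-style; the exact prompt used by FLAME may differ).
    prompts = [f'This sentence: "{c}" means in one word:' for c in captions]
    batch = tok(prompts, padding=True, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = llm(**batch).last_hidden_state              # (B, T, D)
    last = batch["attention_mask"].sum(dim=1) - 1             # index of last non-pad token
    pooled = hidden[torch.arange(hidden.size(0)), last]       # (B, D)
    return F.normalize(txt_proj(pooled), dim=-1)

def encode_image(images):
    return F.normalize(img_proj(vit(images)), dim=-1)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE over the in-batch image-text similarity matrix.
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```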

📅 TODO Roadmap

  • Release training code and data.
  • Release evaluation code.
  • Release pre-trained checkpoints.

🛠️ Get Started

Setup

```bash
git clone https://github.com/MIV-XJTU/FLAME.git
cd FLAME
conda create -n flame python=3.10 -y
conda activate flame
make install
make install-training
make install-test
```

Training

See Training.md.

Evaluation

See Evaluation.md.

📁 Datasets

| Dataset | Link |
| --- | --- |
| CC3M-ReCap | Hugging Face |
| YFCC15M-ReCap | Hugging Face |
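
If the re-captioned datasets follow the usual Hugging Face layout, they can be previewed with the `datasets` library. The repository id and column names below are assumptions for illustration; consult the dataset cards linked above for the real ones.

```python
# Illustrative only: the repository id and column names are assumptions;
# check the Hugging Face dataset card linked above for the real ones.
from datasets import load_dataset

ds = load_dataset("MIV-XJTU/CC3M-ReCap", split="train", streaming=True)  # hypothetical repo id
sample = next(iter(ds))
print(sample.keys())  # expect an image reference plus the re-generated long caption
```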

🔐 Pre-trained Checkpoints

| Dataset | Model | Link |
| --- | --- | --- |
| CC3M | Mistral-Nemo-ViT-B/16 | Hugging Face |
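
Since the project builds on open_clip, a released checkpoint can presumably be loaded through that API once FLAME's model config is registered. The config name and checkpoint path below are placeholders, not the real ones; Evaluation.md documents the exact invocation.

```python
# A rough sketch using the open_clip API this repository extends.
# "ViT-B-16" and the checkpoint path are placeholders, not FLAME's real config name.
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16",
    pretrained="path/to/flame_cc3m_mistral_nemo_vit_b16.pt",  # hypothetical local path
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
```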

🛂 License

The project is released under the standard Creative Commons Attribution 4.0 International (CC BY 4.0) License.

📖 Citation

If you find our work helpful for your research, please consider giving the repository a star and citing our paper.

```bibtex
@inproceedings{cao2025flame,
  title={FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training},
  author={Cao, Anjia and Wei, Xing and Ma, Zhiheng},
  booktitle={CVPR},
  year={2025}
}
```

🫡 Acknowledgements

This project is built on top of open_clip; thanks for the great work! We also thank CLIP_benchmark, DreamLIP, Long-CLIP, PromptEOL, and MiniCPM-V for their code.
