OCCO

Implementation of our work:

Hui Li*, Congcong Bian, Zeyang Zhang, Xiaoning Song, Xi Li, and Xiao-jun Wu📭, "OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on Object-aware and Contextual COntrastive Learning", International Journal of Computer Vision (IJCV), 2025.

[Paper] [Arxiv]

Installation

Clone repo:

git clone https://github.com/bociic/OCCO.git
cd OCCO

The code is tested with Python 3.9, PyTorch 2.1.0, and CUDA 12.1 on an NVIDIA GeForce RTX 4090D; you may use different versions depending on your GPU.

conda create -n occo python=3.9
conda activate occo
pip install -r requirements.txt
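
Optionally, you can verify that the installed PyTorch build can see your GPU (a quick sanity check, not part of the repository):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"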

Quick Test

You can download the pre-trained weights from occo and place them in ./logs/.

python main.py \
    --test --use_gpu \
    --test_vis ./path/to/VIS \
    --test_ir ./path/to/IR

To test on your own data, give each infrared-visible image pair the same file name; otherwise you will need to edit the code.
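
For reference, here is a minimal sketch (not part of the repository; the folder paths are placeholders matching the command above) that checks the two test folders contain matched file names before running inference:

import os

vis_dir = "./path/to/VIS"  # placeholder paths
ir_dir = "./path/to/IR"

# Compare file names (without extensions) across the two folders.
vis_names = {os.path.splitext(f)[0] for f in os.listdir(vis_dir)}
ir_names = {os.path.splitext(f)[0] for f in os.listdir(ir_dir)}

missing_ir = sorted(vis_names - ir_names)
missing_vis = sorted(ir_names - vis_names)
if missing_ir or missing_vis:
    print("Visible images without an infrared partner:", missing_ir)
    print("Infrared images without a visible partner:", missing_vis)
else:
    print(f"All {len(vis_names)} pairs are matched.")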

Training-Datasets

Organize the dataset folder in the following structure:

<datasets>
|-- <DatasetName1>
    |-- <vi>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <ir>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <mask_vi>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <mask_ir>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...

You can download the masks generated by Grounding-DINO and SAM from here: FMB and MSRS. Alternatively, you can generate your own masks; just make sure the final masks are strictly binary.
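
If you generate masks yourself, below is a minimal sketch of one way to binarize them (the 127 threshold, function name, and file paths are assumptions for illustration, not values from the repository):

import numpy as np
from PIL import Image

def binarize_mask(in_path, out_path, threshold=127):
    """Load a grayscale mask and save it as a strictly binary (0/255) image."""
    mask = np.array(Image.open(in_path).convert("L"))
    binary = np.where(mask > threshold, 255, 0).astype(np.uint8)
    Image.fromarray(binary).save(out_path)

# Hypothetical usage:
# binarize_mask("datasets/MSRS/mask_ir/name1.png", "datasets/MSRS/mask_ir/name1.png")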

Training

python main.py --train --epoch 30 --bs 30 --logdir <checkpoint_path> --dataset /path/to/ir/ --use_gpu

Please note that --dataset points to the infrared image folder, not the root of the dataset folder.
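
For example, with the folder layout above (the dataset path and log directory name are placeholders), a training run could look like:

python main.py --train --epoch 30 --bs 30 \
    --logdir ./logs/occo_msrs \
    --dataset ./datasets/MSRS/ir/ \
    --use_gpu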

Contact

If you have any questions about the code, please email us or open an issue:

Hui Li (lihui.cv@jiangnan.edu.cn) or Congcong Bian (bociic_jnu_cv@163.com).

Citation

If you find this paper/code helpful, please consider citing us:

@misc{occo,
      title={OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on Object-aware and Contextual COntrastive Learning}, 
      author={Hui Li and Congcong Bian and Zeyang Zhang and Xiaoning Song and Xi Li and Xiao-Jun Wu},
      year={2025},
      eprint={2503.18635},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.18635}, 
}
