- Team Name: ClarifyAI (Team Id: Qual-230517)
- Rank 4
- Anupam Rawat
- Rahul Choudhary
- Shorya Sethia
- Suraj Prasad
Check out the summary of our work in this report
- Python >= 3.8.5
- PyTorch >= 1.11
- CUDA >= 11.3
- Other required packages listed in `requirements.txt`
```bash
# git clone this repository
git clone https://github.com/shoryasethia/RobustSIRR.git
cd RobustSIRR

# create new anaconda env
conda create -n sirr python=3.8 -y
conda activate sirr

# install python dependencies by pip
pip install -r requirements.txt
```
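After installation, a quick sanity check (a minimal sketch, not part of the original repo) confirms that the interpreter, PyTorch, and CUDA match the requirements above:

```python
# Minimal environment sanity check (illustrative, not part of the repo):
# verifies the Python/PyTorch/CUDA versions listed in the requirements.
import sys
import torch

print(f"Python : {sys.version.split()[0]}")          # expect >= 3.8.5
print(f"PyTorch: {torch.__version__}")               # expect >= 1.11
print(f"CUDA   : {torch.version.cuda}")              # expect >= 11.3
print(f"GPU available: {torch.cuda.is_available()}")
```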
Download the pre-trained RobustSIRR models from [Pre-trained_RobustSIRR_BaiduYunDisk (pwd: sirr), Google Drive] to the `checkpoints` folder.
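To confirm the download, the checkpoint files can be opened as ordinary PyTorch files (a minimal sketch; the exact file names and layout under `checkpoints/` depend on the released archive and are an assumption here):

```python
# Hypothetical integrity check for downloaded checkpoints (not the authors' code):
# walks the checkpoints/ folder and loads each .pth file on CPU.
from pathlib import Path
import torch

for ckpt in sorted(Path("checkpoints").rglob("*.pth")):
    state = torch.load(ckpt, map_location="cpu")
    n_entries = len(state) if isinstance(state, dict) else 1
    print(f"{ckpt}: loaded ({n_entries} entries)")
```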
- 7,643 cropped images of size 224 × 224 from the Pascal VOC dataset (image ids are provided in VOC2012_224_train_png.txt; crop the center region to 224 × 224 to reproduce our results, see the center-crop sketch below)
- 90 (89) real-world training images from the Berkeley real dataset
Place the processed VOC2012 and real datasets in the `datasets` folder, and name them `VOC2012` and `real89` respectively.
For convenience, you can directly download the prepared training dataset from [VOC2012_For_RobustSIRR_BaiduYunDisk (pwd: sirr), Google Drive] and [real89_For_RobustSIRR_BaiduYunDisk (pwd: sirr), Google Drive].
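The center crop mentioned above can be reproduced with a few lines of Pillow (a minimal sketch, assuming the raw images live under `datasets/VOC2012/JPEGImages`, each line of the id list is a bare image id, and the `.jpg`/`.png` extensions used here are assumptions to adjust to the actual files):

```python
# Hypothetical center-crop script for the VOC training images (not the authors' code).
from pathlib import Path
from PIL import Image

SRC = Path("datasets/VOC2012/JPEGImages")   # assumed location of the raw images
DST = Path("datasets/VOC2012/cropped_224")  # assumed output folder
DST.mkdir(parents=True, exist_ok=True)

with open("VOC2012_224_train_png.txt") as f:
    image_ids = [line.strip() for line in f if line.strip()]

for image_id in image_ids:
    img = Image.open(SRC / f"{image_id}.jpg")
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img.crop((left, top, left + 224, top + 224)).save(DST / f"{image_id}.png")
```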
- 20 real testing images from the Berkeley real dataset
- Three sub-datasets, namely "Objects", "Postcard", and "Wild", from the SIR^2 dataset
- 20 testing images from the Nature dataset
Place the processed datasets in the `datasets` folder, and name them `real20`, `SIR2`, and `nature20` respectively.
For convenience, you can directly download the prepared testing dataset from [TestingDataset_For_RobustSIRR_BaiduYunDisk (pwd: sirr), Google Drive].
The hierarchical structure of all datasets is illustrated in the following diagram.
```
datasets
├── nature20
│   ├── blended
│   └── transmission_layer
├── real20
│   ├── blended
│   ├── real_test.txt
│   └── transmission_layer
├── real89
│   ├── blended
│   └── transmission_layer
├── SIR2
│   ├── PostcardDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   ├── SolidObjectDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   └── WildSceneDataset
│       ├── blended
│       ├── reflection
│       └── transmission_layer
└── VOC2012
    ├── blended
    ├── JPEGImages
    ├── reflection_layer
    ├── reflection_mask_layer
    ├── transmission_layer
    └── VOC_results_list.json
```
Note:
- `transmission_layer` is the GT, `blended` is the input, and `reflection`/`reflection_layer` is the reflection part
- For the SIR^2 dataset, we only standardize the folder structure
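Given this layout, a paired loader only needs to match file names across `blended` and `transmission_layer` (a minimal sketch of such a loader, not the repo's actual dataset class; the root folder and the assumption of identical file names in both subfolders are mine):

```python
# Hypothetical paired dataset for blended (input) / transmission_layer (GT) images.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedReflectionDataset(Dataset):
    def __init__(self, root="datasets/real89"):
        self.blended_paths = sorted(Path(root, "blended").glob("*"))
        self.gt_dir = Path(root, "transmission_layer")
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.blended_paths)

    def __getitem__(self, idx):
        blended_path = self.blended_paths[idx]
        gt_path = self.gt_dir / blended_path.name  # same file name in both folders (assumed)
        blended = self.to_tensor(Image.open(blended_path).convert("RGB"))
        gt = self.to_tensor(Image.open(gt_path).convert("RGB"))
        return blended, gt
```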
- For adv. training:

```bash
# To Be Released
```

- For clean images training:
```bash
# ours_cvpr
CUDA_VISIBLE_DEVICES=0 python train.py --name ours --gpu_id 0 --no-verbose --display_id -1 --batchSize 4

# ours_wo_aid
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aid --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aid

# ours_wo_aff
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aff --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aff

# ours_wo_scm
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_scm --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_scm
```
Note:
- Check `options/robustsirr/train_options.py` to see more training options.
```bash
CUDA_VISIBLE_DEVICES=0 python test.py --name ours_cvpr --hyper --gpu_ids 0 -r --no-verbose --save_gt --save_attack --save_results
```

```bash
# To Be Released
# Withheld due to confidentiality concerns. Alternatively, you can refer to https://github.com/yuyi-sd/Robust_Rain_Removal
```
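Since the attack code itself is not released, the sketch below shows only a generic PGD-style attack on the MSE objective (my assumption of the general recipe, in the spirit of the referenced Robust_Rain_Removal repo; it is not the authors' implementation, and `model` is a placeholder for the reflection-removal network):

```python
# Generic PGD attack on the MSE objective (illustrative sketch, not the released code).
import torch
import torch.nn.functional as F

def pgd_mse_attack(model, blended, target, eps=8/255, alpha=2/255, steps=10):
    """Perturbs the blended input within an L-inf ball to maximize the MSE of the prediction."""
    adv = blended.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)   # random start
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(model(adv), target)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # ascend on the loss
        adv = blended + (adv - blended).clamp(-eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1).detach()
    return adv
```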
Comparison of the PSNR values with respect to perturbation levels.
Comparison of different training strategies on three benchmark datasets. "w/" and "w/o adv." mean training with or without adversarial images. MSE and LPIPS denote the corresponding attacks over full regions. ↓ and ↑ represent performance degradation and improvement compared to the original prediction on clean input images.
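For reference, the PSNR reported in these comparisons follows the standard definition on images scaled to [0, 1] (a generic sketch, not the evaluation script used to produce the tables):

```python
# Standard PSNR between a restored image and its ground truth, both in [0, 1].
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```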