shoryasethia/RobustSIRR

Competition: Qualcomm VisionX

  • Team Name: ClarifyAI (Team ID: Qual-230517)
  • Rank: 4

Here's our Work

Team

  • Anupam Rawat
  • Rahul Choudhary
  • Shorya Sethia
  • Suraj Prasad

Architecture

(architecture diagram)

Report

Check out a summary of our work in this report.


🏠 Dependencies and installation

  • Python >= 3.8.5
  • PyTorch >= 1.11
  • CUDA >= 11.3
  • Other required packages in requirements.txt
# git clone this repository
git clone https://github.com/shoryasethia/RobustSIRR.git
cd RobustSIRR

# create new anaconda env
conda create -n sirr python=3.8 -y
conda activate sirr

# install python dependencies by pip
pip install -r requirements.txt
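
After installing, a quick sanity check confirms the environment meets the requirements above. This is a minimal sketch that assumes only the torch package from requirements.txt:

# sanity-check the environment (run inside the sirr env)
import torch
print(torch.__version__)          # expect >= 1.11
print(torch.version.cuda)         # expect >= 11.3
print(torch.cuda.is_available())  # True if a CUDA GPU is visible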

πŸ‘ How to run

1️⃣ Download Pre-trained Models

🌟 Download the pre-trained RobustSIRR models from [Pre-trained_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] into the checkpoints folder.
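
The checkpoint format is not documented here, so a quick way to verify a download before running is to inspect it with plain PyTorch. The file name below is hypothetical; use whichever .pth file you placed in checkpoints:

# inspect a downloaded checkpoint (file name is hypothetical)
import torch
state = torch.load("checkpoints/robustsirr.pth", map_location="cpu")
print(type(state))
print(list(state.keys())[:10])  # works if it is a dict / state_dict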

2️⃣ Prepare Dataset

Training Dataset

  • 7,643 cropped images with size 224 × 224 from the Pascal VOC dataset (image ids are listed in VOC2012_224_train_png.txt; crop the center 224 × 224 region to reproduce our results; a center-crop sketch is given below)
  • 90 (89) real-world training images from the Berkeley real dataset

❗ Place the processed VOC2012 and real datasets in the datasets folder, and name them VOC2012 and real89 respectively.

🌟 For convenience, you can directly download the prepared training dataset from [VOC2012_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] and [real89_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].
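
If you prepare VOC2012 yourself instead of downloading, the center crop mentioned above can be done with PIL. This is a minimal sketch, not the repository's preprocessing script; the source and destination folders are assumptions based on the dataset layout shown later, and the id-list format may need adjusting:

# center-crop VOC images to 224 x 224 (folder names are assumptions)
from pathlib import Path
from PIL import Image

SRC = Path("datasets/VOC2012/JPEGImages")
DST = Path("datasets/VOC2012/transmission_layer")
DST.mkdir(parents=True, exist_ok=True)

for image_id in Path("VOC2012_224_train_png.txt").read_text().split():
    img = Image.open(SRC / f"{image_id}.jpg")
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img.crop((left, top, left + 224, top + 224)).save(DST / f"{image_id}.png")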

Testing Dataset

❗ Place the processed datasets in the datasets folder, and name them as real20, SIR2, and nature20 respectively.

🌟 For convenience, you can directly download the prepared testing dataset from [TestingDataset_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].

Dataset Architectures

The hierarchical structure of all datasets is illustrated in the following diagram.

datasets
├── nature20
│   ├── blended
│   └── transmission_layer
├── real20
│   ├── blended
│   ├── real_test.txt
│   └── transmission_layer
├── real89
│   ├── blended
│   └── transmission_layer
├── SIR2
│   ├── PostcardDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   ├── SolidObjectDataset
│   │   ├── blended
│   │   ├── reflection
│   │   └── transmission_layer
│   └── WildSceneDataset
│       ├── blended
│       ├── reflection
│       └── transmission_layer
└── VOC2012
    ├── blended
    ├── JPEGImages
    ├── reflection_layer
    ├── reflection_mask_layer
    ├── transmission_layer
    └── VOC_results_list.json

Note:

  • transmission_layer is the ground truth (GT), blended is the input, and reflection/reflection_layer is the reflection component (a minimal loading sketch follows these notes)
  • For the SIR^2 dataset, we only standardize the folder structure
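
Given this layout, loading (blended, transmission_layer) pairs is straightforward. Below is an illustrative PyTorch Dataset sketch, not the repository's own data pipeline:

# illustrative paired loader for the folder layout above
from pathlib import Path
from PIL import Image
import torchvision.transforms.functional as TF
from torch.utils.data import Dataset

class PairedReflectionDataset(Dataset):
    def __init__(self, root):
        root = Path(root)
        self.blended = sorted((root / "blended").iterdir())
        self.gt = sorted((root / "transmission_layer").iterdir())
        assert len(self.blended) == len(self.gt)

    def __len__(self):
        return len(self.blended)

    def __getitem__(self, i):
        x = TF.to_tensor(Image.open(self.blended[i]).convert("RGB"))
        y = TF.to_tensor(Image.open(self.gt[i]).convert("RGB"))
        return x, y  # input with reflections, ground-truth transmission

# usage: PairedReflectionDataset("datasets/real89")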

3️⃣ Train

  • For adversarial training:
# To Be Released
  • For clean images training:
# ours_cvpr
CUDA_VISIBLE_DEVICES=0 python train.py --name ours --gpu_id 0 --no-verbose --display_id -1 --batchSize 4

# ours_wo_aid 
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aid --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aid

# ours_wo_aff
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aff --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aff

# ours_wo_scm
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_scm --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_scm

Note:

  • Check options/robustsirr/train_options.py to see more training options.

4️⃣ Test

CUDA_VISIBLE_DEVICES=0 python test.py --name ours_cvpr --hyper --gpu_ids 0 -r --no-verbose --save_gt --save_attack --save_results

5️⃣ Evaluate the Robustness of Our Model on All Datasets

# To be released due to confidentiality concerns.
# Alternatively, refer to https://github.com/yuyi-sd/Robust_Rain_Removal
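
Until the script is released, the flavor of the evaluation can be illustrated with a generic PGD attack using an MSE objective over the full image region, matching the "MSE FR" attacks reported in the results below. This is a hedged sketch: model stands for any image-to-image restoration network, and eps, alpha, and steps are typical values, not the ones used in our experiments.

# generic PGD attack with an MSE objective over the full region (sketch only)
import torch
import torch.nn.functional as F

def pgd_mse_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # random start inside the eps-ball, clipped to the valid image range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(model(x_adv), y)  # maximize restoration error
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()             # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()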

☺️ Results

☝️ Comparison of PSNR values with respect to perturbation level $\epsilon$ for different attacks on various datasets. ‘MSE FR Nature’ denotes attacking the Full Region with the MSE objective on the Nature dataset, and likewise for the others.

☝️ Comparison of different training strategies on three benchmark datasets. ‘w/ adv.’ and ‘w/o adv.’ mean training with or without adversarial images. MSE and LPIPS denote the corresponding attacks over full regions. ↓ and ↑ indicate performance degradation and improvement relative to the original prediction on clean input images.
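
For reference, PSNR in these comparisons follows the standard definition. A minimal sketch for images scaled to [0, 1], not the repository's evaluation code:

# standard PSNR between prediction and ground truth, images in [0, 1]
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    mse = torch.mean((pred - gt) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)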
