jamjimmy/cv_benchmark_hub
Computer Vision Benchmark Hub

Overview

The logic is fairly simple; please refer to the tutorial. Feel free to contact me if you have any questions.

Chinese Tutorial | Guidance

Welcome to the Computer Vision Benchmark Hub — a flexible and user-friendly tool designed to evaluate and benchmark image-based models. With support for key metrics such as SSIM, PSNR, CLIP-Score, and more, our platform enables seamless one-click evaluations, helping you compare and optimize model performance effortlessly.
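As a concrete reference for one of the supported metrics, here is a minimal NumPy sketch of PSNR. This illustrates the formula only; it is not the toolkit's internal implementation:

```python
import numpy as np

def psnr(target: np.ndarray, pred: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((target - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, so PSNR ≈ 20 dB.
target = np.zeros((8, 8))
pred = np.full((8, 8), 0.1)
print(psnr(target, pred))
```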

🚀 Installation

To get started, simply install the required dependencies:

conda create -n cv_benchmark python=3.10
conda activate cv_benchmark
pip install -r requirements.txt

📁 Data Preparation

The toolkit supports two types of inputs: folders and JSON files.

Folder Structure

Ensure your input folder is organized as follows:

input_path
├── input_dir1
│   ├── dir1
│   │   ├── img1.jpg   # filenames must match across all dirs
│   │   ├── img2.jpg
│   │   └── ...
│   └── dir2
│       ├── img1.jpg
│       ├── img2.jpg
│       └── ...
├── input_dir2
│   ├── dir1
│   │   └── ...
│   └── dir2
│       └── ...
...
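Because every dir under an input_dir must contain identically named files, a quick stdlib check can catch mismatches before running the benchmark. This is a sketch; the function name and layout assumptions are mine, not part of the toolkit:

```python
from pathlib import Path

def check_matching_filenames(input_dir: str) -> bool:
    """Return True if every sub-directory of input_dir holds the same filenames."""
    subdirs = [d for d in sorted(Path(input_dir).iterdir()) if d.is_dir()]
    if not subdirs:
        return True
    name_sets = [{f.name for f in d.iterdir() if f.is_file()} for d in subdirs]
    return all(names == name_sets[0] for names in name_sets)
```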

JSON Format

Alternatively, you can use a JSON file structured like this:

[
    {
        "args1": "image path or text of target",
        "args2": "image path or text of pred"
    },
    {
        ...
    },
    {
        "args1": "image path or text of target",
        "args2": "image path or text of pred"
    }
]
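A small loader can verify that each entry carries the expected keys before evaluation. This is a sketch: `load_pairs` is a hypothetical helper, and the default key names follow the Keys section of the configuration:

```python
import json

def load_pairs(json_path: str, pred_key: str = "args1", target_key: str = "args2"):
    """Load the entry list and raise if any entry is missing a required key."""
    with open(json_path) as f:
        entries = json.load(f)
    for i, entry in enumerate(entries):
        missing = {pred_key, target_key} - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return entries
```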

⚙️ Configuration

The toolkit uses a YAML configuration file for customization.

Input Paths

path_key: filters files within the --input_path directory, keeping only those whose names contain the given keyword.

special_input_paths: a list of paths that require special processing.

Keys

Define the input keys corresponding to your JSON or folder structure:

keys:
  pred_key: args1
  target_key: args2
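Putting the options above together, a minimal config/example.yaml could look like this. The exact schema beyond path_key, special_input_paths, and keys is an assumption based on this README, and the paths are hypothetical:

```yaml
path_key: result                 # only evaluate files whose names contain "result"
special_input_paths:             # paths that require special processing
  - ./example_data/special_case  # hypothetical path
keys:
  pred_key: args1
  target_key: args2
```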

📊 Usage

Run the benchmark using the following command:

python benchmark.py --config config/example.yaml --input_path example_data --output_path ./result/

The results are saved in the ./result/test1 folder as a benchmark.xlsx file.
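The spreadsheet can then be inspected programmatically, e.g. with pandas. This is a sketch: `load_results` is a hypothetical helper, and the path mirrors the example above:

```python
import pandas as pd

def load_results(xlsx_path: str) -> pd.DataFrame:
    """Read the generated benchmark spreadsheet into a DataFrame."""
    return pd.read_excel(xlsx_path)

# e.g. df = load_results("./result/test1/benchmark.xlsx"); print(df.head())
```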

✅ TODO

Completed

  • Support diffusion metrics: SSIM, PSNR, CLIP-Score, LPIPS

  • Generate LaTeX result tables

  • L1, L2...

Upcoming Features

Feel free to tweak the config and data formats to suit your needs. Happy benchmarking! 🎯
