Project Openset Recognition

Project Cover Image

Open-set recognition (OSR) is the task of handling unknown classes that are not part of the training dataset. Traditional classifiers assume that only known classes appear in the test environment. OSR requires a model not only to identify and discriminate instances of the known classes seen during training, but also to reject instances of unknown classes during testing. With OSR, unknown observations can therefore be detected at inference time.
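As a minimal illustration of the idea (not this project's implementation), a closed-set classifier can be turned into a crude open-set one by rejecting predictions whose maximum softmax confidence falls below a threshold; the model and threshold below are assumptions made only for this sketch.

import numpy as np
import tensorflow as tf

UNKNOWN = -1  # label assigned to rejected (unknown) inputs

def predict_open_set(model: tf.keras.Model, x: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Classify inputs with a softmax model, rejecting low-confidence predictions as unknown."""
    probs = model.predict(x, verbose=0)        # (N, num_known_classes) softmax scores
    labels = probs.argmax(axis=1)              # closed-set predictions
    confidences = probs.max(axis=1)            # maximum softmax probability per input
    labels[confidences < threshold] = UNKNOWN  # reject uncertain inputs as unknown
    return labels

The methods implemented in this repository, based on the papers listed under Research Papers, go beyond such a naive confidence threshold.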

This project is part of the Pattern Recognition Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg.

Table of Contents

  • Research Papers
  • Installation
  • Project Structure
  • Documentation
  • Usage
  • Consolidated Results & Plots
  • Contributing
  • License
  • Acknowledgments

Research Papers

The following research papers were used to implement the methods for the open-set recognition task.

  • Class Anchor Clustering: A Loss for Distance-based Open Set Recognition Link
  • Generative-discriminative Feature Representations for Open-set Recognition Link
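For intuition on the distance-based approach, the sketch below assigns each feature vector to its nearest fixed class anchor and rejects samples that are far from every anchor. It is a simplified illustration under assumed anchors (scaled one-hot vectors, so the feature dimension equals the number of known classes) and an arbitrary threshold, not the paper's exact loss or training procedure.

import tensorflow as tf

def make_anchors(num_classes: int, magnitude: float = 10.0) -> tf.Tensor:
    """Fixed class anchors: scaled one-hot vectors, one per known class (an illustrative choice)."""
    return magnitude * tf.eye(num_classes)

def anchor_predict(features: tf.Tensor, anchors: tf.Tensor, threshold: float):
    """Assign each feature vector to its nearest anchor, or -1 if it is far from all anchors."""
    # Pairwise Euclidean distances between features (N, D) and anchors (C, D)
    dists = tf.norm(features[:, None, :] - anchors[None, :, :], axis=-1)
    nearest = tf.argmin(dists, axis=1).numpy()     # index of the closest anchor per sample
    min_dist = tf.reduce_min(dists, axis=1).numpy()
    nearest[min_dist > threshold] = -1             # reject samples far from every anchor
    return nearest

In the actual Class Anchor Clustering formulation, the network is also trained so that features cluster around these anchors; the sketch only shows an inference-time rejection rule.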

Installation

Prerequisites

  • Python 3.8
  • TensorFlow (version x.x)

To get started with this project, follow the installation instructions below.

  1. Clone the repository:
git clone https://github.com/Mahendra114027/openset-recognition.git
cd openset-recognition
  2. Install the required dependencies:
pip install -r requirements.txt

Project Structure

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── docs               <- Documentation for the project
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the results
│
├── osr                <- Source code for use in this project.
│   ├── config         <- Enumerations for storing configurations required by the project
│   │   └── make_dataset.py
│   │
│   ├── notebooks      <- Jupyter notebooks
│   ├── factory        <- Factory classes for models, dataset configurations, and pipelines
│   ├── models         <- All models are defined here
│   ├── evaluator      <- Evaluators for the models under consideration are defined here
│   ├── pipeline       <- Pipelines for training models
│   ├── loss           <- All custom losses are defined here
│   └── utils          <- All project utilities and helper methods are defined here
└── main.py            <- Main script to run the project

Documentation

Project documentation is available at DOCUMENTATION.md

Usage

  • Train a ClosedSetClassifier model
python main.py --model csc --mode train --dataset mnist --trial_num 0
  • Evaluate a ClosedSetClassifier model
python main.py --model csc --mode eval --dataset mnist --summary_path <path>
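For reference, the commands above imply a command-line interface roughly like the following argparse sketch. This is a guess at how main.py could parse its flags, not the actual implementation; additional models or options may exist.

import argparse

def parse_args() -> argparse.Namespace:
    """Hypothetical reconstruction of main.py's CLI from the commands shown above."""
    parser = argparse.ArgumentParser(description="Open-set recognition experiments")
    parser.add_argument("--model", required=True, help="Model identifier, e.g. csc for the ClosedSetClassifier")
    parser.add_argument("--mode", choices=["train", "eval"], required=True, help="Whether to train or evaluate")
    parser.add_argument("--dataset", choices=["mnist", "cifar10"], required=True, help="Dataset to use")
    parser.add_argument("--trial_num", type=int, default=0, help="Trial index (used when training)")
    parser.add_argument("--summary_path", type=str, default=None, help="Path to training summaries (used in eval mode)")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())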

Consolidated Results & Plots

To generate the final consolidated results across all trials and models for both the MNIST and CIFAR10 datasets, follow these steps:

  • Use the project environment (with the requirements installed) as the kernel for the notebook osr/notebooks/Consolidate Evaluation.ipynb
  • Update the required paths across all cells
  • Run all cells sequentially

Contributing

We welcome contributions to this project. To contribute, please follow these steps:

  • Fork the repository and create a new branch for your feature or bug fix.
  • Make your changes and ensure that the code follows the project's coding standards.
  • Write adequate comments to explain your changes or additions.
  • Submit a pull request with a clear and concise description of your changes.
  • We will review your pull request and merge it if it aligns with the project's goals and quality standards.

License

The license for this project is to be announced (TBA).

Acknowledgments
