We experiment on the AeBAD blade dataset and MVTec AD. You can download AeBAD from here and MVTec AD from here.
Then put the datasets in the ./datasets folder; the structure should look like this:
```
|-- data
    |-- MVTec-AD
        |-- mvtec_anomaly_detection
            |-- object (bottle, etc.)
                |-- train
                |-- test
                |-- ground_truth
    |-- AeBAD
        |-- AeBAD_S
        |-- AeBAD_V
```
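If you want to verify the layout before training, the following optional sanity check (not part of the repository; the root directory is assumed from the tree above, adjust it if yours differs) prints which expected folders are present:

```python
# Optional sanity check: verify the expected dataset layout before training.
# The root path is assumed from the tree above; adjust it to match your setup.
from pathlib import Path

DATA_ROOT = Path("./data")  # assumed dataset root

expected = [
    DATA_ROOT / "MVTec-AD" / "mvtec_anomaly_detection",
    DATA_ROOT / "AeBAD" / "AeBAD_S",
    DATA_ROOT / "AeBAD" / "AeBAD_V",
]

for path in expected:
    status = "ok" if path.is_dir() else "MISSING"
    print(f"{status:7s} {path}")
```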
Download the pre-trained MobileViTv2 model for our model here. Alternatively, you can download the pretrained model from the timm library and then use ./utils/weight_trans.py to convert the state-dict keys to fit our model.
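As a rough illustration of the timm route (the model variant and the key prefix below are assumptions; the actual mapping is implemented in ./utils/weight_trans.py):

```python
# Illustrative sketch only: download MobileViTv2 weights from timm and rename
# the state-dict keys. The real mapping lives in ./utils/weight_trans.py; the
# "backbone." prefix here is a hypothetical example, not the repo's scheme.
import timm
import torch

# "mobilevitv2_100" is one of the MobileViTv2 variants provided by timm.
model = timm.create_model("mobilevitv2_100", pretrained=True)
state_dict = model.state_dict()

# Example remapping: prepend a prefix so the keys match the target model's naming.
converted = {f"backbone.{k}": v for k, v in state_dict.items()}

torch.save(converted, "mobilevitv2_converted.pth")
```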
We recommend using a virtual environment as follows:
python>=3.10
pytorch>=1.12
cuda>=11.6
More details can be found in the requirements.txt file.
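A quick way to check that your environment satisfies these versions:

```python
# Quick environment check for the versions listed above.
import sys
import torch

print("python :", sys.version.split()[0])   # expected >= 3.10
print("pytorch:", torch.__version__)        # expected >= 1.12
print("cuda   :", torch.version.cuda)       # expected >= 11.6
print("gpu ok :", torch.cuda.is_available())
```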
Corresponding configs for the different datasets can be found in ./method_config/. To change the dataset, edit the default config path in ./utils/parser_.py and start training with main.py, or simply run one of the following scripts:
```
sh mvtec_run.sh
sh AeBAD_S_run.sh
sh AeBAD_V_run.sh
```
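If you prefer not to edit the default in ./utils/parser_.py, a command-line override is a common alternative. The sketch below is purely illustrative: the --cfg flag and the default config path are assumptions, not the repository's actual interface (check parser_.py for the real arguments).

```python
# Hypothetical sketch: selecting a dataset config from the command line.
# The --cfg flag and the default path below are assumptions, not the repo's API.
import argparse

def get_args():
    parser = argparse.ArgumentParser(description="Select a dataset-specific config")
    parser.add_argument(
        "--cfg",
        default="./method_config/MVTec-AD/config.yaml",  # hypothetical default
        help="Path to the config file for the dataset you want to run",
    )
    return parser.parse_args()

if __name__ == "__main__":
    args = get_args()
    print("Using config:", args.cfg)
```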
Once training starts, the model will be saved in ./logs_and_models, which you can change via the OUTPUT_ROOT_DIR field in the config file. After training, testing is performed automatically and the results are saved in the same directory. You can also run testing alone with main_test.py after changing the default model path.
Visualized and numerical results are shown below. You can download the best model from here. More details can be found in the paper.
We provide an official implementation of the conversion between PyTorch and ONNX, and the ONNX model can also be converted to a TensorRT engine file. More details can be found in the ONNX branch.
A C++ implementation is also available.
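As a generic illustration of the PyTorch-to-ONNX step (the model, input shape, and file name below are placeholders; the official conversion script is in the ONNX branch):

```python
# Generic PyTorch-to-ONNX export sketch; the model, input size, and file names
# are placeholders, not the repo's actual export script (see the ONNX branch).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # placeholder model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # placeholder input shape

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```

The resulting .onnx file can then be built into a TensorRT engine, for example with TensorRT's trtexec tool.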
We acknowledge the following excellent implementations: ConvMAE, MobileViTv2, MobileViTv2-pytorch, MMR.