Official implementation of our paper, AnomalyHop: An SSL-based Image Anomaly Localization Method (arXiv:2105.03797).
- Training takes about two minutes per class
- No Neural Network / No Pre-training
- Successive Subspace Learning
We provide setup information for a virtual environment (Anaconda). The environment can be installed with (conda required):
conda env create -f environment.yml
conda activate AnomalyHop
The MVTec AD dataset can be downloaded from the MVTec website.
It can also be downloaded using the following commands:
cd datasets
wget ftp://guest:[email protected]/mvtec_anomaly_detection/mvtec_anomaly_detection.tar.xz
tar -xf mvtec_anomaly_detection.tar.xz
cd ..
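After extraction, each class has its own folder containing `train/good` images and defect-labeled `test` subfolders. As a quick sanity check, the sketch below counts images per class (the `datasets/mvtec_anomaly_detection` path is an assumption based on the download commands above; adjust it if you unpacked the archive elsewhere):

```python
from pathlib import Path

# Assumed extraction path from the download commands above.
root = Path("datasets/mvtec_anomaly_detection")

for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_train = len(list((class_dir / "train" / "good").glob("*.png")))
    n_test = len(list((class_dir / "test").rglob("*.png")))
    print(f"{class_dir.name}: {n_train} train images, {n_test} test images")
```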
An example for the carpet class:
python ./src/main.py --kernel 7 6 3 2 4 --num_comp 4 4 4 4 4 --layer_of_use 1 2 3 4 5 --distance_measure glo_gaussian --hop_weights 0.2 0.2 0.4 0.5 0.1 --class_names carpet
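The `--hop_weights` flag controls how the per-hop anomaly maps are blended into the final score map. The sketch below illustrates that fusion step only; it is a minimal illustration assuming per-hop Mahalanobis-style score maps, and the function and variable names are hypothetical, not the repository's actual API:

```python
import numpy as np

def fuse_hop_scores(hop_score_maps, hop_weights):
    """Blend per-hop anomaly score maps into one final map.

    hop_score_maps: list of (H, W) arrays, one score map per hop
        (e.g. five maps for --layer_of_use 1 2 3 4 5).
    hop_weights: floats matching --hop_weights, e.g. [0.2, 0.2, 0.4, 0.5, 0.1].
    """
    fused = np.zeros_like(hop_score_maps[0], dtype=float)
    for score_map, w in zip(hop_score_maps, hop_weights):
        # Min-max normalize each hop so hops with larger raw
        # distances do not dominate the weighted sum.
        s = (score_map - score_map.min()) / (score_map.max() - score_map.min() + 1e-8)
        fused += w * s
    return fused
```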
To reproduce all results from the paper:
chmod +x run.sh
./run.sh
- Anomaly Localization (ROCAUC)
MVTec AD | AnomalyHop (ours) |
---|---|
Carpet | 0.942 |
Grid | 0.984 |
Leather | 0.991 |
Tile | 0.932 |
Wood | 0.903 |
Bottle | 0.975 |
Cable | 0.904 |
Capsule | 0.965 |
Hazelnut | 0.971 |
Metal nut | 0.956 |
Pill | 0.970 |
Screw | 0.960 |
Toothbrush | 0.982 |
Transistor | 0.981 |
Zipper | 0.966 |
All classes | 0.959 |
- Examples from the cable, capsule, and wood classes.
- Examples from the grid class.
If you want to save the figures, uncomment the following line in main.py:
plot_fig(test_imgs, scores_final, gt_mask_list, threshold, save_dir, class_name)
If you find our model useful in your research, please consider citing our paper, AnomalyHop: An SSL-based Image Anomaly Localization Method:
@article{anomalyhop,
  title={{AnomalyHop}: An SSL-based Image Anomaly Localization Method},
  author={Zhang, Kaitai and Wang, Bin and Wang, Wei and Sohrab, Fahad and Gabbouj, Moncef and Kuo, C.-C. Jay},
  journal={arXiv preprint arXiv:2105.03797},
  year={2021}
}
@inproceedings{anomalyhop_vcip,
  title={{AnomalyHop}: An SSL-based Image Anomaly Localization Method},
  author={Zhang, Kaitai and Wang, Bin and Wang, Wei and Sohrab, Fahad and Gabbouj, Moncef and Kuo, C.-C. Jay},
  booktitle={IEEE Visual Communications and Image Processing (VCIP)},
  year={2021}
}
Contact person: Bin Wang, [email protected]
The code is partially adapted from PaDiM.