
Autoencoders Without Reconstruction for Textural Anomaly Detection

Adey, P.A.; Akcay, S.; Bordewich, M.J.R.; Breckon, T.P.



Authors

P.A. Adey

S. Akcay

M.J.R. Bordewich

T.P. Breckon


Abstract

Automatic anomaly detection in natural textures is a key component within quality control for a range of high-speed, high-yield manufacturing industries that rely on camera-based visual inspection techniques. Targeting anomaly detection through the use of autoencoder reconstruction error readily facilitates training on an often more plentiful set of non-anomalous samples, without the explicit need for a representative set of anomalous training samples that may be difficult to source. Unfortunately, autoencoders struggle to reconstruct high-frequency visual information and therefore, such approaches often fail to achieve a low enough reconstruction error for non-anomalous pixels. In this paper, we propose a new approach in which the autoencoder is trained to directly output the desired per-pixel measure of abnormality without first having to perform reconstruction. This is achieved by corrupting training samples with noise and then predicting how pixels need to be shifted so as to remove the noise. Our direct approach enables the model to compress anomaly scores for normal pixels into a tight bound close to zero, resulting in very clean anomaly segmentations that significantly improve performance. We also introduce the Reflected ReLU output activation function that better facilitates training under this direct regime by leaving values that fall within the image dynamic range unmodified. Overall, an average area under the ROC curve of 96% is achieved on the texture classes of the MVTecAD benchmark dataset, surpassing that achieved by all current state-of-the-art methods.
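The two ideas in the abstract — regressing a per-pixel corrective shift for noise-corrupted inputs rather than reconstructing the image, and a Reflected ReLU that leaves in-range values unmodified — can be illustrated with a small sketch. This is a speculative reading, not the authors' reference implementation: it assumes an image dynamic range of [0, 1] and interprets "Reflected ReLU" as reflecting out-of-range values back across the nearest boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflected_relu(x):
    """Sketch of a Reflected ReLU: values inside the assumed dynamic
    range [0, 1] pass through unmodified; out-of-range values are
    reflected back across the nearest boundary."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, -x, np.where(x > 1.0, 2.0 - x, x))

# Direct training target (sketch): corrupt a clean patch with additive
# noise and regress the per-pixel shift that would remove it, instead
# of asking the autoencoder to reconstruct the patch itself.
clean = rng.random((4, 4))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)
target_shift = clean - noisy  # per-pixel correction the model should predict
```

Under this reading, a well-trained model outputs near-zero shifts on normal texture, which is consistent with the tight near-zero bound on normal-pixel anomaly scores described above.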

Citation

Adey, P., Akcay, S., Bordewich, M., & Breckon, T. (2021). Autoencoders Without Reconstruction for Textural Anomaly Detection. In 2021 International Joint Conference on Neural Networks (IJCNN). https://doi.org/10.1109/ijcnn52387.2021.9533804

Conference Name 2021 International Joint Conference on Neural Networks (IJCNN)
Conference Location Shenzhen, China
Start Date Jul 18, 2021
End Date Jul 22, 2021
Acceptance Date Apr 12, 2021
Online Publication Date Sep 20, 2021
Publication Date 2021
Deposit Date Apr 22, 2021
Publicly Available Date Apr 23, 2021
Publisher Institute of Electrical and Electronics Engineers
DOI https://doi.org/10.1109/ijcnn52387.2021.9533804

Files

Accepted Conference Proceeding (11.7 MB)
PDF

Copyright Statement
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.




