Durham Research Online
Using deep Convolutional Neural Network architectures for object classification and detection within X-ray baggage security imagery.

Akcay, S. and Kundegorski, M.E. and Willcocks, C.G. and Breckon, T.P. (2018) 'Using deep Convolutional Neural Network architectures for object classification and detection within X-ray baggage security imagery.', IEEE Transactions on Information Forensics and Security, 13 (9), pp. 2203-2215.

Abstract

We consider the use of deep Convolutional Neural Networks (CNN) with transfer learning for the image classification and detection problems posed within the context of X-ray baggage security imagery. The CNN approach requires large amounts of data to facilitate a complex end-to-end feature extraction and classification process. Within the context of X-ray security screening, the limited availability of object-of-interest data examples can thus pose a problem. To overcome this issue, we employ a transfer learning paradigm such that a pre-trained CNN, primarily trained for generalized image classification tasks where sufficient training data exists, can be optimized explicitly as a later secondary process towards this application domain. To provide a consistent feature-space comparison between this approach and traditional feature-space representations, we also train a Support Vector Machine (SVM) classifier on CNN features. We empirically show that fine-tuned CNN features yield superior performance to conventional hand-crafted features on object classification tasks within this context. Overall we achieve 0.994 accuracy based on AlexNet features trained with an SVM classifier. In addition to classification, we also explore the applicability of multiple CNN-driven detection paradigms such as sliding window based CNN (SW-CNN), Faster RCNN (F-RCNN), Region-based Fully Convolutional Networks (R-FCN) and YOLOv2. We train numerous networks tackling both single and multiple detections over SW-CNN/F-RCNN/R-FCN/YOLOv2 variants. YOLOv2, F-RCNN, and R-FCN provide superior results to the more traditional SW-CNN approaches. With the use of YOLOv2, using input images of size 544×544, we achieve 0.885 mean average precision (mAP) for a six-class object detection problem. The same approach with an input of size 416×416 yields 0.974 mAP for the two-class firearm detection problem and requires approximately 100ms per image.
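The classification pipeline described above — fixed CNN features fed to an SVM — can be sketched in miniature. This is an illustrative stand-in only: the random convolution filters below play the role of the paper's pre-trained AlexNet feature extractor, and the synthetic "scans" replace real X-ray imagery; none of the names or data here come from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Fixed random 3x3 filters stand in for a pre-trained CNN's convolutional
# layers (the paper uses AlexNet weights; these are hypothetical).
filters = rng.normal(size=(8, 3, 3))

def extract_features(img):
    """Convolve with each filter, apply ReLU, then global-average-pool."""
    h, w = img.shape
    feats = []
    for f in filters:
        resp = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                resp[i, j] = np.sum(img[i:i + 3, j:j + 3] * f)
        feats.append(np.maximum(resp, 0).mean())  # ReLU + average pooling
    return np.array(feats)

# Synthetic grayscale "scans": class 1 has a higher mean intensity,
# so the two classes are separable in feature space.
n = 30
imgs0 = rng.normal(0.0, 1.0, size=(n, 16, 16))
imgs1 = rng.normal(1.0, 1.0, size=(n, 16, 16))
X = np.array([extract_features(im) for im in np.concatenate([imgs0, imgs1])])
y = np.array([0] * n + [1] * n)

# Train an SVM on the fixed CNN-style features, as in the paper's
# feature-space comparison.
svm = SVC(kernel="linear").fit(X, y)
print(f"training accuracy: {svm.score(X, y):.3f}")
```

The key design point mirrored here is that the convolutional weights are frozen: only the SVM is trained on the extracted features, which is what makes the approach viable when labelled X-ray examples are scarce.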
Overall we illustrate the comparative performance of these techniques and show that object localization strategies cope well with cluttered X-ray security imagery where classification techniques fail.
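The sliding window (SW-CNN) detection paradigm compared above can likewise be sketched. The scorer here is a toy mean-intensity function standing in for a CNN classifier, and the function name, window size, and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def sliding_window_detect(score_fn, img, win=8, stride=4, thresh=0.5):
    """Slide a fixed window over the image; keep windows the scorer flags.

    Returns (row, col, height, width, score) tuples for each detection.
    """
    h, w = img.shape
    boxes = []
    for i in range(0, h - win + 1, stride):
        for j in range(0, w - win + 1, stride):
            score = score_fn(img[i:i + win, j:j + win])
            if score > thresh:
                boxes.append((i, j, win, win, score))
    return boxes

# Toy scorer standing in for a per-window CNN classifier.
score_fn = lambda patch: patch.mean()

# Synthetic 32x32 "scan" with one bright 8x8 object at row 16, col 8.
img = np.zeros((32, 32))
img[16:24, 8:16] = 1.0

detections = sliding_window_detect(score_fn, img)
best = max(detections, key=lambda b: b[-1])
print(best[:2])  # prints (16, 8): the window exactly covering the object
```

The nested loop over window positions is also what makes SW-CNN slow relative to F-RCNN, R-FCN, and YOLOv2: the classifier runs once per window rather than once per image, which is the trade-off the abstract's timing comparison reflects.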

Item Type: Article
Full text: (AM) Accepted Manuscript, PDF (4943Kb)
Status: Peer-reviewed
Publisher Web site: https://doi.org/10.1109/tifs.2018.2812196
Publisher statement: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: 28 February 2018
Date deposited: 01 March 2018
Date of first online publication: 05 March 2018
Date first made open access: 01 March 2018
