
Durham Research Online

Camera-Based System for the Automatic Detection of Vehicle Axle Count and Speed Using Convolutional Neural Networks

Miles, Victoria and Gurr, Francis and Giani, Stefano (2022) 'Camera-Based System for the Automatic Detection of Vehicle Axle Count and Speed Using Convolutional Neural Networks.', International Journal of Intelligent Transportation Systems Research, 20 (3). pp. 778-792.


This paper outlines the development of a non-intrusive alternative to current intelligent transportation systems using roadside video cameras. The use of video to determine the axle count and speed of vehicles traveling on major roads was investigated. Two instances of a convolutional neural network, YOLOv3, were trained to perform object detection for the purposes of axle detection and speed measurement, achieving 95% and 98% mAP, respectively. Outputs from the axle detection were processed to produce axle counts for each vehicle with 93% accuracy across all vehicles in which all axles are visible. A simple Kalman filter was used to track vehicles across the video frame; it worked well but struggled with longer periods of occlusion. The camera was calibrated for speed measurement using road markings in place of a reference object. The calibration method proved accurate; however, a constant error was introduced when the road markings did not conform to the government specifications. The average vehicle speeds calculated were within the expected range. Both models achieved real-time performance.
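The simple Kalman filter mentioned in the abstract can be sketched as a constant-velocity tracker over a vehicle's image-plane position. The state layout, noise values, and class name below are illustrative assumptions, not the authors' actual parameters; the behaviour under occlusion (coasting on predictions while uncertainty grows) is, however, exactly why long occlusions degrade tracking.

```python
import numpy as np

class KalmanTracker:
    """Minimal constant-velocity Kalman filter for one image coordinate.

    State is [position, velocity]; only position is measured. All
    matrices and noise magnitudes here are placeholder assumptions.
    """

    def __init__(self, x0: float, dt: float = 1.0):
        self.x = np.array([x0, 0.0])                 # state: [pos, vel]
        self.P = np.eye(2) * 500.0                   # state covariance (uncertain start)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = np.eye(2) * 0.01                    # process noise
        self.R = np.array([[4.0]])                   # measurement noise

    def predict(self) -> float:
        # Propagate state and covariance one frame forward. During
        # occlusion the tracker coasts on this step alone, so P grows.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z: float) -> float:
        # Fuse a new detection (measured position z) into the state.
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

# Track a vehicle moving ~2 px/frame across four frames of detections.
tracker = KalmanTracker(x0=100.0)
for z in [102.0, 104.1, 106.0, 107.9]:
    tracker.predict()
    est = tracker.update(z)
```

After a few updates the velocity component converges toward the true pixel speed, which is what lets the filter bridge short detection gaps.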
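The road-marking calibration described in the abstract amounts to using a marking of known real-world length as a pixel-to-metre scale, which also makes clear why markings that deviate from specification introduce a constant multiplicative error. The function names and the 9 m marking length below are placeholder assumptions, not values from the paper.

```python
def pixels_per_metre(marking_length_px: float, marking_length_m: float) -> float:
    """Scale factor from a road marking of known real-world length.

    If the painted marking is longer or shorter than the assumed
    specification length, this factor is wrong by a constant ratio,
    and every speed estimate inherits that same constant error.
    """
    return marking_length_px / marking_length_m

def speed_kmh(displacement_px: float, dt_s: float, px_per_m: float) -> float:
    """Convert per-frame pixel displacement to speed in km/h."""
    metres_per_second = displacement_px / px_per_m / dt_s
    return metres_per_second * 3.6

# Example: a marking assumed to be 9 m long spans 180 px, so 20 px/m.
# A vehicle moving 10 px between frames at 25 fps (dt = 0.04 s)
# travels 0.5 m per frame, i.e. 12.5 m/s = 45 km/h.
scale = pixels_per_metre(180.0, 9.0)
v = speed_kmh(10.0, 0.04, scale)
```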

Item Type: Article
Full text: (AM) Accepted Manuscript, PDF (publisher-imposed embargo)
Full text: (VoR) Version of Record, PDF, available under a Creative Commons Attribution 4.0 License
Publisher Web site:
Publisher statement: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit
Date accepted: 09 September 2022
Date deposited: 02 September 2022
Date of first online publication: 17 September 2022
Date first made open access: 09 November 2022
