
Durham Research Online

Multi-Modal Learning for Real-Time Automotive Semantic Foggy Scene Understanding via Domain Adaptation

Alshammari, N. and Akcay, S. and Breckon, T.P. (2021) 'Multi-Modal Learning for Real-Time Automotive Semantic Foggy Scene Understanding via Domain Adaptation.', IEEE Intelligent Transportation Systems Society.


Robust semantic scene segmentation for automotive applications is a challenging problem in two key aspects: (1) labelling every individual scene pixel and (2) performing this task under unstable weather and illumination changes (e.g., foggy weather), which result in poor outdoor scene visibility. Such visibility limitations lead to non-optimal performance of generalised deep convolutional neural network-based semantic scene segmentation. In this paper, we propose an efficient end-to-end automotive semantic scene understanding approach that is robust to foggy weather conditions. As an end-to-end pipeline, our proposed approach provides: (1) the transformation of imagery from foggy to clear weather conditions using a domain transfer approach (correcting for poor visibility) and (2) semantic segmentation of the scene using a competitive encoder-decoder architecture with low computational complexity (enabling real-time performance). Our approach incorporates RGB colour, depth and luminance images via distinct encoders with dense connectivity and feature fusion to effectively exploit information from different inputs, which contributes to an optimal feature representation within the overall model. Using this architectural formulation with dense skip connections, our model achieves comparable performance to contemporary approaches at a fraction of the overall model complexity.
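The abstract describes separate per-modality encoders (RGB colour, depth, luminance) whose features are fused before decoding into per-pixel class labels. The paper's actual layer configuration is not given on this page, so the following is only a minimal NumPy sketch of that general multi-modal fusion idea: every function name, channel count and layer here is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of multi-modal feature fusion for semantic
# segmentation. Layer sizes, names and the 1x1-convolution "encoders"
# are illustrative assumptions, not the paper's architecture.
import numpy as np

def encode(x, out_channels, rng):
    """Toy 'encoder': a 1x1 convolution (a per-pixel linear map + ReLU)
    projecting an H x W x C modality into H x W x out_channels features."""
    h, w, c = x.shape
    weights = rng.standard_normal((c, out_channels)) * 0.1
    return np.maximum(x.reshape(-1, c) @ weights, 0.0).reshape(h, w, out_channels)

def fuse_and_segment(rgb, depth, luminance, num_classes=19, seed=0):
    """Encode each modality separately, fuse by channel concatenation,
    then map the fused features to per-pixel class labels."""
    rng = np.random.default_rng(seed)
    feats = [
        encode(rgb, 16, rng),        # RGB colour encoder
        encode(depth, 8, rng),       # depth encoder
        encode(luminance, 8, rng),   # luminance encoder
    ]
    fused = np.concatenate(feats, axis=-1)  # feature fusion across modalities
    h, w, c = fused.shape
    head = rng.standard_normal((c, num_classes)) * 0.1
    scores = (fused.reshape(-1, c) @ head).reshape(h, w, num_classes)
    return scores.argmax(axis=-1)           # one class label per pixel

# Example: a tiny 4x4 "scene" with 3-channel RGB, 1-channel depth/luminance
rng = np.random.default_rng(1)
labels = fuse_and_segment(rng.random((4, 4, 3)),
                          rng.random((4, 4, 1)),
                          rng.random((4, 4, 1)))
print(labels.shape)  # a 4x4 map of class indices
```

In a real model each encoder would be a deep convolutional stack with dense skip connections to the decoder; the sketch keeps only the structural point that each modality has its own encoder and the features are concatenated before classification.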

Item Type: Conference item (Paper)
Full text: Publisher-imposed embargo
(AM) Accepted Manuscript
File format - PDF
Publisher Web site:
Date accepted: 23 April 2021
Date deposited: 28 May 2021
Date of first online publication: 11 July 2021
Date first made open access: No date available
