Durham Research Online

Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer.

Atapour-Abarghouei, A. and Breckon, T.P. (2018) 'Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer.', in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018), 18-22 June 2018, Salt Lake City, Utah. Piscataway, NJ: IEEE, pp. 2800-2810.

Abstract

Monocular depth estimation using learning-based approaches has shown promise in recent years. However, most monocular depth estimators either rely on large quantities of ground-truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediate step using a secondary supervisory signal, leading to blurring and other artefacts. Training a depth estimation model on pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias: the inability to apply a model trained on synthetic data to real-world scenarios. Drawing on advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel-perfect depth from a single real-world color image, based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques.
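
The abstract frames the link between image style transfer and domain adaptation in terms of Maximum Mean Discrepancy (MMD). As an illustrative aside only (the paper itself adapts by transferring real images into the synthetic style via adversarial training, rather than by minimising MMD directly), the following NumPy sketch computes a Gaussian-kernel estimate of squared MMD between a batch of synthetic-domain features and a batch of real-domain features, as a measure of the domain gap. All names and values here are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of x and the rows of y."""
    # Pairwise squared Euclidean distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    d2 = (x**2).sum(axis=1)[:, None] + (y**2).sum(axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two sample sets x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

# Toy usage: stand-ins for feature batches from the two domains.
rng = np.random.default_rng(0)
synthetic_feats = rng.normal(0.0, 1.0, size=(64, 16))  # features of synthetic images
real_feats = rng.normal(0.5, 1.0, size=(64, 16))       # features of real images
print(f"squared MMD: {mmd2(synthetic_feats, real_feats):.4f}")  # larger => bigger domain gap
```

A small MMD indicates the two feature distributions are close; in this setting, it would suggest that style-transferred real images have landed in the synthetic domain on which the depth estimator was trained.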

Item Type: Book chapter
Keywords: Monocular depth, Generative adversarial network, GAN, Depth map, Disparity, Depth from single image
Full text: (AM) Accepted Manuscript (PDF, 4070 KB)
Status: Peer-reviewed
Publisher Web site: https://doi.org/10.1109/CVPR.2018.00296
Publisher statement: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: 19 February 2018
Date deposited: 20 March 2018
Date of first online publication: 17 December 2018
Date first made open access: 29 November 2021
