
Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer

Atapour-Abarghouei, A.; Breckon, T.P.

Abstract

Learning-based approaches to monocular depth estimation have shown considerable promise in recent years. However, most monocular depth estimators either rely on large quantities of ground truth depth data, which is extremely expensive and difficult to obtain, or predict disparity as an intermediary step using a secondary supervisory signal, leading to blurring and other artefacts. Training a depth estimation model on pixel-perfect synthetic data can resolve most of these issues but introduces the problem of domain bias: the inability to apply a model trained on synthetic data to real-world scenarios. Exploiting advances in image style transfer and its connections with domain adaptation (Maximum Mean Discrepancy), we take advantage of style transfer and adversarial training to predict pixel-perfect depth from a single real-world colour image, based on training over a large corpus of synthetic environment data. Experimental results indicate the efficacy of our approach compared to contemporary state-of-the-art techniques.
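The two-stage inference described in the abstract (restyle a real-world image into the synthetic training domain, then estimate depth from the restyled image) can be sketched as below. This is a minimal illustration of the data flow only, assuming PyTorch; the single-convolution networks are hypothetical stand-ins for the paper's trained style-transfer and depth-estimation models, and the names `style_transfer_net`, `depth_net`, and `estimate_depth` are ours, not the authors'.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two trained networks: a style-transfer
# network mapping real-world RGB images into the synthetic domain's style,
# and a depth estimator trained purely on synthetic (image, depth) pairs.
# Real models would be deep encoder-decoders; these single convolutions
# exist only to illustrate input/output shapes.
style_transfer_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # RGB -> restyled RGB
depth_net = nn.Conv2d(3, 1, kernel_size=3, padding=1)           # RGB -> depth map

def estimate_depth(real_rgb: torch.Tensor) -> torch.Tensor:
    """Two-stage inference: restyle the real image toward the synthetic
    training domain, then predict per-pixel depth from the result."""
    with torch.no_grad():
        restyled = style_transfer_net(real_rgb)  # bridge the domain gap
        depth = depth_net(restyled)              # one depth value per pixel
    return depth

# A single 256x256 colour frame (batch of 1, channels-first).
frame = torch.rand(1, 3, 256, 256)
depth_map = estimate_depth(frame)
print(depth_map.shape)  # torch.Size([1, 1, 256, 256])
```

Note that the depth map retains the spatial resolution of the input frame, which is what allows the approach to run per-frame in real time once both networks are trained offline on the synthetic corpus.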

Citation

Atapour-Abarghouei, A., & Breckon, T. (2018). Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer. In Proc. Computer Vision and Pattern Recognition (pp. 2800-2810). https://doi.org/10.1109/CVPR.2018.00296

Conference Name 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference Location Salt Lake City, Utah, USA
Start Date Jun 18, 2018
End Date Jun 22, 2018
Acceptance Date Feb 19, 2018
Online Publication Date Dec 17, 2018
Publication Date 2018
Deposit Date Mar 19, 2018
Publicly Available Date Mar 20, 2018
Pages 2800-2810
Series ISSN 2575-7075
Book Title Proc. Computer Vision and Pattern Recognition
DOI https://doi.org/10.1109/CVPR.2018.00296
Keywords monocular depth, generative adversarial network, GAN, depth map, disparity, depth from single image, style transfer
Public URL https://durham-repository.worktribe.com/output/1145708
Publisher URL https://breckon.org/toby/publications/papers/abarghouei18monocular.pdf

Files

Accepted Conference Proceeding (4.2 Mb)
PDF

Copyright Statement
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
