
Durham Research Online

Improved depth recovery in consumer depth cameras via disparity space fusion within cross-spectral stereo.

Payen de La Garanderie, G. and Breckon, T.P. (2014) 'Improved depth recovery in consumer depth cameras via disparity space fusion within cross-spectral stereo', in Proceedings of the British Machine Vision Conference, p. 107.


We address the issue of improving depth coverage in consumer depth cameras through the combined use of cross-spectral stereo and near infra-red structured light sensing. Specifically, we show that fusing disparity from these two modalities within the disparity space image, prior to disparity optimization, facilitates the recovery of scene depth information in regions where structured light sensing fails. This joint approach, which leverages disparity information from both structured light and cross-spectral sensing, enables the recovery of global scene depth comprising both texture-less object depth, where conventional stereo otherwise fails, and highly reflective object depth, where structured light (and similar active sensing) commonly fails. The proposed solution is illustrated using dense gradient feature matching and is shown to outperform prior approaches that fuse cross-spectral stereo depth at a late stage, as a facet of improved sensing for consumer depth cameras.
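The core idea of the abstract — blending per-pixel matching costs from the two modalities inside the disparity space image (DSI) before disparity selection, rather than fusing finished depth maps afterwards — can be sketched as follows. This is a minimal illustration under assumed conventions (cost volumes as H x W x D arrays, lower cost = better match, a simple winner-take-all selection in place of a full disparity optimization); the function names, the blending weight `w_sl`, and the validity mask are hypothetical and not the authors' implementation.

```python
import numpy as np

def fuse_disparity_space(dsi_stereo, dsi_sl, sl_valid, w_sl=0.5):
    """Blend two H x W x D disparity space images (cost volumes).

    dsi_stereo : matching costs from cross-spectral stereo (lower = better)
    dsi_sl     : costs derived from the structured light depth estimate
    sl_valid   : H x W boolean mask, True where structured light is valid
    w_sl       : illustrative blending weight for the structured light costs
    """
    fused = dsi_stereo.copy()
    # Only where structured light returned a valid measurement do we mix
    # its costs in; elsewhere the cross-spectral stereo costs stand alone.
    fused[sl_valid] = (1.0 - w_sl) * dsi_stereo[sl_valid] + w_sl * dsi_sl[sl_valid]
    return fused

def winner_take_all(dsi):
    """Pick, per pixel, the disparity index with minimum fused cost."""
    return np.argmin(dsi, axis=2)

if __name__ == "__main__":
    h, w, d = 4, 4, 8
    rng = np.random.default_rng(0)
    dsi_stereo = rng.random((h, w, d))
    dsi_sl = rng.random((h, w, d))
    sl_valid = rng.random((h, w)) > 0.5
    disparity = winner_take_all(fuse_disparity_space(dsi_stereo, dsi_sl, sl_valid))
    print(disparity.shape)  # (4, 4)
```

In the paper the fused volume would instead feed a global disparity optimization; winner-take-all is used here only to keep the sketch self-contained.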

Item Type: Book chapter
Full text: (VoR) Version of Record
Date accepted: No date available
Date deposited: 04 February 2015
Date of first online publication: September 2014
Date first made open access: No date available

