Depth from diffusion

Changyin Zhou*, Oliver Cossairt, Shree Nayar

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

40 Scopus citations

Abstract

An optical diffuser is an element that scatters light and is commonly used to soften or shape illumination. In this paper, we propose a novel depth estimation method that places a diffuser in the scene prior to image capture. We call this approach depth-from-diffusion (DFDiff). We show that DFDiff is analogous to conventional depth-from-defocus (DFD), where the scatter angle of the diffuser determines the effective aperture of the system. The main benefit of DFDiff is that while DFD requires very large apertures to improve depth sensitivity, DFDiff only requires an increase in the diffusion angle - a much less expensive proposition. We perform a detailed analysis of the image formation properties of a DFDiff system, and show a variety of examples demonstrating greater precision in depth estimation when using DFDiff.
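
To make the abstract's analogy concrete, the minimal sketch below (not from the paper: the function names, parameter values, and the simple geometric-optics blur formulas are illustrative assumptions) compares the depth-dependent blur of a conventional wide-aperture lens with that of a diffuser placed just in front of the object, with the camera focused on the diffuser plane.

# Illustrative sketch only: standard thin-lens and small-angle scatter
# approximations, with parameter values chosen here for demonstration.
import math

def dfd_blur_diameter(aperture_d, focal_len, focus_dist, obj_dist):
    # Geometric blur-circle diameter for a thin lens of aperture diameter
    # aperture_d (all lengths in the same units), focused at focus_dist,
    # imaging a point at obj_dist.
    return aperture_d * abs(obj_dist - focus_dist) / obj_dist * focal_len / (focus_dist - focal_len)

def dfdiff_blur_diameter(scatter_angle_deg, obj_to_diffuser):
    # Blur diameter on the diffuser plane for a point obj_to_diffuser behind
    # a diffuser with full scatter angle scatter_angle_deg; the camera is
    # assumed to be focused on the diffuser.
    return 2.0 * obj_to_diffuser * math.tan(math.radians(scatter_angle_deg) / 2.0)

if __name__ == "__main__":
    # Conventional DFD: 50 mm f/1.8 lens focused at 1 m, object 10 mm behind focus.
    print("DFD blur (mm):    %.4f" % dfd_blur_diameter(50 / 1.8, 50, 1000, 1010))
    # DFDiff: 10-degree diffuser, object 10 mm behind the diffuser.
    print("DFDiff blur (mm): %.4f" % dfdiff_blur_diameter(10, 10))

Under these illustrative numbers, the 10-degree diffuser produces roughly two orders of magnitude more blur for a 10 mm depth offset than the f/1.8 lens focused at 1 m, which mirrors the paper's point that increasing the diffusion angle is a far cheaper route to depth sensitivity than enlarging the lens aperture.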

Original language: English (US)
Title of host publication: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2010
Pages: 1110-1117
Number of pages: 8
DOIs
State: Published - 2010
Event: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2010 - San Francisco, CA, United States
Duration: Jun 13, 2010 - Jun 18, 2010

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Other

Other: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2010
Country/Territory: United States
City: San Francisco, CA
Period: 6/13/10 - 6/18/10

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
