Multiplexed Illumination for Scene Recovery under Global Illumination

Global illumination effects such as inter-reflections and subsurface scattering introduce systematic, and often significant, errors in scene recovery using active illumination. Recently, it was shown that the direct and global components can be separated efficiently for a scene illuminated with a single light source [Nayar 2006]. In this project, we study the problem of direct-global separation for multiple light sources. We derive a theoretical lower bound on the number of required images and propose a multiplexed illumination scheme. We analyze the signal-to-noise ratio (SNR) characteristics of the proposed illumination multiplexing method in the context of direct-global separation. We apply our method to several scene recovery techniques that require multiple light sources, including shape from shading, structured-light 3D scanning, photometric stereo, and reflectance estimation. Both simulation and experimental results show that the proposed method accurately recovers scene information with fewer images than sequentially separating the direct and global components for each light source.
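
As background, the single-source separation [Nayar 2006] on which our multiplexed scheme builds can be summarized with a short sketch. This is a minimal illustration assuming shifted high-frequency patterns that each light roughly half of the scene points; the function and variable names below are ours, not from the paper.

import numpy as np

def separate_direct_global(images):
    # images: (K, H, W) stack captured under K shifted high-frequency patterns,
    # each pattern illuminating roughly 50% of the scene points.
    imgs = np.asarray(images, dtype=np.float64)
    L_max = imgs.max(axis=0)      # every pixel is lit in at least one pattern
    L_min = imgs.min(axis=0)      # ... and unlit in at least one pattern
    direct = L_max - L_min        # direct component
    global_ = 2.0 * L_min         # global component (for a 50%-on pattern)
    return direct, global_

Repeating this procedure sequentially for each of N light sources multiplies the image count by N; this is the cost the multiplexed scheme reduces.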

Publications

Multiplexed Illumination for Scene Recovery in the Presence of Global Illumination

Jinwei Gu, Toshihiro Kobayashi, Mohit Gupta, Shree K. Nayar

Proc. ICCV 2011

oral presentation

(De) Focusing on Global Light Transport for Active Scene Recovery

Mohit Gupta, Yuandong Tian, Srinivasa G. Narasimhan, Li Zhang

Proc. CVPR 2009

oral presentation

ICCV 2011 Supplementary Video

This video includes more experimental results (with narration).

Scene recovery results for a V-groove in several applications

(a) shape from shading (one source); (b) intensity ratio (two sources); (c) phase shifting (three sources); and (d) photometric stereo (three sources). Row 1: One of the captured images without direct-global separation. Row 2: The separated direct component using our method. Row 3: Recovered depth profiles. Our method faithfully recovers scene information, while requiring fewer images than applying the separation method [Nayar 2006] sequentially.

Projected light patterns and captured images for phase shifting

(a) The amplitudes for the three (collocated) light sources. (b) We modulate the three light sources with high-frequency sinusoids that shift over time and project the modulated light patterns simultaneously. (c) The corresponding captured images.
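
A schematic sketch of this modulation step is given below. The carrier period and the per-source, per-frame shifts are placeholders chosen for illustration; they are not the modulation codes derived in the paper.

import numpy as np

def multiplexed_patterns(amplitudes, num_frames, carrier_period=8):
    # amplitudes: (N, W) low-frequency amplitude patterns, one per (collocated)
    # source, e.g. the three phase-shifted sinusoids of panel (a).
    # Each source is modulated by a high-frequency carrier whose phase shifts
    # from frame to frame; the modulated patterns are summed and projected together.
    amplitudes = np.asarray(amplitudes, dtype=np.float64)
    N, W = amplitudes.shape
    x = np.arange(W)
    frames = np.zeros((num_frames, W))
    for t in range(num_frames):
        for n in range(N):
            carrier = 0.5 * (1.0 + np.cos(2.0 * np.pi * (x / carrier_period
                                                         + t / num_frames
                                                         + n / N)))
            frames[t] += amplitudes[n] * carrier   # simultaneous projection
    return frames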

BRDF and surface normal estimation of a shiny cake mold

In this example, we used N=9 lights to recover the BRDF and surface normal map for a concave, shiny cake mold. Column 1: One of the direct components (without separation, it is one of the captured images). Column 2: Recovered surface normal map (color coded). Column 3: Estimated BRDF (rendered as a sphere under natural environment lighting). Column 4: Rendered images with the estimated BRDF and surface normals. Column 5: Recovered depth for the selected region (red rectangle).

Recovery of normals and depths of a banana using photometric stereo

The banana skin is translucent, resulting in subsurface scattering. Without separating the global illumination, the mean square error in the recovered depth is 19%. With our method, the error is 4%.
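
For reference, once the direct components are separated, Lambertian photometric stereo reduces to a per-pixel least-squares problem. The sketch below uses the standard formulation; light calibration, shadow handling, and normal integration into depth are omitted.

import numpy as np

def photometric_stereo(direct_images, light_dirs):
    # direct_images: (K, H, W) separated direct components, one per light source.
    # light_dirs: (K, 3) unit light directions.
    # Lambertian model: I = L @ (albedo * n), solved per pixel by least squares.
    K, H, W = direct_images.shape
    I = direct_images.reshape(K, -1)
    L = np.asarray(light_dirs, dtype=np.float64)
    G, _, _, _ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * n, shape (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals.reshape(3, H, W), albedo.reshape(H, W)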

Depth recovery of a pop-up book using phase-shifting

The scene exhibits strong inter-reflections, resulting in large errors in the recovered depth. Our method removes the inter-reflections, significantly reducing the depth errors.
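
For context, with three sinusoidal patterns whose phases are shifted by 2π/3, the wrapped projector phase at each pixel follows the standard three-step phase-shifting relation, applied here to the separated direct components. This is a sketch; phase unwrapping and projector-camera triangulation are still needed to obtain depth.

import numpy as np

def three_step_phase(I1, I2, I3):
    # I1, I2, I3: images under sinusoidal patterns with phase shifts
    # -2*pi/3, 0, +2*pi/3 (here, the direct components after removing
    # inter-reflections). Returns the wrapped phase in (-pi, pi] per pixel.
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)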

Signal-to-Noise Ratio (SNR) characteristics of the proposed method

The x-axis is the ratio between the standard deviations of the photon noise (σ_p) and the read noise (σ_r). The y-axis is the SNR gain of the proposed method with respect to the sequential separation method [Nayar 2006]. The red line is the theoretical result, and the blue line is the simulation result (for 30 light sources). As expected, the SNR gain is √(2N/3) if read noise dominates, and it decreases as the photon noise increases, approaching the asymptotic value of 0.83.
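
As a quick check of the plotted limits, the read-noise-limited gain √(2N/3) evaluates to roughly 4.5 for the N = 30 sources used in the simulation:

import numpy as np

N = 30
gain_read_noise = np.sqrt(2.0 * N / 3.0)   # SNR gain when read noise dominates
print(round(gain_read_noise, 2))           # ~4.47 for N = 30
# As photon noise dominates, the gain drops toward the asymptotic value of 0.83.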
