When Does Computational Imaging Improve Performance?
IEEE Trans. on Image Processing
Performance analysis of computational imaging techniques such as defocus deblurring, motion deblurring, and light field capture, with practical guidelines for imaging system design.
A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies noise. While it is well understood that optical coding can increase performance at low light levels, little is known about the quantitative performance advantage of computational imaging in general settings. In this paper, we derive the performance bounds for various computational imaging techniques. We then discuss the implications of these bounds for several real-world scenarios (illumination conditions, scene properties and sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination brighter than typical daylight. These results can be readily used by practitioners to design the most suitable imaging systems given the application at hand.
All CI techniques discussed in this paper can be modeled using a linear image formation model. In order to recover the desired image, these techniques require an additional decoding step, which amplifies noise. Impulse imaging techniques, by contrast, measure the signal directly and require no decoding: a stopped-down aperture avoids defocus blur, a shorter exposure avoids motion blur, and a pinhole mask placed near the sensor directly measures the light field.
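The trade-off above can be made concrete with a small numeric sketch. The coding matrix and noise level below are arbitrary illustrations, not values from the paper: a coded measurement y = Hx + n is decoded by inverting H, and the decoding step scales the sensor noise by the row norms of the inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D example of the linear image formation model:
# coded capture y = H x + noise, then linear decoding x_hat = H^-1 y.
n = 8
x = rng.uniform(0.2, 1.0, n)          # ground-truth signal

# A simple invertible binary code standing in for optical coding;
# any specific code in the paper would differ.
H = np.triu(np.ones((n, n)))          # each measurement sums a subset of x

sigma = 0.05                          # additive sensor noise (std)
y = H @ x + sigma * rng.normal(size=n)

x_hat = np.linalg.solve(H, y)         # decoding step

# Decoding amplifies noise: the decoded noise std scales with the
# row norms of H^-1 (equivalently, with 1/singular values of H).
H_inv = np.linalg.inv(H)
amplification = np.sqrt(np.mean(np.sum(H_inv**2, axis=1)))
print(f"noise amplification factor: {amplification:.2f}")
```

Impulse imaging corresponds to H = I, where the amplification factor is exactly 1; any other invertible code pays a decoding penalty.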
We analyze the performance of a variety of CI techniques and derive a bound on their SNR gain. We show that CI techniques provide a significant performance advantage only if the average signal level is significantly lower than the sensor read noise variance. To verify the bound, we simulate the performance of several defocus deblurring, motion deblurring, and light field multiplexing cameras. All techniques perform at or below the bound derived in the paper.
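The shape of this result can be illustrated with the classical S-matrix (Hadamard-type) multiplexing analysis under an affine noise model, which is in the spirit of the paper's bound but not its exact expression. The read noise value and measurement count below are assumed for illustration.

```python
import numpy as np

def snr_gain(J, sigma_r, N):
    """Illustrative ratio of multiplexed SNR to impulse SNR.

    J       : average per-pixel signal (photo-electrons)
    sigma_r : sensor read noise (electrons, std)
    N       : number of multiplexed measurements
    """
    # Impulse camera: variance = photon noise + read noise.
    var_impulse = J + sigma_r**2
    # Each coded measurement sums ~(N+1)/2 signals; S-matrix decoding
    # multiplies the per-measurement variance by 4N / (N+1)^2
    # (the classical Harwit-Sloane analysis).
    var_meas = (N + 1) / 2 * J + sigma_r**2
    var_coded = 4 * N / (N + 1) ** 2 * var_meas
    return np.sqrt(var_impulse / var_coded)

sigma_r = 4.0                          # assumed read noise
for J in [0.1, 1.0, sigma_r**2, 100.0, 10000.0]:
    print(f"J = {J:8.1f} e-   gain = {snr_gain(J, sigma_r, 255):5.2f}")
```

The gain is large when J is far below the read noise variance and drops below 1 when photon noise dominates, which is the qualitative behavior the bound captures.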
We provide guidelines for when to use CI given an imaging scenario. The scenarios are defined in terms of the application (e.g., motion deblurring, defocus deblurring), real-world lighting (e.g., moonlit night or cloudy day, indoor or outdoor), scene properties (albedo, object velocities, depth range) and sensor characteristics. These figures show contour plots of the SNR gain bound for motion and defocus deblurring cameras. For both cameras, the SNR gain is negligible whenever the illuminance exceeds 125 lux (typical indoor lighting).
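To see why common lighting levels put cameras in the photon-limited regime, one can convert scene illuminance to photo-electrons per pixel using standard photometric relations. The camera parameters below (f-number, exposure, pixel pitch, quantum efficiency) are assumed values for illustration, not parameters from the paper.

```python
import numpy as np

def electrons_per_pixel(lux, albedo=0.5, f_number=2.8, t_exp=1/50,
                        pixel_pitch=5e-6, qe=0.5):
    """Rough photo-electron count per pixel for a Lambertian scene."""
    # Scene luminance (cd/m^2) from illuminance and albedo.
    L = albedo * lux / np.pi
    # Image-plane exposure via the camera equation (magnification << 1).
    E_img = np.pi * L * t_exp / (4 * f_number**2)   # lux-seconds
    # Monochromatic approximation at 555 nm: 1 lux ~ 1/683 W/m^2,
    # photon energy ~ 3.58e-19 J.
    photons = E_img / 683 / 3.58e-19 * pixel_pitch**2
    return qe * photons

for lux in [0.1, 10, 125, 10000]:    # moonlight .. indoor .. daylight
    print(f"{lux:8.1f} lux -> {electrons_per_pixel(lux):,.0f} e-")
```

Under these assumptions, typical indoor lighting already yields thousands of electrons per pixel, far above the read noise variance of a modern sensor, so the SNR gain bound is close to 1 there.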
We use simulations to compare the performance of flutter shutter and impulse cameras (i.e., a camera with a short exposure). The top row of this figure shows an image blurred by a flutter shutter sequence. The second and fourth rows show the results after deblurring with linear inversion and the BM3D algorithm, respectively. The third row shows the results from the impulse camera. The last row shows the results of denoising the images in the third row with the BM3D algorithm. The flutter shutter camera has a higher SNR than the impulse camera only when the illumination is less than 100 lux.
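A miniature version of this simulation can be written in a few lines. The 1D scene, binary code, and noise level below are arbitrary stand-ins, not the sequence or settings used in the figure: motion blur is modeled as circular convolution with the shutter code, and deblurring is a regularized linear inversion in the Fourier domain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D flutter-shutter simulation (illustrative, not the paper's setup).
n = 64
x = rng.uniform(0.2, 1.0, n)                    # 1D "scene"

code = rng.integers(0, 2, 16).astype(float)     # arbitrary binary sequence
code[0] = 1.0                                   # shutter starts open
psf = np.zeros(n)
psf[:16] = code

sigma = 0.02                                    # sensor noise (std)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
y += sigma * rng.normal(size=n)

# Regularized linear inversion (deblurring). Noise is amplified
# wherever the code's spectrum is small.
H = np.fft.fft(psf)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H)**2 + 1e-3)))

# Impulse camera: a single open time slot -- no blur, but far less
# collected light relative to the same sensor noise.
x_imp = x + sigma * rng.normal(size=n)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"flutter RMSE: {rmse(x_hat, x):.3f}   impulse RMSE: {rmse(x_imp, x):.3f}")
```

Sweeping `sigma` (the read-noise stand-in) against the signal level reproduces the qualitative crossover the figure reports: coding wins when read noise dominates and loses its advantage as the signal grows.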
We use simulations to compare the performance of focal sweep and impulse cameras (i.e. a camera with a stopped-down aperture). The top row shows an image blurred by a focal sweep PSF. The second and fourth rows show the results after deblurring with linear inversion and the BM3D algorithm, respectively. The third row shows the results from the impulse camera. The last row shows the results for denoising the images in the third row with the BM3D algorithm. The focal sweep camera always has a higher SNR than impulse imaging, but the improvement becomes negligible when illumination is greater than 100 lux.
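The caption's claim that focal sweep always deblurs well has a simple spectral explanation, sketched below with assumed, illustrative blur sizes: a single defocus (box) PSF has exact zeros in its spectrum, while the PSF integrated over a focal sweep does not, so linear inversion never divides by zero.

```python
import numpy as np

# Compare the MTF of a fixed defocus blur against a focal-sweep PSF
# (average of blurs as the blur size sweeps). 1D sketch with assumed sizes.
n = 256

def box_psf(width):
    p = np.zeros(n)
    p[:width] = 1.0 / width
    return p

static = box_psf(16)                                    # fixed defocus blur
sweep = np.mean([box_psf(w) for w in range(1, 17)], axis=0)  # focal sweep

mtf_static = np.abs(np.fft.rfft(static))
mtf_sweep = np.abs(np.fft.rfft(sweep))

print(f"min MTF, static defocus: {mtf_static.min():.4f}")
print(f"min MTF, focal sweep:    {mtf_sweep.min():.4f}")
```

The static box PSF has spectral nulls (frequencies that are unrecoverable by inversion), while the swept PSF keeps all frequencies above zero, which is why focal sweep deblurring degrades gracefully even though its SNR advantage shrinks in bright light.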
We also provide empirical results using perceptually motivated metrics and regularized deblurring algorithms. Here we show performance for the MSE, SSIM, VIF, and UQI metrics. The top row shows performance for the focal sweep camera, and the bottom row shows performance for the flutter shutter camera. In each plot, the performance gain is plotted on a log scale. The black line corresponds to our derived performance bound. The magenta lines correspond to the performance gain using direct linear inversion. The red, green, and blue curves correspond to reconstructions using Gaussian, TV, and BM3D priors. The bound derived in the paper is empirically found to be an upper bound on performance across all metrics and priors.