Quanta Burst Photography

Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons and capturing their time-of-arrival with high timing precision. While these sensors were limited to single-pixel or low-resolution devices in the past, large (up to 1 megapixel) SPAD arrays have recently been developed. These single-photon cameras (SPCs) are capable of capturing high-speed sequences of binary single-photon images with no read noise. We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions, including ultra low-light and fast motion. Inspired by the recent success of conventional burst photography, we design algorithms that align and merge binary sequences captured by SPCs into intensity images with minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high dynamic range. We theoretically analyze the SNR and dynamic range of quanta burst photography, and identify the imaging regimes where it provides significant benefits. We demonstrate, via a recently developed SPAD array, that the proposed method is able to generate high-quality images for scenes with challenging lighting, complex geometries, high dynamic range and moving objects. With the ongoing development of SPAD arrays, we envision quanta burst photography will find applications in both consumer and scientific photography. Source code: https://github.com/sizhuom/quanta-burst-photography

Publications

Quanta Burst Photography

Sizhuo Ma, Shantanu Gupta, Arin C. Ulku, Claudio Bruschini, Edoardo Charbon, Mohit Gupta

Proc. SIGGRAPH 2020 (ACM Trans. on Graphics)

SIGGRAPH technical papers highlights

University news coverage

Passive Inter-Photon Imaging

Atul Ingle, Trevor Seets, Mauro Buttafava, Shantanu Gupta, Alberto Tosi, Mohit Gupta*, Andreas Velten*

Proc. CVPR 2021

oral presentation

High Flux Passive Imaging with Single-Photon Sensors

Atul Ingle, Andreas Velten, Mohit Gupta

Proc. CVPR 2019

oral presentation

Asynchronous Single-Photon 3D Imaging

Anant Gupta*, Atul Ingle*, Mohit Gupta

Proc. ICCV 2019

Marr Prize (best paper) honorable mention

oral presentation

Photon-Flooded Single-Photon 3D Cameras

Anant Gupta, Atul Ingle, Andreas Velten, Mohit Gupta

Proc. CVPR 2019

oral presentation

Passive Single-Photon Imaging

Conventional image sensors need to collect 100-1000 photons in order to generate a clear image. Single-photon image sensors are novel sensors that are sensitive to individual photon arrivals. They capture binary frames that record whether a photon arrived at each pixel during the exposure time. If we capture a continuous burst of binary frames, the average number of detected photons depends nonlinearly on the incoming light intensity, and a linear intensity image can be recovered by inverting this nonlinear response curve. However, this does not take into account possible scene/camera motion between binary frames, which results in motion blur.
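
To make the inversion concrete, here is a minimal NumPy sketch, assuming an idealized single-photon pixel (Poisson photon arrivals, negligible dark counts, no motion between frames). The function and parameter names are illustrative only; this is not our released code.

    import numpy as np

    def linearize_binary_average(binary_frames, exposure_time, quantum_efficiency=1.0):
        """Recover a linear intensity (photon flux) image from a stack of binary frames."""
        # Fraction of frames in which each pixel detected at least one photon;
        # its expectation is p = 1 - exp(-q * phi * tau) for flux phi.
        detection_rate = np.mean(binary_frames, axis=0)
        # Clip to avoid log(0) at pixels that fired in every single frame.
        detection_rate = np.clip(detection_rate, 0.0, 1.0 - 1e-6)
        # Invert the nonlinear response: phi = -ln(1 - p) / (q * tau).
        return -np.log(1.0 - detection_rate) / (quantum_efficiency * exposure_time)

    # Toy usage: 1000 binary frames of a 256x512 sensor, 10-microsecond exposures.
    frames = np.random.rand(1000, 256, 512) < 0.3   # stand-in for captured binary data
    linear_image = linearize_binary_average(frames, exposure_time=10e-6)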

Proposed: Quanta Burst Photography

We propose quanta burst photography, a computational photography technique that re-aligns the detected photons along motion trajectories to achieve high-quality images in challenging scenarios, including low light and high-speed motion. We develop algorithms that align and merge the binary frames into a high-bit-depth, high-dynamic-range, super-resolved image while minimizing noise and motion blur.

Experimental Results: Various Challenging Scenes

We show experimental results for a range of challenging scenes, including depth variation, specular highlights, complex geometry and fine structures. Naive averaging without motion compensation results in heavily-blurred images. The proposed method is able to align the frames accurately and generate high-quality images with low blur and noise.

Experimental Results: Super-Resolution

By measuring the sub-pixel motion between frames, the proposed method is able to generate a super-resolved image at 2x resolution, with fewer aliasing artifacts and sharper edges.
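
The sketch below illustrates the basic idea with a naive shift-and-add onto a 2x finer grid. Our actual pipeline uses the robust frequency-space merging described in the paper; the frame and shift inputs here are hypothetical (e.g., shifts obtained from block alignment).

    import numpy as np

    def naive_shift_and_add_sr(frames, shifts, scale=2):
        """Accumulate motion-compensated frames onto a grid `scale` times finer.

        frames: sequence of (H, W) binary (or low-bit) images.
        shifts: per-frame (dy, dx) sub-pixel offsets.
        """
        H, W = frames[0].shape
        acc = np.zeros((scale * H, scale * W))
        weight = np.zeros_like(acc)
        ys, xs = np.mgrid[0:H, 0:W]
        for frame, (dy, dx) in zip(frames, shifts):
            # Map each low-res pixel to its motion-compensated high-res location.
            yh = np.clip(np.round((ys + dy) * scale).astype(int), 0, scale * H - 1)
            xh = np.clip(np.round((xs + dx) * scale).astype(int), 0, scale * W - 1)
            np.add.at(acc, (yh, xh), frame)
            np.add.at(weight, (yh, xh), 1)
        # Average the photon counts that landed in each high-res bin.
        return acc / np.maximum(weight, 1)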

Experimental Results: High Dynamic Range

We capture a scene where the light source (the lamp) is directly visible in the image. Quanta burst photography achieves very high dynamic range and is able to recover the details of the filament and the text on the plaque at the same time.

Experimental Results: Nonrigid Scene Motion

Here we show a person plucking the lowest two strings of a guitar. Averaging the captured binary sequence results in either ghosting artifacts or a low SNR. Our method is able to reconstruct a clear, sharp image despite fast and non-rigid scene motion.

When to Use Quanta Burst Photography?

Beyond empirical results, we also propose a theoretical framework for analyzing the SNR of conventional and quanta burst photography under different lighting conditions and motion speeds. The example plots show the SNR difference between current/projected SPADs and commercial sub-electron read-noise CMOS sensors. This framework can be used to guide future development of single-photon cameras for the best performance under given light levels and amounts of motion.
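
For intuition, here is a simplified sketch of such an SNR comparison, assuming ideal sensors (unit quantum efficiency, no dark counts, static scene) and a fixed total capture time. The read-noise value and frame rates are illustrative placeholders, and the formulas are textbook shot-noise/read-noise approximations, not the full motion-aware model in the paper.

    import numpy as np
    import matplotlib.pyplot as plt

    def snr_quanta(flux, total_time, frame_rate):
        """Approximate SNR of an ideal binary single-photon sensor (delta-method estimate)."""
        n = total_time * frame_rate            # number of binary frames
        tau = 1.0 / frame_rate                 # exposure per binary frame
        p = 1.0 - np.exp(-flux * tau)          # per-frame detection probability
        return flux * tau * np.sqrt(n * (1.0 - p) / p)

    def snr_conventional(flux, total_time, frame_rate, read_noise):
        """Approximate SNR of a conventional sensor with shot noise and per-frame read noise."""
        n = total_time * frame_rate            # number of conventional frames
        signal = flux * total_time             # total collected photons
        return signal / np.sqrt(signal + n * read_noise**2)

    flux = np.logspace(0, 6, 200)              # photons per second per pixel
    plt.semilogx(flux, 20 * np.log10(snr_quanta(flux, 0.1, 100e3)), label='SPAD, 100 kfps')
    plt.semilogx(flux, 20 * np.log10(snr_conventional(flux, 0.1, 30, read_noise=1.0)),
                 label='CMOS, 30 fps, 1 e- read noise')
    plt.xlabel('photon flux (photons/s/pixel)')
    plt.ylabel('SNR (dB)')
    plt.legend()
    plt.show()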

Resources

Presentation Slides

FAQs

What are single-photon cameras?

Single-photon cameras are novel cameras that are extremely sensitive to light and can capture individual photon arrivals. Currently, there are two main enabling technologies for large single-photon camera arrays: SPADs and jots. SPADs achieve single-photon sensitivity by amplifying the weak signal from each incident photon via avalanche multiplication, which enables zero read noise and extremely high frame rates (~100 kfps). Jots, on the other hand, amplify the single-photon signal by using an active pixel with high conversion gain (low capacitance). By avoiding avalanche multiplication, jots achieve smaller pixel pitch, higher quantum efficiency and lower dark current, but have lower temporal resolution.

Although the techniques proposed in this project are applicable to both SPADs and jots, we primarily focus on SPADs because their high temporal resolution enables resolving fast motion.

What is quanta burst photography (QBP)?

Quanta burst photography (QBP) is a computational photography technique that measures the motion between binary frames captured by single-photon cameras and merges them into a high-quality intensity image with minimal motion blur, high SNR, and high dynamic range.

How is quanta burst photography different from burst mode on my smartphone camera?

Many commercial smartphone cameras, such as the Apple iPhone and Google Pixel, support a “burst mode” for capturing fast-moving scenes: when the user clicks a photo, they capture a sequence of images in rapid succession and merge the sequence to produce a high-quality image. Our work is inspired by these ideas, but takes them to the extreme limit of individual photons per pixel in each burst frame. The extreme sensitivity and speed of a quanta burst camera enable capturing binary frames at very high frame rates, which means quanta burst photography can capture fast-moving objects in extremely low light, beyond the capabilities of current smartphone cameras.

What about still scenes?

In this work we focus on scenes that contain significant motion. If we know a priori that the scene is perfectly still, quanta burst photography will perform as well as a conventional camera, assuming there is sufficient light in the scene. Under extremely low illumination, below the noise floor of conventional camera pixels, quanta burst photography can still provide improved performance due to its extreme sensitivity and low noise.

Is this method useful for scenarios other than low-light photography?

We have recently shown that, surprisingly, sensing individual photons can provide extended dynamic range under extremely high illumination conditions. By relying on the non-linear film-like response of these single-photon sensors, we show that it is possible to capture much higher brightness levels, orders of magnitude beyond the saturation limit of conventional image sensors. For more details please visit this project page: From One Photon to a Billion: High Flux Passive Imaging with Single-Photon Sensors.

Should I use SPADs or jots for quanta burst photography?

We compare the performance of SPAD- and jot-based quanta burst photography on simulated sequences. Under fast motion, the merged image from jots contains motion blur, while SPADs are able to register the binary images and merge them into a sharp image. On the other hand, when the motion is slow, jots generate a sharper image due to their higher spatial resolution. Therefore, we envision these two technologies complementing each other: SPADs achieve higher performance in high-speed scenarios, while jots, with their projected high resolution, will achieve better image quality for scenes with relatively slow motion and high-frequency texture details. A comparison of the two technologies on real data remains future work.

Is it difficult to align the binary frames?

It is indeed difficult to match the binary frames directly due to their low dynamic range (1 bit) and high randomness. Therefore, we divide the entire sequence into temporal blocks (e.g., every 100 consecutive frames), compute the sum image of each block, and align the sum images. The motion field between blocks is then interpolated to obtain per-frame motion within each block. Finally, a robust frequency-space merging algorithm is used to reduce visual artifacts introduced by alignment errors.
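
Below is a high-level NumPy sketch of this block-wise strategy. It uses whole-image phase correlation and a simple nearest-pixel warp in place of the patch-wise hierarchical alignment and robust frequency-space merging we actually use; the block size and function names are illustrative, and the input is assumed to be a (frames, H, W) binary array.

    import numpy as np

    def phase_correlation_shift(ref, img):
        """Estimate the global 2D translation of img relative to ref via phase correlation."""
        R = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
        corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        H, W = ref.shape
        return (dy - H if dy > H // 2 else dy), (dx - W if dx > W // 2 else dx)

    def quanta_align_and_merge(binary_frames, block_size=100):
        """Toy pipeline: sum frames into blocks, align block sums, interpolate
        per-frame motion, then warp and average every binary frame."""
        n_frames, H, W = binary_frames.shape
        n_blocks = n_frames // block_size
        n_used = n_blocks * block_size

        # 1. Sum each temporal block into a multi-bit image that is easier to align.
        block_sums = binary_frames[:n_used].reshape(
            n_blocks, block_size, H, W).sum(axis=1).astype(float)

        # 2. Align every block sum to a reference block (here, the middle one).
        ref = block_sums[n_blocks // 2]
        block_shifts = np.array(
            [phase_correlation_shift(ref, b) for b in block_sums], dtype=float)

        # 3. Interpolate per-frame shifts from per-block shifts
        #    (assumes roughly constant velocity within each block).
        block_centers = (np.arange(n_blocks) + 0.5) * block_size
        frame_idx = np.arange(n_used)
        frame_shifts = np.stack(
            [np.interp(frame_idx, block_centers, block_shifts[:, i]) for i in range(2)],
            axis=1)

        # 4. Warp each binary frame back to the reference and accumulate.
        acc = np.zeros((H, W))
        for frame, (dy, dx) in zip(binary_frames[:n_used], frame_shifts):
            acc += np.roll(frame, (-int(round(dy)), -int(round(dx))), axis=(0, 1))
        return acc / n_used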

What kinds of motion artifacts can QBP correct?

The proposed algorithm assumes a certain level of spatio-temporal smoothness in the image-space motion field (patch-wise 2D translation and constant velocity within each block). When this assumption does not hold, the discrepancies between the true deformation of a patch and the translational approximation can be mitigated by the robust merging algorithm. The proposed algorithm has shown robust performance for global 6-DoF handheld camera motion and a moderate amount of local, non-rigid scene motion (e.g., the guitar player example). An interesting future research direction is to design optical flow algorithms for aligning images of challenging scenes, including rapidly changing global motion (e.g., a drone) and highly non-rigid scene motion (e.g., cloth, air turbulence).

How fast is the algorithm?

Currently, our algorithm is not optimized for real-time implementation; our MATLAB code takes about 30 minutes to process a sequence of 10,000 binary frames. One important future direction is to implement quanta burst photography in a fast and energy-efficient way, which will be critical for consumer photography applications.

How much data does this method require? Is bandwidth a challenge?

Our quanta burst camera prototype captures about 100,000 frames per second. The high dynamic range and temporal resolution of SPADs come at the cost of a large bandwidth requirement. Currently, the captured binary images are stored in local memory on the camera board and later transferred to a PC via USB 3.0 for offline processing. The bandwidth requirement can be relaxed to some extent by capturing multi-bit images and sacrificing some temporal resolution. The bandwidth of future SPAD sensors can also be improved by using faster interfaces such as PCI Express or CameraLink.
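
As a rough back-of-the-envelope calculation (the 1-megapixel resolution here is a hypothetical round number, not our prototype's actual resolution):

    pixels = 1_000_000            # hypothetical 1-megapixel binary SPAD array
    frames_per_second = 100_000   # ~100 kfps, 1 bit per pixel per frame

    raw_rate = pixels * frames_per_second / 8 / 1e9            # bits -> GB/s
    print(f"binary frames: {raw_rate:.1f} GB/s")                # 12.5 GB/s

    # Summing 255 binary frames into one 8-bit frame on-sensor cuts the data
    # rate by roughly 255/8 ~ 32x, at the cost of 255x lower temporal resolution.
    multibit_rate = pixels * (frames_per_second / 255) * 8 / 8 / 1e9
    print(f"8-bit frames:  {multibit_rate:.2f} GB/s")           # ~0.39 GB/s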

Can quanta burst photography be used for 3D Imaging?

In this work we focus on imaging in passive illumination conditions where the camera does not have its own light source. Indeed, when used in conjunction with an active light source such as a pulsed laser, the high sensitivity and timing resolution of single-photon sensors can be used to capture high resolution 3D information about the scene. For more details about our work in 3D imaging with single-photon cameras, please visit our project page: http://www.SinglePhoton3DImaging.com

Can quanta burst cameras capture only black & white photos?

The hardware demonstration in our paper was limited to black & white photos because our prototype camera only captures monochrome images. This is not a fundamental limitation of our technique. Our technique can be easily extended to color imaging using standard color capture techniques widely used in conventional cameras (e.g., an RGB Bayer filter pattern on the image sensor).
