Every time you take a photograph, your camera detects more than a billion photons. For a basic one-megapixel camera, that's more than 1,000 photons per pixel. Now in a new study, researchers have developed an algorithm that is so efficient that it can generate high-quality 3D images using a single-photon camera that detects just one signal photon per pixel.
The researchers, led by Jeffrey Shapiro, a professor of electrical engineering and computer science at the Massachusetts Institute of Technology (MIT), along with coauthors at MIT, Politecnico di Milano, and Boston University, have published a paper on the new photon-efficient approach to imaging with a single-photon camera in a recent issue of Nature Communications.
Reconstructing a scene's 3D structure and reflectivity accurately with an active imaging system operating at low light levels has wide-ranging applications, from biological imaging to remote sensing. The new system does this with a single-photon camera, generating high-quality depth and reflectivity images from roughly one detected signal photon per pixel. Similar photon efficiency has previously been achieved only with conventional raster-scanning data collection, using single-pixel photon counters capable of ~10-ps time tagging. The new camera's detector array, in contrast, relies on highly parallelized time-to-digital conversions whose photon time-tagging accuracy is limited to roughly a nanosecond. To compensate, the researchers developed an array-specific algorithm that converts these coarsely time-binned photon detections into highly accurate scene depth and reflectivity by exploiting both the transverse smoothness and the longitudinal sparsity of natural scenes. By overcoming the array's coarse time resolution, the framework achieves high photon efficiency in a relatively short acquisition time.
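The physics behind the timing-resolution gap, and the kind of spatial regularization the researchers allude to, can be sketched in a few lines. The code below is an illustrative NumPy toy, not the authors' algorithm: `bin_to_depth` is the standard time-of-flight relation z = ct/2, and `median_smooth` is a deliberately crude stand-in for the transverse-smoothness prior (the paper's actual method also exploits longitudinal sparsity and the photon-detection statistics).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def bin_to_depth(bin_index, bin_width_s):
    """Map a photon's coarse time bin to depth via z = c*t/2 (round trip)."""
    t = (bin_index + 0.5) * bin_width_s  # take the bin centre as the arrival time
    return C * t / 2.0

# Depth quantum implied by the two timing accuracies quoted above:
print(C * 10e-12 / 2)  # ~10-ps time tagging -> ~1.5 mm per bin
print(C * 1e-9 / 2)    # ~1-ns time tagging  -> ~15 cm per bin

def median_smooth(depth, k=1):
    """Crude proxy for the transverse-smoothness prior: replace each pixel
    by the median of its (2k+1) x (2k+1) neighbourhood (edges clipped)."""
    H, W = depth.shape
    out = np.empty_like(depth)
    for i in range(H):
        for j in range(W):
            patch = depth[max(i - k, 0):i + k + 1, max(j - k, 0):j + k + 1]
            out[i, j] = np.median(patch)
    return out

# Toy example: a flat surface at 10 m with one spurious single-photon outlier.
depth = np.full((5, 5), 10.0)
depth[2, 2] = 25.0            # e.g. a background (noise) photon detection
smoothed = median_smooth(depth)
print(smoothed[2, 2])         # the outlier is pulled back to 10.0
```

The printed depth quanta show why ~ns binning alone would limit depth accuracy to roughly 15 cm, and hence why priors such as transverse smoothness are needed to recover fine depth detail from the array's coarse time bins.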