Abstract
Time-of-flight (ToF) imaging is widely used in consumer electronics for depth perception, and compact ToF sensors often represent their data as per-pixel histograms of photon arrival times. These histograms capture rich temporal information that enables advanced computational techniques, such as super-resolution, to reconstruct high-resolution depth images from low-resolution sensors by exploiting the full temporal structure of the data. However, transferring full histogram data is impractical for compact systems because of the sheer data volume. To address this, on-device microcontrollers extract a few key parameters per pixel, such as peak position, signal intensity, and noise level, greatly reducing the amount of data. While this approach performs well for low-resolution tasks like autofocus and obstacle detection, its potential for high-resolution depth imaging has not been fully explored. In this work, we demonstrate that these few extracted parameters are sufficient to reconstruct full high-resolution depth images. We propose a compact, data-efficient neural network that enhances the spatial resolution of a basic ToF sensor from 4 × 4 to 32 × 32 pixels. Using only 3 key parameters per pixel instead of the 144 histogram bins the ToF sensor provides, a 48× reduction in data, our approach achieves high-resolution depth imaging with minimal performance loss relative to methods that rely on full histogram data, demonstrating the feasibility of efficient, high-quality depth reconstruction from only key extracted parameters.
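To make the data reduction concrete, the following is a minimal illustrative sketch, not the paper's actual pipeline: it assumes per-pixel histograms stored as a NumPy array and shows one plausible way to extract the three parameters named above (peak position, signal intensity, noise level). The function name and the specific estimators are hypothetical choices for illustration.

```python
# Hypothetical sketch, NOT the authors' implementation: reduce per-pixel
# 144-bin ToF histograms to the 3 key parameters described in the abstract.
import numpy as np

def extract_parameters(histograms: np.ndarray) -> np.ndarray:
    """Reduce (H, W, 144) photon-arrival histograms to (H, W, 3) features."""
    peak_pos = histograms.argmax(axis=-1)    # bin index of the peak (depth proxy)
    intensity = histograms.max(axis=-1)      # photon count at the peak (signal)
    noise = np.median(histograms, axis=-1)   # robust background/noise estimate
    return np.stack([peak_pos, intensity, noise], axis=-1).astype(np.float32)

# A 4 x 4 sensor with 144 time bins per pixel yields 4 x 4 x 3 input
# features, i.e. the 48x data reduction (144 / 3) cited in the abstract.
rng = np.random.default_rng(0)
hists = rng.poisson(lam=2.0, size=(4, 4, 144))
params = extract_parameters(hists)
print(params.shape)  # (4, 4, 3)
```

These per-pixel features would then serve as the low-resolution input to the super-resolution network in place of the full histograms.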