High-speed vision using single photon sensors

Academic Supervisor: Dr Istvan Gyongy

PhD Student: Jack Iain MacLean

In collaboration with: ST Microelectronics

Summary

Autonomous systems and robotics require the rapid extraction and consistent tracking of objects and features in an environment to operate safely and efficiently. Their vision systems must therefore operate at high frame rates and offer a sufficiently large Field of View (FoV).

Recent developments in the architecture of Time-of-Flight (ToF) sensors utilising Single Photon Avalanche Diodes (SPADs) enable depth data to be captured at hundreds of frames per second.
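The dToF principle underlying these sensors recovers depth from the round-trip time of a laser pulse: depth = c·Δt/2. A minimal sketch (the timing value below is illustrative, not a sensor specification):

```python
# Direct Time-of-Flight (dToF): depth is half the distance light travels
# during the measured round-trip time of a laser pulse.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_time(t_round_trip_s: float) -> float:
    """Convert a photon's round-trip time (seconds) to depth (metres)."""
    return C * t_round_trip_s / 2.0

# A round trip of ~66.7 ns corresponds to a depth of roughly 10 m.
print(depth_from_time(66.7e-9))
```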

However, this high frame rate cannot currently be fully exploited due to bottlenecks inherent in off-chip data processing. SPAD-based sensors also suffer from high power consumption because of the large data volumes produced by the third (depth) dimension. The goal of this project is to use Neural Networks to reduce the power consumption and latency of the SPAD-based dToF sensor.

Traditionally, the problem of high data throughput has been mitigated using on-chip histogramming, which compresses the detected photon arrival times into the bins of a histogram in memory. This histogramming significantly reduces the data bottleneck, enabling real-time depth imaging.
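The histogramming step described above can be sketched as follows; the bin count and bin width here are illustrative, not the parameters of any particular sensor:

```python
import numpy as np

# Illustrative histogram parameters (not a real sensor configuration).
N_BINS = 64
BIN_WIDTH_S = 1e-9  # 1 ns time bins

def histogram_timestamps(timestamps_s, n_bins=N_BINS, bin_width_s=BIN_WIDTH_S):
    """Accumulate photon arrival timestamps into fixed-width histogram bins."""
    bins = np.zeros(n_bins, dtype=np.uint16)
    idx = (np.asarray(timestamps_s) / bin_width_s).astype(int)
    idx = idx[(idx >= 0) & (idx < n_bins)]   # drop out-of-range events
    np.add.at(bins, idx, 1)                  # unbuffered in-place increment
    return bins

# Two photons near 1.5 ns and one near 30 ns land in bins 1 and 30.
h = histogram_timestamps([1.5e-9, 1.6e-9, 30.2e-9])
```

Each raw timestamp is thus reduced to a single counter increment, which is what makes the on-chip compression effective.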

However, this comes at the cost of increased power consumption and silicon area due to the added requirement of on-chip memory and Time-to-Digital Converters (TDCs). Our goal with this project is to implement a TDC-less (and therefore histogram-less) dToF sensor.

To bypass the use of a TDC and histogram memory, we use an event-based Spiking Neural Network (SNN) to process SPAD events (caused by the arrival of photons) directly. The network is trained on synthetic SPAD events, and while it achieves five-times lower precision in depth prediction than a classic centre-of-mass (CoM) algorithm, it matches the CoM's Mean Absolute Error with estimated faster processing speeds and significantly lower power consumption.
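For reference, the classic CoM baseline estimates depth from the intensity-weighted mean bin around the histogram peak. A minimal sketch, assuming illustrative bin width and window parameters:

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
BIN_WIDTH_S = 1e-9     # illustrative 1 ns bins

def com_depth(hist, bin_width_s=BIN_WIDTH_S, window=5):
    """Estimate depth via a centre-of-mass around the histogram peak."""
    hist = np.asarray(hist, dtype=float)
    peak = int(np.argmax(hist))
    lo = max(0, peak - window)
    hi = min(len(hist), peak + window + 1)
    idx = np.arange(lo, hi)
    weights = hist[lo:hi]
    com_bin = (idx * weights).sum() / weights.sum()  # weighted mean bin
    t_round_trip = (com_bin + 0.5) * bin_width_s     # bin centre -> time
    return C * t_round_trip / 2.0                    # time -> depth
```

Note that this baseline presupposes the histogram, and therefore the TDCs and memory, that the SNN approach is designed to eliminate.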

Key results/outcomes

Trained and tested the SNN on synthetic data:

  • The SNN was able to determine surface depth from detected photon events using a network with only 3,506 parameters, under ambient light of 30 klux and surface reflectivities as low as 25 %.

  • It achieved a precision of 0.1 m at a depth of 10 m and a mean error of less than 0.01 m.

  • The SNN has an estimated latency of 1 clock cycle and energy consumption of 204 pJ per depth measurement.

  • Please see https://arxiv.org/abs/2401.10793 for more information.
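The event-driven processing style of an SNN can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron; the decay, weight, and threshold values below are illustrative toy parameters, not those of the 3,506-parameter network above:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks each timestep, integrates weighted input spikes, and fires (then
# resets) when it crosses the threshold. Parameters are illustrative.
def lif_neuron(spike_train, decay=0.9, weight=0.5, threshold=1.0):
    """Return the output spike times for a binary input spike train."""
    v = 0.0
    out = []
    for t, s in enumerate(spike_train):
        v = decay * v + weight * s  # leak, then integrate the input
        if v >= threshold:          # fire and reset
            out.append(t)
            v = 0.0
    return out

# Three input spikes in a row drive the neuron over threshold at t = 2.
print(lif_neuron([1, 1, 1, 0, 0]))
```

Because computation only occurs when photon events arrive, this style of processing is well suited to sparse SPAD data and to low-power hardware implementation.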

Publications

Measurement and Instrumentation in Machine Vision - Ch.1 - Machine Learning Approaches for Single Photon Direct Time of Flight Imaging (Routledge, 2024)

Contact details

Academic Supervisor: Dr Istvan Gyongy

Email: istvan.gyongy@ed.ac.uk

PhD Student: Jack Iain MacLean

Email: J.I.MacLean@sms.ed.ac.uk