
Time of flight LiDAR Digital Signal Processing

Updated: Sep 5, 2019


The use of LiDAR (short for Light Detection and Ranging) is becoming widespread in applications such as autonomous vehicles, high-resolution 2D/3D mapping, and security. Unlike RADARs, LiDAR systems make use of the infrared part of the EM spectrum. This allows them to operate at much shorter wavelengths and thus capture smaller environmental details than RADARs. LiDAR designers face challenges ranging from the handling of high-frequency currents at the output of photodiode elements to the extraction of relevant information from the environment surrounding the system.


The objective of this article is to offer insight into some of those challenges. To do so, we will start with a brief review of the so-called time of flight LiDAR. Then, in the next part of this article, we will address the more in-depth challenges of LiDAR signal acquisition and processing. Understanding these challenges should help you improve your design, which ultimately results in better performance for your LiDAR application.


Time of Flight LiDAR Basics


Put simply, a time of flight (TOF) LiDAR estimates the distance between itself and objects in front of it by measuring the TOF, i.e., the time between the moment the signal was emitted and the moment one of its reflections (also known as echoes) returns to the system. An example of this is presented in figure 1, where the LiDAR system sends a signal which is reflected off a person and returned to the LiDAR.











Figure 1 - Example of ToF measurement


After measuring the time of flight, the distance between the LiDAR and the object is obtained using the equation

distance = (speed × time of flight) / 2

where speed is defined as the speed of light in the propagation medium (typically air or vacuum) and where we must divide by two to account for the round trip. For instance, given a time of flight of 100 nanoseconds and assuming the speed of light in vacuum, the object distance would be about 15 meters. The measurement illustrated in figure 1 is typically performed several thousand times per second in a LiDAR system using a well-chosen combination of components and subsystems.
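The distance equation can be sketched in a few lines of code. This is a minimal illustration reproducing the 100 ns example from the text; the function name is ours, not from any LiDAR library.

```python
# Speed of light in vacuum; a real system would use the speed in the
# actual propagation medium (e.g., air).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(tof_seconds, speed=SPEED_OF_LIGHT):
    """Convert a round-trip time of flight into a one-way distance."""
    return speed * tof_seconds / 2.0

print(tof_to_distance(100e-9))  # ~14.99 m, i.e., the ~15 m of the example
```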


The architecture of a TOF LiDAR is illustrated in figure 2. LiDAR systems can generally be divided into an optical design, an analog front end and a digital back end. The complexity of each of these typically depends on the type of LiDAR. Several more parts can be added, but these are usually common ground among all types of LiDARs.
















Figure 2 - TOF LiDAR Architecture


The analog front end is generally made of, among other subsystems, an IR emitter source and an IR photodetector. Since LiDARs use their own illumination source, they are much less sensitive to external light variations than cameras. In practice, when properly designed, there is no significant difference between day and night measurements.


Emitter architecture: The emitter subsystem is designed to emit several laser pulses per second and serves as the illumination source generating the Field of Illumination (FOI), i.e., the zone in the environment around the sensor where signals are sent. This subsystem combines fast light-emitting diodes or laser diodes with a fast-switching high-voltage drive. The emitter subsystem is responsible for several properties of the emitted signal such as rise time, pulse width and intensity. These parameters are typically chosen at design time, and a well-designed emitter circuit will transfer them from electrical energy to light energy with high fidelity. In the TOF LiDAR signal acquisition section, we will discuss how the emitter circuit is controlled by the digital back end to emit laser pulses at a given frequency, and in the TOF LiDAR signal processing section, we will discuss how the emitted signal's rise time, pulse width and intensity affect the system's accuracy.


Receiver architecture: The receiver subsystem is designed to capture the echoes (reflected signals) coming from the LiDAR Field Of View (FOV). The first stage of a receiver subsystem is usually some type of photodiode element, such as a PIN photodiode or a SiPM, which converts light energy into electrical energy in the form of a high-frequency, low-intensity current. In practice, several of these photodiodes can be combined in a single LiDAR unit to improve the FOV angular resolution. The current signal is first converted into a voltage signal to ease its processing. Then, several analog processing stages can be used for amplification or other signal handling operations. A typical receiver processing chain for one photodiode element is illustrated in figure 3. All these stages need to be carefully designed to maximize the signal-to-noise ratio, minimize the receiver-induced noise and guarantee signal integrity. One common challenge in LiDAR analog design is to optimize the receiver dynamic range so that signal saturation (or, more generally, information loss) is minimized. Without going into electronic details, we shall come back to these concepts in the next part of this article and look at how they affect LiDAR performance.













Figure 3 - Typical receiver subsystem
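The dynamic range problem described above can be sketched numerically. The model below is a deliberately simplified illustration with hypothetical component values (feedback resistance, gain, ADC full scale); it only shows how a strong echo clips while a weak one stays within range.

```python
def receiver_chain(photocurrent_amps, r_feedback_ohms=10e3,
                   gain=20.0, v_fullscale=1.0):
    """Model one receiver channel: transimpedance conversion,
    amplification, then saturation at the ADC full scale."""
    v_tia = photocurrent_amps * r_feedback_ohms  # current -> voltage (TIA stage)
    v_amp = v_tia * gain                         # further analog amplification
    return min(v_amp, v_fullscale)               # clipping: information loss

weak_echo = receiver_chain(1e-6)    # ~0.2 V: well inside the dynamic range
strong_echo = receiver_chain(1e-5)  # saturates at 1.0 V: echo shape is lost
```

Tuning the gain and full scale so that both echoes survive is exactly the dynamic range optimization mentioned in the text.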


The optical design can be seen as the “optical glue” between the emitter subsystem and the receiver subsystem of the analog front end. Its role is to shape the fields of view and illumination, co-register them in space (align them) and filter out wavelengths that are not of interest. From now on, we will suppose the FOI and FOV to be perfectly aligned in space (the ideal case) and refer only to the FOV. Figure 4 shows an example of what a typical LiDAR FOV might look like. Each rectangle represents a pixel. A pixel is the smallest unit of space in the FOV; it defines the minimal resolution of the FOV (just like the number of pixels in a camera system). Pixel height (Δϕ) and pixel width (Δθ) are defined in degrees and are referred to as the angular resolution.






Figure 4 - LiDAR field of view example


The number of pixels equals the number of “pixel lines” times the number of “pixel columns” (nlines × ncolumns). In the case of figure 4, the field of view has 4 pixel lines and 1 pixel column (4 × 1) and thus a total of 4 pixels. Assuming a continuous FOV (no dead zone), the number of lines and columns is related to the field of view size by

nlines = FOVv / Δϕ and ncolumns = FOVh / Δθ

where FOVv and FOVh are the total vertical and horizontal field of view angles.


Here, we shall make a distinction between flash LiDARs and scanned LiDARs. A flash LiDAR typically has a fixed FOV, which means it is always looking in the same direction. A scanned LiDAR is basically a flash LiDAR with added features to perform a scan over the FOV. This scanning ability can have the effect of increasing the total field of view. For instance, the LiDAR of figure 4 could rotate around itself to increase the horizontal component of its FOV to 360 degrees. In the same example, if the pixel width is defined as 0.1°, the resulting FOV would have dimensions 4 × 3600, meaning the LiDAR scans 3600 pixel columns around itself! This has the effect of greatly improving the field of view and LiDAR angular accuracy.
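The 3600-column figure follows directly from the scan geometry; as a quick check:

```python
# Hypothetical scanned LiDAR from the text: a 0.1-degree pixel width swept
# over one full 360-degree rotation.
pixel_width_deg = 0.1
n_columns = round(360.0 / pixel_width_deg)
print(n_columns)  # 3600 pixel columns per revolution
```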


When pixels get larger, either because we are far away from the system or because they are defined that way, it becomes hard to estimate with precision the angular position of an object. It also becomes possible for multiple echoes to be returned by a single pixel, creating interesting challenges when processing the received signal. We shall come back to this later in the TOF LiDAR signal processing section.


The digital back end is often composed of a signal acquisition manager subsystem and a digital signal processing subsystem. The complexity of each varies greatly and depends on the intended application. These two subsystems will be reviewed in detail in the next parts of this article. Meanwhile, we will conclude this part by presenting two techniques used in the LiDAR industry to digitize analog signals:


The use of a time-to-digital converter, a.k.a. TDC: This piece of hardware is generally designed around an analog comparator which compares the analog signal amplitude to a certain threshold. When the signal amplitude reaches the threshold, the comparator outputs a logical one to indicate that a detection has occurred. Although very little information about the analog signal is captured when using a TDC, it has the great advantage of being much cheaper than an ADC.


The use of an analog-to-digital converter, a.k.a. ADC: Analog-to-digital converters are complex pieces of hardware which rely on various methods to convert an analog signal into a digital signal. Although ADCs are more expensive than TDCs, they allow much more information to transfer from the analog domain to the digital domain. For instance, signal intensity and/or signal-to-noise ratio (SNR) can be estimated when working with a digital signal sampled by an ADC. We shall discuss some concepts you should consider to choose the right ADC for your LiDAR in the next parts of this article.
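To illustrate what an ADC makes possible that a TDC does not, here is a crude sketch of estimating peak intensity and SNR from sampled data. The noise-floor heuristic (using the first few samples as signal-free) and all names are our own simplifying assumptions.

```python
import math

def peak_and_snr(samples, noise_samples=3):
    """Estimate echo peak amplitude and a crude SNR from ADC samples.

    The first few samples are assumed signal-free and used as the noise
    estimate; real systems track the noise floor more carefully.
    """
    noise = samples[:noise_samples]
    noise_rms = math.sqrt(sum(x * x for x in noise) / len(noise))
    peak = max(samples)
    snr_db = 20.0 * math.log10(peak / noise_rms)
    return peak, snr_db

adc_samples = [0.01, -0.02, 0.015, 0.2, 0.8, 0.4, 0.05]
peak, snr_db = peak_and_snr(adc_samples)  # peak amplitude and SNR in dB
```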


Next Part: TOF LiDAR signal acquisition and signal processing
