Mathematical approach and detecting devices for automotive LiDAR
by Slawomir Piatek & Jake Li
Many advancements have been made in sensor technology, imaging, radar, LiDAR (light detection and ranging), electronics, GPS, and artificial intelligence. These improved technologies have enabled the development of dozens of advanced driver assistance systems (ADAS), for example, collision avoidance, blind spot monitoring, lane departure warning, and park assist. Synchronizing the operation of such systems, known as sensor fusion, opens the intriguing possibility of fully autonomous or self-driving vehicles able to monitor their surroundings, warn drivers of potential road hazards, or even take evasive action to prevent a collision independently of the driver. If no driver is present, the vehicle's artificial intelligence must make the optimal decisions on its own.
Self-driving capability requires an autonomous vehicle to differentiate and recognize objects ahead while traveling at high speed. Using distance-gauging technology, it must rapidly construct a 3D map out to a distance of about 100 m, as well as create high-angular-resolution imagery out to 200−250 m.
One of several basic approaches to this task measures round-trip time of flight (ToF) of a pulse of energy traveling from the autonomous vehicle to the target and back to the autonomous vehicle. Distance to the reflection point can be calculated when one knows the speed of the pulse through the air. This pulse can be ultrasound (sonar), radio wave (radar), or light (LiDAR).
Of these three ToF techniques, LiDAR is the best choice for high-angular-resolution imagery because its smaller beam divergence (set by diffraction) allows better discrimination of adjacent objects than radar (see Figure 1). This higher angular resolution is especially important at high speed, where it provides more time to respond to a potential hazard such as a head-on collision.
Figure 1: Beam divergence depends on the ratio of the wavelength to the aperture diameter of the emitting antenna (radar) or lens (LiDAR). This ratio is larger for radar, producing larger beam divergence and, therefore, lower angular resolution. In the figure, the radar (black) would not be able to differentiate between the two cars, while the LiDAR (red) would.
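The scale of the difference is easy to estimate from the θ ≈ λ/D relation described in the caption. The following sketch uses illustrative parameters that are assumptions, not values from the article: a 77 GHz automotive radar with a 10 cm antenna and a 905 nm LiDAR with a 2.5 cm lens.

```python
# Diffraction-limited beam divergence: theta ≈ lambda / D (small-angle approximation).
# All numeric parameters below are illustrative assumptions.

C = 3.0e8  # speed of light, m/s

def divergence_rad(wavelength_m, aperture_m):
    """Approximate full-angle beam divergence, radians."""
    return wavelength_m / aperture_m

def spot_diameter(wavelength_m, aperture_m, range_m):
    """Beam footprint diameter at a given range (ignoring the initial aperture size)."""
    return divergence_rad(wavelength_m, aperture_m) * range_m

radar_wl = C / 77e9   # 77 GHz radar -> ~3.9 mm wavelength
lidar_wl = 905e-9     # 905 nm LiDAR

print(f"radar spot at 200 m: {spot_diameter(radar_wl, 0.10, 200):.1f} m")
print(f"lidar spot at 200 m: {spot_diameter(lidar_wl, 0.025, 200)*100:.1f} cm")
```

With these assumed apertures, the radar footprint at 200 m is several meters wide, larger than a car, while the LiDAR spot is on the order of a centimeter, which is why only the LiDAR can separate the two vehicles in Figure 1.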
Figure 2 explains the basic concept of ToF LiDAR. The laser emits a pulse of light of duration τ. At the instant of emission, the internal clock in the timing circuit activates. The pulse of light reflects from the target and reaches the photodetector. The electrical output from the photodetector deactivates the clock. Electronically measured round-trip time of flight Δt allows one to calculate the distance to the reflection point. If the laser and the photodetector are practically at the same location, the distance is given by
d = (1/2)(c/n)Δt (Equation 1)
where c is the speed of light in vacuum and n is the index of refraction of the propagation medium (for air, n ≈ 1). Two factors affect the distance resolution Δd: the uncertainty Δ(Δt) in measuring Δt, and the spatial width w of the pulse, w = (c/n)τ, if the diameter of the laser spot is larger than the target feature to be resolved. The first factor implies Δd = (1/2)(c/n)Δ(Δt), whereas the second implies Δd = (1/2)w = (1/2)(c/n)τ. If the distance is to be measured with a resolution of 5 cm, these relations separately imply that Δ(Δt) ≈ 300 ps and τ ≈ 300 ps. ToF LiDAR therefore requires photodetectors and detection electronics with small time jitter (the main contributor to Δ(Δt)) and lasers capable of emitting short-duration pulses. Picosecond lasers are very expensive; a laser in a typical automotive LiDAR produces pulses of about 4-ns duration, so minimal beam divergence is essential.
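Equation 1 and the resolution relations above can be checked numerically. A minimal sketch (the 667-ns example round-trip time is an illustrative assumption):

```python
# Time-of-flight distance (Equation 1): d = (1/2)(c/n)*dt,
# and the timing-limited resolution: delta_d = (1/2)(c/n)*delta_dt.

C = 299_792_458.0  # speed of light in vacuum, m/s
N_AIR = 1.0003     # approximate refractive index of air

def distance_m(round_trip_s, n=N_AIR):
    """Distance to the reflection point from the round-trip time of flight."""
    return 0.5 * (C / n) * round_trip_s

def resolution_m(timing_s, n=N_AIR):
    """Distance resolution set by timing jitter (or, equally, by pulse duration)."""
    return 0.5 * (C / n) * timing_s

# A target ~100 m away returns the pulse after roughly 667 ns:
print(distance_m(667e-9))       # ~100 m
# 300 ps of jitter or pulse width limits resolution to ~4.5-5 cm:
print(resolution_m(300e-12))
```

The second result shows why ~300 ps figures appear in the text: halving the round trip and multiplying by c turns hundreds of picoseconds into centimeters.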
One of the most critical choices the designer of an automotive LiDAR system must make is the light wavelength. Several factors constrain this choice: safety to human vision, interaction with the atmosphere, availability of lasers, and availability of photodetectors. The two most popular wavelengths are 905 nm and 1550 nm.
The primary advantage of 905 nm is that silicon absorbs photons at this wavelength, while it does not at 1550 nm. Silicon-based photodetectors are generally less expensive than the InGaAs infrared photodetectors needed to detect 1550-nm light. However, the higher human-vision safety of 1550 nm allows the use of lasers with a larger radiant energy per pulse, an important factor in the photon budget.
Atmospheric attenuation (under all weather conditions), scattering from airborne particles, and reflectance from target surfaces are all wavelength dependent. This is a complex issue for automotive LiDAR because of the myriad possible weather conditions and types of reflecting surface. Under most realistic settings, however, the loss of light at 905 nm is less than at 1550 nm because water absorbs more strongly at 1550 nm than at 905 nm [1].
Only a small fraction of the photons emitted in a pulse ever reach the active area of the photodetector. If the atmospheric attenuation does not vary along the pulse's path, the beam divergence of the laser light is negligible, the illumination spot is smaller than the target, the angle of incidence is zero, and the reflection is Lambertian, then the received optical peak power P(R) is:
P(R) = P0 ρ A0/(πR²) η0 exp(−2γR) (Equation 2)
where P0 is the optical peak power of the emitted laser pulse, ρ is the reflectivity of the target, A0 is the receiver's aperture area, η0 is the spectral transmission of the detection optics, and γ is the atmospheric extinction coefficient. Equation 2 shows that the received power decreases rapidly with increasing distance R. For a reasonable choice of the parameters and R = 100 m, the number of returning photons on the detector's active area is on the order of a few hundred to a few thousand, out of more than 10²⁰ typically emitted. These photons compete for detection with background photons carrying no useful information. A narrow-band filter can reduce the amount of background light reaching the detector, but it cannot be reduced to zero. The background lowers the detection dynamic range and raises the noise (background photon shot noise). It is noteworthy that terrestrial solar irradiance under typical conditions is lower at 1550 nm than at 905 nm.
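Equation 2 can be evaluated directly to see the 1/R² falloff. The parameter values below are illustrative assumptions (a dark 10%-reflectivity target, a 25 mm receiver aperture, clear air), not figures from the article:

```python
import math

# Received peak power per Equation 2 (Lambertian target):
# P(R) = P0 * rho * A0 / (pi * R^2) * eta0 * exp(-2 * gamma * R)

def received_power_w(P0, rho, A0, eta0, gamma, R):
    """Received optical peak power in watts at range R (meters)."""
    return P0 * rho * A0 / (math.pi * R**2) * eta0 * math.exp(-2.0 * gamma * R)

# Illustrative assumed parameters:
P0 = 75.0                         # emitted peak power, W
rho = 0.1                         # target reflectivity (dark surface)
A0 = math.pi * (0.025 / 2) ** 2   # 25 mm receiver aperture -> area, m^2
eta0 = 0.9                        # detection-optics transmission
gamma = 1e-4                      # clear-air extinction coefficient, 1/m

for R in (10, 50, 100):
    print(f"R = {R:3d} m -> P = {received_power_w(P0, rho, A0, eta0, gamma, R):.3e} W")
```

Even before weather losses, the received power at 100 m is roughly a hundred times smaller than at 10 m, which is the photon-budget problem the text describes.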
Creating a 3D map of the surrounding 360°×20° strip requires either that a laser beam (or multiple beams) rasters the strip or that the laser floods the scene with light. The former approach is known as "scanning" LiDAR and the latter as "flash" LiDAR. There are two approaches to constructing a scanning LiDAR. In one, exemplified by Velodyne LiDAR, the LiDAR platform, located on the roof of the autonomous vehicle, rotates at 300-900 RPM while emitting pulses from 64 laser diodes operating at 905 nm. Each beam has a dedicated APD detector. Another idea is to use a rotating multi-faceted mirror, with each facet at a slightly different tilt angle, to steer a single beam of pulses over different azimuthal and declinational angles. A major disadvantage of both designs is their moving parts. In a mechanically rough driving environment, such moving parts represent a failure risk.
In another approach to scanning LiDAR, a beam can be steered by a tiny mirror, part of a micro-electro-mechanical system (MEMS), whose 2D orientation can be controlled electrically. MEMS mirrors can also be used to steer multiple beams simultaneously. Although technically there are still moving parts (the oscillating mirror), the amplitude of the oscillation is small and the frequency is high enough to prevent mechanical resonances between the MEMS mirror and the car. The confined geometry of the MEMS mirror constrains the mirror's oscillation amplitude, which translates into a limited field of view, a disadvantage of this approach. Nevertheless, this method is gaining interest due to its low cost and proven technology.
Optical phased arrays (OPAs), the third competing scanning-LiDAR technique, are becoming popular due to their truly "no-moving-parts" design and are therefore considered more reliable than the alternatives. An OPA is an array of optical antenna elements that are equally illuminated by coherent light. Beam steering is achieved by independently controlling the phase and amplitude of the light re-emitted by each element. Far-field interference produces the desired illumination pattern, from a single beam to multiple beams. The loss of light in the various components of an OPA restricts the usable range of a LiDAR built around one.
Flash LiDAR floods the scene with light, with the illuminated region matched to the field of view of the detector. The detector is an array of APDs at the focal plane of the detection optics. Each APD independently measures ToF to the target feature imaged onto it. This is a truly "no-moving-parts" approach in which the tangential resolution is limited by the pixel size of the 2D detector. The major disadvantage is the photon budget: once the distance exceeds a few tens of meters, the amount of returning light is too small for reliable detection. The budget can be improved at the expense of tangential resolution if, instead of flooding the scene with photons, structured light – a grid of points – illuminates it. Vertical-cavity surface-emitting lasers (VCSELs) make it possible to create projectors emitting thousands of beams simultaneously in different directions.
ToF LiDAR is susceptible to noise because of the weakness of the returned pulses and the wide bandwidth of the detection electronics. Threshold triggering can produce errors in measuring Δt. Frequency-modulated continuous-wave (FMCW) LiDAR is an interesting alternative that is gaining interest. It is inspired by FMCW radar, or chirped radar, which is in common use today. In chirped radar, the antenna continuously emits a radio wave whose frequency is modulated, for example, linearly increasing from f1 to f2 over a time T and then linearly decreasing from f2 back to f1 over the same time T. If the wave reflects from a moving object at some distance and returns to the emission point, its instantaneous frequency will differ from the one being emitted at that instant. The difference is due to two factors: the distance to the object and its relative radial velocity. One can electronically measure the frequency difference and simultaneously calculate the object's distance and velocity. Figure 3 illustrates the concept of chirped radar.
There are several approaches to FMCW LiDAR. In the simplest design, one chirp-modulates the intensity of the beam of light that illuminates the target. This modulation frequency is subject to the same effects (e.g., the Doppler effect) as the carrier frequency in FMCW radar. The returned light is detected by a photodetector, which recovers the modulation frequency. The output is amplified and mixed with a local-oscillator signal, allowing measurement of the frequency shift. From the shift, one can calculate the distance and speed of the target.
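The standard triangular-chirp recovery of distance and velocity can be sketched as follows. This is a minimal illustration, not the article's design; the chirp slope, modulation frequency, and target parameters are all assumptions chosen for the example.

```python
# FMCW range/velocity recovery from a triangular chirp.
# On the rising chirp the beat frequency is f_up = f_range - f_doppler;
# on the falling chirp it is f_down = f_range + f_doppler (approaching target).

C = 3.0e8  # speed of light, m/s

def range_and_velocity(f_up, f_down, slope_hz_per_s, carrier_hz):
    """Recover distance (m) and radial velocity (m/s, approaching positive)."""
    f_range = 0.5 * (f_up + f_down)     # distance-induced frequency shift
    f_doppler = 0.5 * (f_down - f_up)   # Doppler-induced frequency shift
    distance = C * f_range / (2.0 * slope_hz_per_s)
    velocity = C * f_doppler / (2.0 * carrier_hz)
    return distance, velocity

# Assumed example: 1 GHz sweep in 10 us, 1 GHz modulation "carrier",
# target at 50 m approaching at 20 m/s. Synthesize the two beat frequencies:
S = 1e9 / 10e-6
fc = 1e9
f_r = 2 * 50.0 * S / C     # range term
f_d = 2 * 20.0 * fc / C    # Doppler term

R, v = range_and_velocity(f_r - f_d, f_r + f_d, S, fc)
print(R, v)   # recovers ~50 m and ~20 m/s
```

Averaging and differencing the up- and down-chirp beats is what separates the two entangled unknowns: the sum isolates the distance term and the difference isolates the Doppler term.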
FMCW LiDAR has some limitations. Compared to a ToF LiDAR, it requires more computational power and, therefore, it is slower in generating a full 3D surround view. In addition, the accuracy of the measurements is very sensitive to linearity of the chirp ramp.
Although designing a functional LiDAR is challenging, none of the specific challenges are insurmountable. As the research continues, we are getting closer to the time when the majority of cars driving off the assembly line will be fully autonomous.
[1] J. Wojtanowski, M. Zygmunt, M. Kaszczuk, Z. Mierczyk, and M. Muzal, "Comparison of 905 nm and 1550 nm semiconductor laser rangefinders' performance deterioration due to adverse environmental conditions," Opto-Electron. Rev. 22(3), 183-190 (2014)