True-solid-state – The future of automotive LiDARs



It was late 2012 when the attention of Professor Dirk Van Dijck and serial entrepreneur Johan Van den Bossche fell on a photo of a Google autonomous vehicle. The first prototype from the Mountain View giant had just been revealed, making the front page of many media outlets.
On the roof of the modified Toyota Prius, a spinning Velodyne LiDAR enabled the car to see its surroundings in 3D, providing the data necessary for the localization and positioning algorithms and for the first layer of its perception stack. The two men began imagining how that vehicle would evolve under the pressure of the automotive industry’s intense competition.
One of the challenges in transportation has always been to move people and things while minimizing the number of moving parts. This concept is also clear to my 8-year-old, who wishes teleportation would replace cars: less noise, less pollution, and of course greater speed. Mechanical systems are made of many components that require energy to accelerate mass and to overcome friction and inertia, with a consequent loss of efficiency and reliability and an increase in overall cost.
In the evolution of automotive systems, the first elements to be optimized have often been moving parts. There are many examples of this, such as adaptive headlamps, designed to better illuminate curves and avoid glare with complex systems of shutters and gimbals, now replaced by thousands of solid-state LEDs.

In that first Google AV, the complete perception stack relied on a moving element: a LiDAR spinning to scan the surroundings.
This same weakness in the first AV LiDAR was identified by many other innovators, and it sparked a dramatic search for ways to improve LiDAR and the birth of dozens of startups. Many saw the opportunity, and many took the path of incremental innovation focused on the scanning principle: reducing masses and miniaturizing components (for example with the introduction of MEMS, micro-electro-mechanical systems) while improving performance.
The professor and the entrepreneur, both with numerous patents, technology startups, and success stories on their résumés, took a different path, knowing that innovations with higher potential often come with dramatic decisions. If the weakness of scanning is the movement, why not eliminate the need for scanning completely? Why not shoot thousands of laser beams in a single shot instead of using a few rotating lasers? The idea of the true-solid-state LiDAR was born, together with the first LiDAR company developing it, XenomatiX. It was 2013.
True-solid-state LiDAR
The “solid-state” label has been used to classify different types of LiDARs, leading to some confusion even within the LiDAR community. “Solid state” can describe the type of laser source and detector, e.g., when semiconductors are used. MEMS-based LiDARs are often claimed to fall into this category despite relying on moving micro-elements to change the laser direction. Does a solid-state system not imply the absence of moving parts? Likewise, the recent FMCW technology adopted by big players like Aurora, Intel, and Aeva claims to be solid-state. Clearly this is not correct when a scanning mechanism is still required to steer the laser wave. Some of the FMCW LiDARs using OPA (Optical Phased Array) have a very low technology readiness level and many technical aspects still to resolve. Compared with the original Velodyne LiDAR, these technologies greatly reduce the number of components, size, weight, and cost, often with improved performance, but they do not change the paradigm of scanning the scene and do not bring the scalability and simplification needed to make LiDAR accessible for low-cost applications.
Needing to distinguish its technology, XenomatiX introduced the attribute “true” to identify solid-state LiDAR systems that meet the definition in all aspects: XenomatiX technology is true-solid-state because it is built with a semiconductor-based laser source and detector, and because it is realized without scanning or moving components, delivering the simplicity and scalability required for reliable mass production.
More recently, this approach has been adopted by an increasing number of players. Ibeo with its Digital LiDAR and Ouster with its Fully Solid State LiDAR both promise solutions that will soon be ready for mass production. Apple uses the same approach in the iPad LiDAR.
OEMs are interested in true-solid-state. A patent assigned in 2017 to General Motors describes a modular LiDAR, without moving parts, integrated and distributed around the vehicle with 360° coverage.
More recently, Great Wall Motors announced a partnership with Ibeo.
Marelli started a Joint Development Project with XenomatiX in 2020.
For Dirk Van Dijck and Johan Van den Bossche, these facts confirm the vision they have held since 2012.
Multi-beam
A peculiarity of true-solid-state is the illumination of the scene with thousands of beams, focusing coherent light into discrete spots that are measured simultaneously and converted into a point cloud. This multi-beam technology also differentiates it from flash LiDARs. Like true-solid-state LiDARs, traditional flash LiDARs do not use mechanical components to illuminate the scene stepwise. Instead, they use lenses to spread the light over the whole scene. By illuminating not only the observed targets but also points that are never measured, flash LiDARs lose the optical efficiency necessary to reach mid and long ranges; they are largely used for applications requiring at most 20 to 50 meters of range. The multi-beam approach solves this limitation by concentrating the emitted photons only in the spots that will be measured, ensuring high resolution and reaching the long ranges required at highway speeds. The only limit to reaching farther distances is dictated by eye-safety regulations. For applications that do not require a Class 1 laser classification, multi-beam can be effective at 500 m and beyond.
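As a rough back-of-envelope illustration of this optical-efficiency argument (the numbers below are invented for the example, not XenomatiX figures), a few lines of Python make the comparison concrete:

# Same emitted power, concentrated into measured spots (multi-beam)
# versus spread over the whole scene (flash). Illustrative values only.
total_power_w = 5.0
measured_spots = 20_000       # discrete spots actually measured
scene_points = 1_000_000      # area over which a flash system spreads its light
power_per_spot_multibeam = total_power_w / measured_spots
power_per_spot_flash = total_power_w / scene_points
print(power_per_spot_multibeam / power_per_spot_flash)  # 50.0: 50x more photons per measured spot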

Emitter – VCSEL
A multi-beam pattern can be generated from a millimeter-size chip where thousands of VCSEL (Vertical Cavity Surface Emitting Laser) emitters are packaged in arrays. VCSEL arrays emit laser beams perpendicular to the semiconductor surface with high optical power in short pulses, without overheating and with limited electrical power requirements. A VCSEL array of 20,000 to 40,000 lasers requires a few watts, about the same consumption as a single automotive LED low-beam and five times less than a halogen one.
VCSEL technology and performance have improved exponentially in the last few years, driven by their use in optical communication as signal generators over optical fiber; these advancements found immediate use in the automotive and consumer LiDAR world.
Detectors
Detectors are the most critical components of LiDAR technology, as they require high sensitivity and a large dynamic range combined with high resolution. Spinning, mechanical, and solid-state LiDARs are mostly based on Avalanche Photodiodes (APD) or Single-Photon Avalanche Diodes (SPAD). Such sensors, once hit by light, produce high-gain currents that can be measured and post-processed to calculate, directly or indirectly, the time of flight of the laser pulses bouncing off a target, and consequently its distance.
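As a reference point, the direct time-of-flight relation these detectors exploit is straightforward; a minimal sketch in Python (illustrative values, not tied to any specific device):

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    # The pulse travels to the target and back, hence the division by 2.
    return C * round_trip_time_s / 2.0

print(tof_distance(667e-9))  # a ~667 ns round trip corresponds to ~100 m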
In the context of LiDAR detectors, XenomatiX took a completely different approach. Automotive has very strict requirements driven by high-volume production and long vehicle lifetimes. Dirk Van Dijck and Johan Van den Bossche set the goal of designing a LiDAR system using already-proven, cost-effective technology. Standard camera CMOS sensors have all the characteristics relevant for automotive. The two men found a way to implement time-of-flight measurements in an innovative pixel design, creating an APS (active pixel sensor) based on CMOS. Together with a smart algorithm, the pixels count the photons, subtract the background noise, and calculate the distance to a target. A megapixel APS can capture 36,000 3D points per frame, more than double that of a SPAD CMOS.
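The actual XenomatiX pixel design and algorithm are proprietary, but a hedged sketch of the steps described above (count photons, subtract background, derive distance), assuming a common two-window indirect time-of-flight scheme, could look like this:

C = 299_792_458.0  # speed of light, m/s

def pixel_distance(counts_a, counts_b, background, pulse_width_s):
    # Gated photon counts from two consecutive time windows, background-corrected.
    a = max(counts_a - background, 0)
    b = max(counts_b - background, 0)
    if a + b == 0:
        return None  # no laser return detected at this pixel
    delay = pulse_width_s * b / (a + b)  # fraction of the pulse falling in window B
    return C * delay / 2.0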
A first significant consequence is that the point cloud is captured in global-shutter mode, eliminating the risk of image distortion. Traditional mid- and long-range LiDARs must rebuild the 3D image because they capture each point at a different moment. With a scanning approach, the image is captured not all at once but top-down or left-to-right, resulting in a rolling-shutter mode. Rolling shutter can distort the shape of very fast targets, such as a vehicle on a highway. It is the same effect that, in a photo, deforms the blades of a helicopter into curved shapes: while the sensor was capturing the center of the rotor, the tip of the blade was moving away. In contrast, a global shutter captures the full image in the same instant. Rolling shutter and the resulting image distortion can cause errors in object detection and classification software, and are typical of spinning and scanning LiDARs.
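A back-of-envelope example shows the size of the effect at highway speed (figures invented for illustration):

speed_m_s = 30.0   # oncoming vehicle at roughly 108 km/h
scan_time_s = 0.1  # one full top-to-bottom sweep at 10 Hz
print(speed_m_s * scan_time_s)  # 3.0 m of apparent skew between first and last scan lines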
The second consequence of this revolutionary approach is inherent sensor fusion. When the APS CMOS is not being used to collect a 3D point cloud, between laser pulses, it can capture a 2D image like a standard camera. The same pixel used to calculate the position of an object is also used to capture the image of that object. Camera and LiDAR are fused together, physically and inherently. To the relief of system and software developers, the APS CMOS makes it possible to skip the heavy process of sensor fusion and eliminates any possibility of parallax error or blind spots for one of the two types of data. Autonomous-system developers can now get both high-resolution images and point clouds from the same imager, greatly simplifying the hardware and software required to perceive the environment. With a frame rate of 25 Hz and three sensor heads, such systems can capture up to 2.7 million points per second distributed over a 180-degree field of view, performance superior to high-end spinning LiDARs.
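The quoted throughput follows directly from the per-frame figures:

points_per_frame = 36_000  # 3D points per frame from one megapixel APS
frame_rate_hz = 25
sensor_heads = 3
print(points_per_frame * frame_rate_hz * sensor_heads)  # 2,700,000 points per second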
A third advantage brought by the APS CMOS is robustness against interference and other LiDARs’ signals. Each pixel detecting a laser and calculating the target distance is calibrated to recognize only the signal of a specific emitter and its unique position in space. If a pixel is illuminated by other LiDARs or by signals from different directions, it can, also by comparing information from the surrounding pixels, recognize the disturbance and discard the measurement.
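A hedged sketch of such a plausibility check (threshold and logic are assumptions for illustration, not the actual XenomatiX algorithm):

def accept_return(distance_m, from_calibrated_direction, neighbour_distances_m, max_jump_m=5.0):
    # Keep a return only if it arrives from the pixel's calibrated emitter
    # direction and does not sharply disagree with every surrounding pixel.
    if not from_calibrated_direction:
        return False
    if not neighbour_distances_m:
        return True
    return any(abs(distance_m - d) <= max_jump_m for d in neighbour_distances_m)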
Multi-beam LiDARs require a minimal number of components. With two lenses, two chips on a PCBA, and a housing, these sensors have a bill of materials comparable to a stereo camera, at a similar price point. With the same assembly lines and calibration tools already in place for stereo cameras, production can easily be scaled up. The use of existing chip technology and assembly concepts enabled XenomatiX to build on a mature ecosystem of suppliers and to project a unit cost in the one-hundred-dollar range, making LiDAR technology accessible beyond luxury vehicles.
4D AI
The APS CMOS used to compute the point cloud can also capture object reflectivity and 2D images.
Standard AI approaches for object detection and classification have been based either on shapes and colors analyzed from 2D camera images or, more recently, on geometrical information captured in 3D point clouds by LiDARs or stereo cameras. The most complex algorithms combine both approaches, comparing and validating the results obtained by the two systems in what is called sensor fusion.

True-solid-state LiDAR instead enables AI algorithms based on 3D geometrical information plus the object-surface reflectivity captured by the sensor. 3D AI using purely geometrical information requires high resolution and precision to distinguish between very close objects, e.g., when counting the number of people in a crowded space. This is because the AI is trained to recognize objects’ shapes and dimensions, and when two objects overlap, their shapes become confused. Adding information about surface properties enables the algorithms to be trained on features within a shape, such as a face versus a torso, and to use these features to, for example, distinguish and count the people in a crowd. Artificial intelligence using geometry plus reflectivity is called 4D AI. Because it relies on data captured by LiDARs, which are insensitive to external lighting conditions, it proves equally effective by day and by night.
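In data terms, a “4D” point is simply geometry plus a reflectivity channel per point; a minimal sketch (field names and layout are assumptions, not a XenomatiX data format):

import numpy as np

point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("reflectivity", np.float32),  # the fourth dimension used by 4D AI
])

cloud = np.zeros(36_000, dtype=point_dtype)  # one APS frame of 3D points
# A 4D algorithm consumes geometry and reflectivity together, e.g. as an (N, 4) array:
features = np.stack([cloud[name] for name in ("x", "y", "z", "reflectivity")], axis=1)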

Integrability/Modularity
But what are the practical applications of these benefits that are attracting the interest of the automotive industry? Two prerequisites that cannot be ignored in the transportation world are style and efficiency driven by aerodynamics. Style has always been center stage for automobiles. Fuel efficiency and aerodynamics have been prominent for years. No longer relegated to sports cars, this aspect is optimized across vehicle segments, and especially in EVs, to increase range and EPA ratings.
The AV or ADAS sensor stack has to be accepted by established OEMs’ styling and aerodynamics departments. As such, sensor integration is an absolute necessity. On this point, scanning systems exhibit some weaknesses. Such devices rely on expensive components and complex architectures that only become cost-efficient if they can cover a large field of view. On the other hand, a large field of view requires broad visibility and the elimination of any possible “dead angle”. Unfortunately, this is quite incompatible with the shapes of today’s vehicles, and, based on trends seen at auto shows, this appears to hold true for the foreseeable future as well. Tesla adopted a solution that hides and integrates cameras all around the vehicle, with style and aerodynamics driving the camera locations, not vice versa. Why should we expect LiDAR to follow a different destiny in the OEM hierarchy?
Looking at the General Motors patent, we count 10 LiDARs per side, or 20 for full 360-degree coverage. Considering a $2,000 target cost for a full LiDAR suite, $100 per sensor becomes a necessary enabler for higher levels of autonomy. Such a tight limit can be achieved by multi-beam true-solid-state LiDARs, thanks to the entrepreneurial adventure started in 2012 by Dirk Van Dijck and Johan Van den Bossche.

 
References

US9831630B2, GM Global Technology Operations LLC, filed 2015-08-06: “Low cost small size LiDAR for automotive”.

 
by Jacopo Alaimo, NA Sales and Business Development Manager, and Filip Geuens, CEO, at XenomatiX.

