
Technically Speaking: Motional’s Imaging Radar Architecture Paves the Road for Major Improvements

August 13, 2024 | Chulong Chen, Senior Principal Engineer, Team Lead, Autonomy

Lidar has long been a cornerstone sensing modality in autonomous vehicles (AVs). As the primary source of high-resolution, three-dimensional information, lidar has been integral to achieving safe and comfortable SAE Level 4 self-driving capabilities.

However, even with per-unit costs dropping, lidar remains one of the most expensive components of the AV hardware stack, driving the need for alternatives. By rethinking system architecture and using machine learning (ML) to analyze and process previously discarded low-level radar data, Motional is working on dramatically enhancing radar performance to the point that it rivals the point clouds produced by lidar. This development opens the door to leveraging radars as a primary sensing modality for Level 4 AVs, with the potential to significantly reduce AV hardware costs while maintaining system safety and performance.

SENSING A SHIFT  

Motional's current AV platform, based on the all-electric Hyundai IONIQ 5, uses a suite of sensors, including short and long-range lidars, mid-range radars, and multiple cameras. This sensor suite gives the AV a 360-degree view of the driving environment, allowing it to detect objects as far away as 250 meters. This multi-modal sensor configuration provides critical redundancy and overlap across variable driving conditions and scenarios. 

However, the long-term commercial success of AVs will require reducing AV hardware costs and improving reliability without sacrificing safety or ride quality. Lidar is by far the most expensive sensor on AVs, and sensor optimization will be critical to realizing the cost savings needed to make AVs viable for scaled commercial deployments. Motional believes lower-cost sensors such as radars can assume a more prominent role in future sensor architectures, even potentially becoming the primary sensing modality for AVs.

In pursuit of this mission, Motional believes we can unlock the full potential of radar sensors through a data- and machine-learning-centric, end-to-end sensing and perception approach. By engineering sophisticated hardware, data, and ML architectures, we can empower our AVs to interpret radar signals with exceptional precision, enabling robust real-time decision-making when radar is fused with vision sensor data. This approach reduces reliance on lidar, paving the way for scalable cost efficiencies without compromising our high standards of safety or functionality.

E2E Radar Sensing & Perception for Autonomous Vehicles

Millimeter wave radar, or mmWave radar, distinguishes itself from lidar in several crucial capabilities. For one, it detects objects at distances well beyond 200 meters and reliably identifies smaller objects, such as road obstacles (cinder blocks or tires, for example) and pedestrians, that might otherwise fall between lidar scan lines. Additionally, radar directly measures an object's velocity through Doppler measurement and maintains performance even in adverse weather conditions. These features are critical for safety and for defining the operational design domain (ODD) of an AV system. Radar has also long been used in military, industrial, and commercial applications, and has demonstrated durability.
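
For context, the Doppler relation behind that velocity measurement is simple: the radial velocity equals the measured Doppler shift times half the radar wavelength. The snippet below is a minimal sketch of that relation; the 77 GHz carrier is a typical automotive assumption, not a statement about Motional's hardware.

```python
# Minimal sketch of the Doppler-to-velocity relation used by mmWave radars.
# The 77 GHz carrier frequency is an assumed, typical automotive value.
C = 299_792_458.0     # speed of light, m/s
F_CARRIER = 77e9      # assumed carrier frequency, Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity (m/s) from a measured Doppler shift: v = f_d * wavelength / 2."""
    wavelength = C / F_CARRIER
    return doppler_shift_hz * wavelength / 2.0

# Example: a 1 kHz Doppler shift at 77 GHz corresponds to roughly 1.9 m/s (~7 km/h).
print(radial_velocity(1_000.0))
```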

With all those advantages, what has prevented conventional radar from becoming a primary perception modality? The main limitation has traditionally been a sparse point cloud that hampers downstream tasks, specifically perception, prediction, and planning. This sparseness stems from a legacy architecture constrained by processing bottlenecks that keeps only the strongest signals while setting aside the semantic information contained in low-level data. By rethinking this architecture, we have the potential to unlock radar's full perception capabilities.
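
To make that bottleneck concrete, the sketch below implements a textbook cell-averaging CFAR detector, the kind of thresholding stage a conventional radar DSP uses to keep only the strongest returns. It is a generic illustration with assumed parameter values, not Motional's processing chain.

```python
import numpy as np

def ca_cfar(power: np.ndarray, guard: int = 2, train: int = 8, scale: float = 4.0) -> np.ndarray:
    """Cell-averaging CFAR over a 1-D power profile; returns a boolean detection mask."""
    n = power.size
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        # Estimate local noise from training cells on both sides, skipping guard cells.
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        noise = np.concatenate([left, right]).mean()
        detections[i] = power[i] > scale * noise
    return detections

# Anything below the adaptive threshold (e.g., a weak return from a distant pedestrian)
# is dropped, along with the rest of the low-level spectrum it came from.
```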

Centralized Low-Level Radar Architecture

Part of overcoming this limitation is developing a novel architecture that replaces traditional embedded Digital Signal Processors (DSPs) with a centralized low-level radar architecture. Conventional architectures lose much of the low-level data to bottlenecks in embedded DSPs and Ethernet-based datalinks, as shown below.

Figure 1 shows a conventional radar architecture in an L2/L3 ADAS system and illustrates its inherent limitations. These systems are constrained by the computing capacity and bandwidth of embedded hardware, leading to severe information bottlenecks. A typical radar unit in this configuration produces limited output, generating merely a few hundred detections per frame, or a few thousand detections per second.

Figure 2 shows Motional's next-generation imaging radar architecture. Unlike conventional systems, this design has the AV's high-performance computer (HPC) process low-level data from all radars in tandem. The result is high-fidelity radar imagery, equivalent to more than 20 million points per second.
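
A quick back-of-envelope comparison of the two output rates described in Figures 1 and 2, assuming a nominal 20 Hz update rate and reading "a few hundred detections per frame" as roughly 300:

```python
# Rough comparison of conventional vs. imaging radar output rates.
# Only the ~20 million points/s figure and "a few hundred detections per frame"
# come from the text; the 300 detections/frame and 20 Hz frame rate are assumptions.
CONVENTIONAL_DETECTIONS_PER_FRAME = 300
FRAME_RATE_HZ = 20
IMAGING_POINTS_PER_SECOND = 20_000_000

conventional_per_second = CONVENTIONAL_DETECTIONS_PER_FRAME * FRAME_RATE_HZ  # ~6,000/s
ratio = IMAGING_POINTS_PER_SECOND / conventional_per_second                  # ~3,300x
print(conventional_per_second, round(ratio))
```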

The new architecture uses a constellation of mmWave radar frontends that generate data at multi-Gbps rates, capturing a wealth of information. By removing the bottlenecks described above, it allows the AV hardware and software stack to fully leverage the rich data embedded in the low-level radar output, significantly enhancing the system's ability to interpret complex environments and enabling a comprehensive understanding of the vehicle's surroundings with minimal latency.
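
To give a sense of where multi-Gbps rates come from, here is a rough estimate of the raw I/Q data rate of a single hypothetical mmWave frontend. Every parameter is an assumption chosen to be representative, not a Motional specification.

```python
# Back-of-envelope raw data rate for one hypothetical MIMO mmWave radar frontend.
RX_CHANNELS = 16         # receive channels (assumed)
SAMPLE_RATE_HZ = 20e6    # ADC sample rate per channel (assumed)
BITS_PER_SAMPLE = 16     # bits per I or Q sample (assumed)
IQ_COMPONENTS = 2        # complex sampling: I and Q
DUTY_CYCLE = 0.5         # fraction of time chirps are actively sampled (assumed)

bits_per_second = RX_CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE * IQ_COMPONENTS * DUTY_CYCLE
print(f"{bits_per_second / 1e9:.1f} Gbps")   # ~5.1 Gbps from a single frontend
```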

This new approach to radar and HPC architecture marks a significant step toward enhancing radar's contribution to the overall functionality and safety of AVs.

End-to-End Optimized Advanced Radar Imaging

Motional's approach to radar imaging differs significantly from conventional radar DSP pipelines. Leveraging the power of the HPC, Motional employs an ML module to aggregate all low-level radar data in a multi-channel, multi-scan (MCMS) fashion. This technique produces high-fidelity, low-latency radar images at a rate of 20 Hz, achieving a level of detail comparable to a lidar system generating 2 million points per second.
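
For readers unfamiliar with low-level radar data, the sketch below shows a classical, non-learned way to turn a raw ADC cube into a range-Doppler-azimuth power image and naively average several scans. Motional's MCMS aggregation is a learned module, so treat this purely as a simplified baseline illustrating the kind of data involved; the shapes and steps are assumptions.

```python
import numpy as np

def radar_cube_to_image(adc_cube: np.ndarray) -> np.ndarray:
    """Classical FFT pipeline: complex ADC cube (channels, chirps, samples) -> power image."""
    range_fft = np.fft.fft(adc_cube, axis=2)      # range bins within each chirp
    doppler_fft = np.fft.fft(range_fft, axis=1)   # velocity bins across chirps
    angle_fft = np.fft.fft(doppler_fft, axis=0)   # azimuth bins across channels
    return np.abs(angle_fft) ** 2                 # (angle, doppler, range) power

def aggregate_scans(scans: list[np.ndarray]) -> np.ndarray:
    """Naive multi-scan aggregation: average the power images of consecutive scans."""
    return np.mean([radar_cube_to_image(s) for s in scans], axis=0)
```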

The architecture described above constitutes a fully Software Defined Radar (SDR) system combined with an end-to-end ML pipeline, enabling ongoing innovation throughout the entire lifecycle of the AV platform. To fuel this vision, however, the system needs datasets to learn from.

Motional has curated a groundbreaking multi-modality dataset that integrates low-level radar output with camera and lidar data. This petabyte-scale collection of sensor data is used to train and refine ML algorithms. It is further enriched with detailed annotations generated through a scalable pipeline that employs both automated labeling and manual methods. This comprehensive dataset empowers Motional to iteratively enhance the sensing and perception pipeline, thereby accelerating the pace at which our vehicles learn.
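
As a purely hypothetical illustration, one synchronized, annotated sample in such a dataset might look like the record below; the field names and shapes are assumptions, not Motional's actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultiModalSample:
    """Hypothetical layout of one synchronized, annotated multi-modality sample."""
    timestamp_us: int                        # shared capture timestamp
    radar_adc: np.ndarray                    # complex low-level radar cube (channel, chirp, sample)
    camera_images: dict[str, np.ndarray]     # camera name -> H x W x 3 image
    lidar_points: np.ndarray                 # N x 4 point cloud (x, y, z, intensity)
    boxes_3d: np.ndarray                     # M x 7 annotated boxes (x, y, z, l, w, h, yaw)
    box_labels: list[str]                    # class name per annotated box
```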

Radar Perception Model Consuming Low-Level Radar Image

Motional's nuScenes datasets have, over the years, had a significant influence on both industrial and academic advancements in perception technologies. However, many existing model architectures and methods are primarily optimized for lidar and traditional radar systems. Recognizing the need to fulfill the potential of next-generation radar systems, Motional developed end-to-end imaging-perception deep learning models [1] that showcase the feasibility of our approach by training a radar perception model directly from the radar's raw ADC (Analog-to-Digital Converter) output.
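
As a rough illustration of what training "directly from the radar's raw ADC output" can look like, here is a deliberately small detector that consumes stacked I/Q channels. The layer choices, shapes, and output head are assumptions and do not reflect the architecture described in [1].

```python
import torch
import torch.nn as nn

class RadarADCDetector(nn.Module):
    """Toy end-to-end model: raw I/Q radar samples in, per-cell class logits out."""
    def __init__(self, n_channels: int = 16, n_classes: int = 3):
        super().__init__()
        # Real and imaginary parts of each receive channel become input feature planes.
        self.backbone = nn.Sequential(
            nn.Conv2d(2 * n_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(128, n_classes, kernel_size=1)  # per-cell classification

    def forward(self, adc_iq: torch.Tensor) -> torch.Tensor:
        # adc_iq: (batch, 2 * n_channels, n_chirps, n_samples)
        return self.head(self.backbone(adc_iq))

# Example: logits = RadarADCDetector()(torch.randn(1, 32, 128, 256))
```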

This advancement is evidenced by a landmark proof point: our end-to-end radar sensing-perception pipeline attained a threefold improvement in Average Precision (AP) on Vulnerable Road User (VRU) detection tasks using a production-quality dataset. This success marks a significant milestone, overcoming what was once considered a major challenge in radar-based perception systems.

Figure: The left image shows a row of vehicles parked along a curb identified by an AV’s perception system with bounding boxes. The center image shows how the vehicle’s current radar system does not smoothly pick up the street curb behind some of the vehicles. The right image shows how Motional’s novel approach to radar provides a more complete, detailed view of the roadway. 

Conclusion 

Creating a scalable, cost-optimized AV requires creative solutions to overcome technological limitations.  

Motional's advancements in radar technology, including the creation of a centralized low-level radar architecture, an extensive low-level radar dataset, and end-to-end optimized radar imaging, collectively redefine what we understand to be possible with radar perception.

As we look to the future, the integration of camera and radar systems is poised to be a cornerstone of our next platform, enhancing the reliability and versatility of the sensory framework. This synergy is expected to significantly boost perception capabilities, contributing to the scalability and success of Motional’s future autonomous vehicles.