Our new planning dataset will allow a driverless vehicle to find its way through a dynamic environment full of obstacles and ever-changing circumstances – like a human driver does every day.
Autonomous vehicles are at a crossroads between perception and planning. In 2018, when the Machine Learning team at Motional examined the competitive landscape, most of the attention was focused on perception, or the act of teaching AVs how to see cars, bicycles, pedestrians, and other objects in images and laser scans. Teaching autonomous vehicles to identify objects in their vicinity lets self-driving systems paint a detailed picture of their surroundings and understand them better.
While others kept their valuable perception datasets proprietary, Motional was the first to make ours publicly available. This effort, called nuScenes™, has been downloaded by more than 12,000 students, researchers, and developers and referenced in more than 600 academic publications since its release in 2019. More importantly, nuScenes ushered in an era in which almost all autonomous vehicle companies now share their datasets to advance the state of the art as a community, rather than going it alone.
Notable Improvements in Perception
Through the combination of a standardized testing protocol and AV developers sharing their recipes for success, we have seen notable improvements in perception. For example, academic research has shown that the performance of the leading method for identifying bicycles under certain conditions has improved fourfold over the last three years.
While perception endows an autonomous vehicle with the ability to see the world, planning helps the vehicle navigate it more safely. Now that AVs can more accurately identify what is around them, Motional is setting its sights on autonomous vehicle planning, or pathfinding.
The goal in planning is to teach the vehicle how to find its way through a dynamic environment full of obstacles and ever-changing circumstances – like a human driver does every day. We call this project nuPlan™, and it will be available to the public starting in late 2021.
World's First Benchmark for AV Planning
nuPlan is the world's first benchmark for autonomous vehicle planning. It contains a large-scale machine learning dataset and a toolkit for measuring the performance of planning techniques – essentially a virtual driving test. It was conceived to help build self-driving systems that consistently deliver safe, rule-compliant performance in complex, cluttered environments. The nuPlan dataset also models the uncertain, interactive behavior of other traffic participants found in the real world.
While ML-based planning has been studied extensively, the lack of published datasets that provide a common framework for closed-loop evaluation has limited progress in this area. Motional's goal is to fill this gap by providing an ML-based planning dataset, closed-loop evaluation tools, and planning-related metrics.
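To give a sense of what a common planning benchmark might evaluate against, here is a minimal sketch of a planner interface in Python. The class and method names (`Planner`, `initialize`, `plan`) are illustrative assumptions, not the actual nuPlan toolkit API.

```python
from abc import ABC, abstractmethod

class Planner(ABC):
    """Hypothetical interface a benchmarked planner might implement.
    (Illustrative only; not the actual nuPlan devkit API.)"""

    @abstractmethod
    def initialize(self, map_data, route_goal):
        """Receive the map and the mission goal before driving starts."""

    @abstractmethod
    def plan(self, observation):
        """Given the current world state (ego pose, nearby agents, traffic
        lights), return a trajectory for the next few seconds."""
```

Any planner conforming to such an interface, learned or rule-based, could then be scored by the same evaluation harness.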
Scaling to All the Complexities Drivers Encounter
The nuPlan dataset allows scaling to all the complexities that drivers encounter in the real world, including the extreme edge-case scenarios that are experienced only once every 1,000 hours of driving. No humans annotate this data; labeling is done entirely by machines, at nearly the same quality. The dataset is also far larger than nuScenes, containing around 500 million images and 100 million lidar scans.
To put this in perspective, the average American spends 52 minutes a day driving. Because the nuScenes dataset focused on highly curated data with high-quality annotations, it comprised only five hours of driving data. By comparison, the nuPlan dataset delivers 1,500 hours, which corresponds to roughly 4.7 years of average driving.
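The 4.7-year figure follows directly from those two numbers; a quick sanity check:

```python
# Sanity-check the 4.7-year claim: 1,500 hours of driving data,
# consumed at the average American's rate of 52 minutes per day.
dataset_minutes = 1_500 * 60             # 90,000 minutes of data
minutes_per_day = 52                     # average daily driving time
days = dataset_minutes / minutes_per_day
print(f"{days:.0f} days, or about {days / 365:.1f} years")  # ~1731 days, ~4.7 years
```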
Why is more data needed? Consider the diverse environments in which people drive. Downtown Boston, residential areas in Pittsburgh, startup hubs in Singapore, and the Las Vegas Strip – all areas where Motional is testing robotaxis – each present unique driving situations.
These range from different traffic-flow patterns (as in Singapore, with its left-hand traffic) to roadway signage and signals, intersection types, terrain, and other characteristics. Even within the same city, our cars encounter different driving situations every day, because traffic densities, road layouts, topography, and traffic laws vary, and each area has its own rules of the road.
Closed Loop versus Open Loop
The other significant advantage of nuPlan is its closed-loop testing capability, an improvement over the open-loop testing used in existing benchmarks.
In open-loop evaluation, the input is replayed from a recorded log, so it does not depend on the system's output. Open-loop evaluation is often associated with imitation learning, since it simply checks whether the planned route is similar to the one the human driver took.
In closed-loop evaluation, by contrast, the planned route is used to control the vehicle, which may therefore deviate from the route the driver originally took; other road users then react to the vehicle's actual behavior.
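To make the distinction concrete, here is a minimal sketch of both evaluation styles in Python. The `planner` and `simulator` objects and their methods are hypothetical stand-ins, not the nuPlan toolkit's actual interfaces.

```python
import numpy as np

def open_loop_error(planned_xy: np.ndarray, expert_xy: np.ndarray) -> float:
    """Open loop: score the planned trajectory by its similarity to the
    human driver's logged trajectory (an imitation-style metric)."""
    return float(np.mean(np.linalg.norm(planned_xy - expert_xy, axis=1)))

def closed_loop_rollout(planner, simulator, horizon_steps: int):
    """Closed loop: the planner's output actually drives the ego vehicle,
    and the simulated world reacts to it at every step."""
    state = simulator.reset()
    for _ in range(horizon_steps):
        trajectory = planner.plan(state)    # replan from the *current* state
        state = simulator.step(trajectory)  # ego moves; other agents react
    return simulator.compute_metrics()      # e.g., safety and comfort scores
```

The key difference is the feedback loop: in the closed-loop rollout, each plan changes the state the next plan is computed from, so errors compound realistically instead of being hidden by the log.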
Among other things, our closed-loop evaluation tests how much distance a car keeps when overtaking cyclists, whether passengers are likely to get carsick from high lateral acceleration in turns, and how the vehicle slows down when approaching jaywalking pedestrians. It scores planning metrics for these scenarios, as well as traffic-rule compliance, vehicle dynamics, and goal achievement – the same criteria an experienced, safe human driver would apply.
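A couple of these scenario metrics could look like the following sketch; the thresholds and function names are illustrative assumptions, not nuPlan's actual metric definitions.

```python
import numpy as np

# Illustrative thresholds only; nuPlan defines its own metrics and limits.
MIN_CYCLIST_CLEARANCE_M = 1.5   # desired lateral gap when overtaking a cyclist
MAX_COMFORT_LAT_ACCEL = 2.0     # m/s^2; higher values risk passenger discomfort

def kept_cyclist_clearance(ego_xy: np.ndarray, cyclist_xy: np.ndarray) -> bool:
    """Safety metric: did the ego keep enough distance from the cyclist?"""
    gaps = np.linalg.norm(ego_xy - cyclist_xy, axis=1)
    return float(gaps.min()) >= MIN_CYCLIST_CLEARANCE_M

def ride_was_comfortable(lateral_accel: np.ndarray) -> bool:
    """Comfort metric: lateral acceleration stayed below the limit in turns."""
    return float(np.abs(lateral_accel).max()) <= MAX_COMFORT_LAT_ACCEL
```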
The Final Frontier in Autonomous Driving
With nuScenes, we moved the needle in perception by helping driverless vehicles better perceive the world and the other road users around them, and we shared this knowledge to advance autonomous technology. With nuPlan, we hope that a large-scale dataset and a common benchmark will likewise pave the way for progress in planning – perhaps one of the final frontiers in autonomous driving.