(This was originally published on June 22, 2021, and was updated on Dec. 10, 2021 to reflect the initial release of nuPlan.)
Our new open-source planning dataset will allow our industry and researchers to better understand how a driverless vehicle can find its way through a dynamic environment full of obstacles and ever-changing circumstances – like a human driver does every day.
Autonomous vehicles are at a crossroads between perception and planning. In 2018, when the Machine Learning team at Motional examined the competitive landscape, most of the attention was focused on perception, or the act of teaching AVs how to see cars, bicycles, pedestrians, and other objects in images and laser scans. By teaching autonomous vehicles to identify objects in their vicinity, self-driving systems can paint a detailed picture and get a better understanding of their surroundings.
While others kept their valuable perception datasets proprietary, Motional was the first to make ours publicly available. This effort, called nuScenes, has been downloaded by more than 16,000 students, researchers, and developers and referenced in more than 1,000 academic publications since its release in 2019. More importantly, nuScenes ushered in an era in which almost all autonomous vehicle companies now share their datasets to advance the state of the art as a community, rather than going at it alone.
World's First Benchmark for AV Planning
Motional today launched the initial version of nuPlan, the world’s first benchmark for autonomous vehicle planning. We are again making this one-of-a-kind dataset available to the public with the goal of improving the way we teach vehicles to find their way through a dynamic environment full of obstacles and ever-changing circumstances – like a human driver does every day.
While ML-based planning has been studied extensively, the lack of published datasets that provide a common framework for closed-loop evaluation has limited progress in this area. nuPlan aims to fill this gap by providing an ML-based planning dataset, closed-loop evaluation capabilities, and planning-related metrics.
nuPlan contains a large-scale machine learning dataset and a toolkit for measuring the performance of planning techniques – essentially a virtual driving test. The dataset contains 1,500 hours of driving data and was collected across four different cities: Boston, Pittsburgh, Las Vegas, and Singapore. This makes nuPlan the world's largest public dataset for autonomous vehicle prediction and planning.
Notable Improvements in Perception
Following the release of nuScenes, we have seen notable improvements in the task of perception through the combination of a standardized testing protocol and AV developers sharing their recipes for success. For example, academic research has shown that the leading perception method to identify bicycles under certain circumstances has increased performance fourfold over the last three years.
While perception endows an autonomous vehicle with the ability to see the world, planning helps the vehicle navigate it more safely. Now that AVs can more accurately identify what is around them, Motional is setting its sights on autonomous vehicle planning, or pathfinding.
Scaling to All the Complexities Drivers Encounter
nuPlan allows scaling to all the complexities that drivers encounter in the real world, including extreme edge-case scenarios that are experienced only once every 1,000 hours of driving. None of this data is annotated by humans; the labels are generated entirely by machine learning, at nearly the same quality as human annotation.
In addition, the dataset is much bigger, containing around 500 million images and 100 million lidar scans. Because nuScenes focused on highly curated data with high-quality annotations, it comprised only five hours of driving data. By comparison, the nuPlan dataset delivers 1,500 hours. To put this in perspective, the average American spends 52 minutes a day driving, so 1,500 hours corresponds to about 4.7 years of average driving.
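As a quick sanity check on that figure (a minimal sketch; the 52-minutes-per-day average is the one cited above):

```python
# How many years of average US driving does 1,500 hours of data cover?
AVG_MINUTES_PER_DAY = 52      # average daily driving time cited above
DATASET_HOURS = 1_500

dataset_minutes = DATASET_HOURS * 60                      # 90,000 minutes
days_of_driving = dataset_minutes / AVG_MINUTES_PER_DAY   # ~1,731 days
years_of_driving = days_of_driving / 365

print(round(years_of_driving, 1))  # → 4.7
```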
Why is more data needed? Consider the diverse environment in which most people drive. Downtown Boston, residential areas in Pittsburgh, startup hubs in Singapore, and the Las Vegas Strip – all areas in which Motional is testing robotaxis – each have unique driving situations. These range from different traffic-flow patterns (as in Singapore, with its left-hand traffic) to roadway signage and signals to various types of intersections, terrain, and other characteristics. Even within the same city, our cars encounter different driving situations every day because traffic densities, road layouts, topography, and traffic laws vary, and each area has its own unique rules of the road.
Closed Loop versus Open Loop
The other significant advantage of nuPlan is its closed-loop testing capability, an improvement over the open-loop testing used in existing benchmarks. In an open-loop system, the input is independent of the system's output. Open-loop evaluation is sometimes called imitation learning, since it simply checks whether the planned route is similar to the one the human driver took. In closed-loop evaluation, the planned route is used to control the vehicle, which may therefore deviate from the route the driver originally took; other (simulated) drivers then react accordingly.
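The difference can be sketched in a few lines of Python. This is an illustrative toy, not the nuPlan devkit's actual API; the `planner` and `sim` objects, their methods, and the `ego_pose` attribute are all assumptions made for the sketch:

```python
import math

def open_loop_error(planned_traj, logged_traj):
    """Open-loop evaluation: replay the recorded world and measure how
    closely the planned trajectory imitates the logged human one.
    The plan never feeds back into what happens next."""
    dists = [math.dist(p, q) for p, q in zip(planned_traj, logged_traj)]
    return sum(dists) / len(dists)

def closed_loop_rollout(planner, sim, steps):
    """Closed-loop evaluation: the plan actually controls the ego vehicle.
    The ego may deviate from the human's route, and the simulated traffic
    reacts to the new ego state at every step."""
    state = sim.reset()
    trajectory = [state.ego_pose]
    for _ in range(steps):
        action = planner.plan(state)   # plan from the current, possibly deviated state
        state = sim.step(action)       # the simulated world responds to the plan
        trajectory.append(state.ego_pose)
    return trajectory
```

The key design point is in `closed_loop_rollout`: because `sim.step` consumes the planner's action, any deviation compounds over time, which is exactly what a log-replay comparison cannot capture.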
Among other things, our closed-loop evaluation tests how much distance a car keeps when overtaking cyclists, whether passengers are likely to get carsick from high acceleration in turns, and how the vehicle slows down when approaching jaywalking pedestrians. It considers planning metrics related to these scenarios as well as traffic rules, vehicle dynamics, and goal achievement, the same way an experienced and safe human driver would.
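A couple of the metrics described above can be sketched as simple checks. The names and thresholds below are hypothetical illustrations, not nuPlan's actual metric definitions:

```python
# Hypothetical thresholds, chosen for illustration only.
COMFORT_LAT_ACCEL = 2.0   # m/s^2: assumed passenger-comfort limit in turns
MIN_CLEARANCE = 1.5       # m: assumed lateral gap when overtaking a cyclist

def is_comfortable_turn(speed_mps, turn_radius_m):
    """Lateral acceleration in a steady turn is a = v^2 / r; flag turns
    that would exceed the comfort limit and risk making passengers carsick."""
    return (speed_mps ** 2) / turn_radius_m <= COMFORT_LAT_ACCEL

def keeps_safe_distance(lateral_gaps_m):
    """True if the ego vehicle kept at least MIN_CLEARANCE from the
    cyclist at every timestep of the overtaking maneuver."""
    return min(lateral_gaps_m) >= MIN_CLEARANCE
```

A closed-loop test would evaluate such checks over the simulated rollout, since the vehicle's actual speeds and gaps only exist once the plan is executed.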
The Final Frontier in Autonomous Driving
With nuScenes, we moved the needle in perception by helping driverless vehicles better view the world and other road users around them, and shared this knowledge with the world to advance autonomous technology. With the introduction of nuPlan, we hope that by providing a large-scale dataset and common benchmark, we will now pave a path towards progress in planning, which is perhaps one of the final frontiers in autonomous driving.
We invite you to follow us on social media @motionaldrive to learn more about how we’re making driverless technology safe, reliable, and accessible. You can also search our engineering job openings on Motional.com/careers.