Driving along The Strip in Las Vegas is unlike driving anyplace else in the world. It’s block after block of iconic hotels and casinos, brightly lit billboards and marquees, and sidewalks filled with crowds of people from all walks of life.
Visitors come from around the world to experience all the lights and sights that make Las Vegas famous. But for drivers, the sensory spectacle can be distracting, especially when trying to safely navigate the busy boulevard.
Not so for Motional’s IONIQ 5 robotaxis. The systems in these all-electric autonomous vehicles (AVs) are trained to focus solely on safely navigating through and around the cars, pedestrians, and other objects in their driving environment, while ignoring everything else that makes Vegas a memorable experience.
“The robotaxis don’t care about which casino has a buffet special, or which singer is coming to town,” said Arren Tuazon, a test engineer with Motional. “The vehicle only cares about what’s happening on the road and sidewalks around it.”
With road deaths still well above decade-ago levels, and distracted driving a leading cause of crashes, Motional sees robotaxis as a way to provide alternative mobility options to tourists and residents while making our streets safer.
According to the National Highway Traffic Safety Administration, more than 19,000 people died in vehicle-related accidents during the first six months of 2023. While down from the 2022 peak, that figure is still a 27 percent increase since 2013. Distracted driving was a factor in almost 10 percent of fatal accidents in 2021, according to federal statistics.
“When we drive, as humans, there are instances where we are distracted,” said Sourabh Vora, the director of perception at Motional. “All the things happening with the car, with our family inside the car, with what’s happening on the side of the road, those are all distractions that can lead to accidents. Meanwhile, the AV remains alert at all times.”
AVs use an interconnected network of high-tech sensors, powerful processors, and integrated controllers to replicate the actions and behaviors of a human driver. The part of the system that identifies objects around the vehicle is called Perception. It takes data from the sensor suite – which consists of cameras, radars, lidars, and even microphones – and creates a 360-degree, 3D view around the vehicle. Onboard processors use advanced artificial intelligence models to detect and classify objects in the driving environment, such as trucks, cars, cyclists, and construction cones. (That information also informs the vehicle’s Prediction and Planning functions, as we explain in more detail in our DriverlessEd educational series.)
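The flow described above – synchronized sensor data in, a classified picture of the surroundings out – can be sketched at a very high level. Everything below (class names, data shapes, labels) is invented purely for illustration; it is not Motional’s actual software.

```python
# A much-simplified sketch of a perception interface: sensors in,
# classified 3D objects out. All names and formats are illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One synchronized snapshot from the sensor suite."""
    camera: list   # e.g., detected image regions
    lidar: list    # e.g., labeled 3D point clusters
    radar: list    # e.g., range/velocity returns

@dataclass
class DetectedObject:
    label: str        # e.g., "car", "cyclist", "cone"
    position: tuple   # (x, y) in meters, vehicle-centered
    confidence: float

def perceive(frame: SensorFrame) -> list:
    """Fuse sensor data into one list of classified objects.

    A real AV runs learned detection models here; this toy version
    just passes through pre-labeled lidar clusters to show the shape
    of the interface.
    """
    objects = []
    for label, pos, conf in frame.lidar:
        objects.append(DetectedObject(label, pos, conf))
    # Downstream, this list feeds the Prediction and Planning functions.
    return objects

frame = SensorFrame(
    camera=[], radar=[],
    lidar=[("car", (12.0, -1.5), 0.97), ("cone", (4.0, 2.0), 0.88)],
)
for obj in perceive(frame):
    print(obj.label, obj.position)
```

The key design point the article describes survives even this toy version: downstream planning code never touches raw pixels or points, only a clean list of classified objects.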
Training the Perception system using machine learning principles requires a large, diverse collection of data; rather than seeing the same car over and over again, AV modeling needs to see all different types of cars and trucks of various shapes and sizes. Motional tests its AVs in five cities on two continents, generating large and diverse data samples.
“We have lots and lots of data we use to train our modeling,” said Vora.
The more varied and unusual the scenarios the vehicles encounter, the better the overall system performs. Motional also tests its vehicles in Boston, Pittsburgh, Los Angeles, and Singapore, but the uniqueness of Las Vegas strengthens the diversity of the testing data.
“All sorts of crazy things happen in Vegas,” said Vora. “That’s why it’s really nice to test here.”
KNOWING THE DIFFERENCE
Anyone driving down The Strip is bombarded with giant video billboards advertising upcoming concerts, dining deals, and a host of other attractions. Exotic cars merge in and out of traffic alongside mobile billboards, delivery trucks, and three-wheel motorcycles. There’s a lot competing for one's attention. But the Motional robotaxi can ignore it all.
“The vehicle’s perception system only classifies objects within the drivable area,” said Tuazon. “If it doesn’t negatively impact the AV’s ability to provide a safe and comfortable ride, it doesn’t care about it.”
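Tuazon’s point – that objects outside the drivable area are effectively ignored – can be illustrated with a toy filter. The rectangular “drivable area” and detection format below are assumptions for this sketch; a real system uses detailed HD-map geometry rather than a simple bounding box.

```python
# Toy illustration of filtering detections to the drivable area.
# The bounds and detection format are invented for this sketch.

DRIVABLE_X = (-3.0, 3.0)    # lateral bounds of the roadway, meters
DRIVABLE_Y = (0.0, 100.0)   # longitudinal bounds ahead of the AV

def in_drivable_area(x: float, y: float) -> bool:
    return (DRIVABLE_X[0] <= x <= DRIVABLE_X[1]
            and DRIVABLE_Y[0] <= y <= DRIVABLE_Y[1])

def relevant(detections):
    """Keep only detections that could affect the ride."""
    return [d for d in detections if in_drivable_area(d["x"], d["y"])]

detections = [
    {"label": "car", "x": 1.0, "y": 30.0},        # in-lane: keep
    {"label": "billboard", "x": 15.0, "y": 40.0}, # far off-road: ignore
]
print(relevant(detections))  # only the car remains
```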
If a high-res image of a truck appears on one of the giant roadside screens, Motional vehicles aren’t tricked into thinking there’s a truck up in the air.
“Our perception system isn’t just memorizing the image of a car. It has a notion of what the shape of the vehicle is like, what are the textures, the colors,” Vora said. “Because it’s seen hundreds of thousands of cars in different colors and shapes and forms, it’s learned what a regular vehicle looks like.”
Tuazon said the same holds true for large ads featuring people or faces.
“A flat screen with a person will not show up as a human, it will show up in the system as a flat screen,” he said.
The cameras, radars, and lidars work together, cross-checking what the other sensors are seeing to create a unified map of the driving environment. The system is also trained to distinguish traffic lights from the swirl of colors shining out from all the signs and screens along The Strip.
“If we were just using image sensors, that could be problematic,” Vora said. “Fusing data from multiple sensors helps us distinguish and ignore those billboards.”
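Vora’s point can be sketched as a simple cross-check: a camera detection is trusted only if a depth-measuring sensor confirms a physical object at the matching position. The real check is far more subtle (for instance, lidar would see a billboard as a flat panel rather than truck-shaped geometry); the thresholds, positions, and data shapes below are assumptions for illustration only.

```python
# Illustrative fusion cross-check: discard camera detections (e.g., a
# truck pictured on a video billboard) that no lidar return confirms
# as a physical object at that location. All values are invented.

def confirmed_by_lidar(camera_det, lidar_points, tolerance=2.0):
    """True if any lidar point lies near the detection's estimated
    2D ground position, i.e., something solid is really there."""
    cx, cy = camera_det["pos"]
    return any(abs(px - cx) < tolerance and abs(py - cy) < tolerance
               for px, py in lidar_points)

camera_dets = [
    {"label": "truck", "pos": (20.0, 0.0)},   # real truck ahead
    {"label": "truck", "pos": (35.0, 12.0)},  # image on a roadside screen
]
lidar_points = [(19.6, 0.3), (20.4, -0.2)]    # returns from the real truck only

real = [d for d in camera_dets if confirmed_by_lidar(d, lidar_points)]
print([d["pos"] for d in real])  # only the physical truck survives
```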
BUILDING UP PERFORMANCE
No other road in America is like The Strip in Las Vegas. Safely navigating that roadway can be difficult for any human driver, especially if they’re one of the 40 million annual visitors to the region. But Motional’s robotaxis are the ultimate safe drivers, undistracted by the glitz and glamour that surrounds them. In fact, the busy scenes help make the technology even smarter.