
Introducing nuReality™: Motional’s New Open Source Virtual Reality Environments

November 30, 2021

Motional is making its virtual reality environments open source to help accelerate research on AV-pedestrian interaction.

In 2019, Motional made nuScenes™, our large-scale autonomous driving dataset, open source to help spur industry-wide collaboration and further research that will bring safe driverless vehicles to our streets and communities faster. Since its original release, nuScenes™ has been downloaded by more than 12,000 academics and researchers, and referenced in over 600 publications. nuScenes™ also pioneered a movement of safety-focused data sharing across the industry.

Today, we’re excited to announce that we’re continuing the nuScenes™ legacy by making nuReality™, a set of custom-built Virtual Reality (VR) environments used to study the interaction between autonomous vehicles (AVs) and pedestrians, publicly available to the research community.

A key challenge to widespread acceptance and adoption of driverless vehicles is clear, safe, and effective communication between AVs and other road users. When a pedestrian or cyclist crosses the street and a human driver isn’t behind the wheel to signal recognition and intention using, say, hand gestures or facial expressions, how will road users know the vehicle has acknowledged them and will yield to let them cross? And this is only the first of many AV-human interaction research challenges that we envision benefiting from nuReality™.

nuReality™ is a custom-built set of VR experiences that we’re using in our Expressive Robotics research to learn how to train robots to respond to their environment the way a person would. Our goal in making nuReality™ open source is to advance human-machine communication in the AV space.

Visit the nuReality website at nuReality.org

Testing Pedestrian Crossing Scenarios

We developed nuReality™ to understand how expressive behaviors by AVs, such as flashing lights, deliberate sounds, and exaggerated braking, can aid human-machine communication with pedestrians and signal a driverless vehicle’s intentions. We’ve found that expressive behaviors help pedestrians in crossing situations understand the intent of driverless vehicles more quickly and clearly, and feel more confident in their own decisions.


A still image from one of the nuReality animated scenarios, illustrating the realistic VR streetscape setting

Through our Expressive Robotics research, we realized that the benefits of using VR technology lay not only in experimental control, reproducibility, and ecological validity, but also in practicality and accessibility. Since testing pedestrian-crossing scenarios in real life would be complex and potentially unsafe, we collaborated with animation studio CHRLX to create a bespoke VR environment.

There are two vehicles in the animation files: a conventional human-driven vehicle and a driverless vehicle. The animated AV includes side-mirror- and roof-mounted lidar sensors and no visible occupants, while the human-driven model includes a driver who looks ahead and remains motionless during the interaction.

Vehicle Animation Scenarios

We created 10 vehicle animation scenarios, some of which include (see the sketch after this list for one way such conditions might be encoded):

  • A human driver stopping at an intersection
  • An AV stopping at an intersection
  • A human driver not stopping at an intersection
  • An AV not stopping at an intersection
  • An AV using expressive behavior such as a light bar or sounds to signal its intentions
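
To make the structure of these conditions concrete, here is a purely hypothetical sketch of how a study script might enumerate them. The class, field, and label names below are our own illustrative assumptions and do not reflect the actual file or scenario naming in the nuReality release:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical encoding of the crossing conditions described above.
# All names and fields are illustrative assumptions, not nuReality's.

@dataclass(frozen=True)
class CrossingScenario:
    vehicle: str            # "human_driven" or "autonomous"
    stops: bool             # does the vehicle yield at the intersection?
    expressive_cues: bool   # light bar / sound cues enabled (AV only)

    def label(self) -> str:
        base = f"{self.vehicle}_{'stops' if self.stops else 'no_stop'}"
        return base + ("_expressive" if self.expressive_cues else "")

# Enumerate the basic conditions; expressive cues apply only to the AV.
scenarios = [
    CrossingScenario(vehicle, stops, expressive)
    for vehicle, stops, expressive in product(
        ("human_driven", "autonomous"), (True, False), (False, True)
    )
    if not (expressive and vehicle == "human_driven")
]

for s in scenarios:
    print(s.label())
```

A real study would likely add fields for approach speed, sound variants, or light-bar patterns; the point is only that each animation corresponds to one combination of vehicle type, yielding behavior, and expressive cues.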

We wanted to make this virtual environment as convincing as possible for the participants and included numerous visual (road and building texturing, parked cars, swaying tree limbs) and audible (birds chirping, cars driving by, people talking) elements. These details enhance place illusion and allow users to sense spatial presence within the virtual environment – giving the impression that they’re standing on an actual street.


In this clip from nuReality, the approaching Motional robotaxi's nose dips to signal that the vehicle is stopping.


In this clip from nuReality, the approaching Motional robotaxi uses an LED strip in the front windshield to indicate that the vehicle is stopping.

This VR immersion experience was so convincing that it provoked instinctive, angry reactions from several participants, including swearing and gesturing at vehicles that didn’t stop for them. Given the effectiveness of the VR immersion experience and the value of studying AV-pedestrian interaction, we want to share it with the research community to further this research and, ultimately, lead to AVs that are better able to integrate with their communities.

Motional’s collaboration with researchers and the academic community on the nuScenes dataset has proven invaluable in moving the needle on driverless technology. We hope to do the same with nuReality™.

The appeal and value of the nuReality™ files lie in how easily they can be adapted and used in a variety of applications, so that others can expand upon our work in Expressive Robotics. Maya and Unreal Engine, two incredibly accessible and powerful tools, allow researchers to push the boundaries of creativity in Expressive Robotics testing and application.
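
As one hedged example of that adaptability: once the scenario files are opened as an Unreal Engine project, the editor’s built-in Python scripting can list the project’s assets and copy one as the starting point for a new variant. The "/Game/Scenarios" path below is a placeholder assumption, not the actual folder layout of the nuReality files:

```python
# Run inside the Unreal Editor's Python environment (Python plugin enabled).
# "/Game/Scenarios" is a placeholder; substitute the folder the nuReality
# project actually uses once the files are downloaded.
import unreal

scenario_dir = "/Game/Scenarios"

# List every asset under the (assumed) scenario folder.
assets = unreal.EditorAssetLibrary.list_assets(scenario_dir, recursive=True)
for path in assets:
    unreal.log(path)

# Duplicate one scenario as the starting point for a custom variant,
# e.g. a new expressive-behavior condition.
if assets:
    source = assets[0].split(".")[0]   # strip any ".ObjectName" suffix
    copy_path = source + "_Custom"
    unreal.EditorAssetLibrary.duplicate_asset(source, copy_path)
    unreal.log(f"Created editable copy at {copy_path}")
```

The same kind of batch scripting could, in principle, swap sounds, materials, or light-bar animations across every scenario at once, which is the sort of extension we hope the research community will explore.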

By making nuReality™ open source, we hope these VR files will accelerate research into pedestrian-AV interactions and Expressive Robotics. It’s like the old saying: If you want to go fast, go alone. But if you want to go far, go together.

We at Motional are going the distance in making driverless vehicles a safe, reliable, and accessible reality.

Interested in finding out more about nuReality? Visit nuReality.org for more information on Motional's open source datasets. To learn more about how Motional is using Expressive Robotics, read our blog post on the topic.