Keeping Focus: Motional’s Robotaxis Block Out Las Vegas Distractions
Motional’s all-electric IONIQ 5 robotaxis are trained to ignore all the lights and sights that make Las Vegas a memorable experience, and instead focus solely on safely navigating the complex driving environment.
Technically Speaking: Second-Stage Vision Adds Needed Context to Unique Scenarios
Motional has developed a Second-Stage Vision Network that uses machine learning principles to add important context to our object classifications; this additional fine-grained classification then flows downstream, improving our perception, prediction, planning, and control substacks.
Technically Speaking: Improving Multi-task Agent Behavior Prediction
Motional's PredictNet approach to prediction uses machine learning principles and a multi-task learning architecture to more accurately predict the future behaviors of surrounding agents.
Motional Releases Fourth Annual Consumer Mobility Report, Looking at the Road to Autonomous Vehicle Adoption, Headwinds, and More
The report takes a deep dive into the public perception and understanding of autonomous vehicle (AV) technology, including the headwinds AV companies face from the public, generational perceptions, and factors driving adoption.
Rethinking the Role of Radars as Robotaxis Mature
As AV technology advances, and the global supply chain responds to industry demand, radars could emerge as the central sensors for robotaxis, says Motional's Chief Technology Officer.
DriverlessEd Chapter 7: Outside Your Ride
When developing autonomous vehicles, nothing is more important than safety. The safety of those driving, walking, or biking near our robotaxis is just as important as the safety of the person inside the vehicle. Learn how our vehicles respond to their environment in Chapter 7 of #DriverlessEd.
Technically Speaking: How Continuous Fuzzing Secures Software While Increasing Developer Productivity
Motional uses continuous fuzzing to make sure that our software is as safe and secure as possible before deploying it – or if there is a glitch, that the system can handle it gracefully.
Technically Speaking: Improving AV Perception Through Transformative Machine Learning
Transformer Neural Networks are receiving increased attention for their ability to improve AI-driven technology. Our latest Technically Speaking blog explores how Motional has been using Transformers to improve our perception function.
Technically Speaking: Using Machine Learning to Map Roadways Faster
Motional's latest Technically Speaking blog explains how we're using machine learning to speed up the process of mapping public roadways prior to launching commercial passenger service.
Technically Speaking: Closing The Loop To Travel Back And Help AVs Plan Better
Motional's latest Technically Speaking blog focuses on Planning, and how closed-loop training helps AVs more quickly refine the models they use to plan a safe path forward.
Motional Walks Transportation Planners Through Progress on AVs
Motional President and CEO Karl Iagnemma, Chief Technology Officer Laura Major, and others told NACTO attendees about the company’s driverless technology, approach to safety and accessibility, and the IONIQ 5 robotaxi.
Motional Expands Autonomous Testing to San Diego
By testing our AV technology in multiple unique cities, we can ensure Motional's driverless vehicles adapt quickly to new driving environments.
A Path Forward: Using AI to Improve Remote Vehicle Assistance for AVs
As Motional’s robotaxis drive more, our vehicle assistance system will use machine learning principles to become smarter and require less human intervention over time.
Polar Stream: Simultaneous Object Detection and Semantic Segmentation Algorithm for Streaming Lidar
Motional’s research has unlocked an approach to streaming object detection that reduces latency while increasing accuracy, giving AVs even better data for making safe decisions.