An off-road autonomous buggy.

Taking Autonomous Driving Off-Road

CMU researchers create self-driving system useful for mining, search and rescue, exploration

Media Inquiries: Aaron Aupperlee, School of Computer Science

Trees, vegetation, rocks, unpredictable terrain and the lack of clearly defined roads — or roads at all — won’t stop an autonomous, off-road vehicle developed by researchers at Carnegie Mellon University’s Robotics Institute.

As self-driving taxis, trucks and other vehicles pop up on city streets and highways, challenges remain for autonomous vehicles designed for mining, search and rescue, wildfire management, exploration, defense and other uses that may take them into unpredictable, off-road terrain. Autonomous vehicles rely on detailed maps, traffic laws, signs and street markers to navigate cities, suburbs and interstates. But that information is not available to an autonomous all-terrain vehicle speeding across an open field or desert.

Successfully navigating off-road terrain requires a vehicle that can interpret its surroundings in real time. Most current systems require months of human-led data labeling, design and field testing. The TartanDriver team at the AirLab, an RI research lab that specializes in state-of-the-art autonomy, created a new self-supervised autonomy stack that enables vehicles to safely traverse complex terrains with speed and accuracy without the need for time-consuming human intervention. 

“By combining the power of the foundation models and the flexibility of self-supervision, the AirLab at CMU is pushing the limits of autonomous driving in challenging terrains,” said Wenshan Wang, a systems scientist with the AirLab and member of the TartanDriver team. The foundation models can recognize natural features such as tall grasses and trees without the researchers having to label everything themselves, making the data collection process far more efficient.
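As a rough, illustrative sketch of that idea (not the team’s actual pipeline), a pretrained segmentation model can turn camera frames into terrain labels automatically. The class list, cost values and stubbed model below are placeholder assumptions made so the sketch runs on its own.

```python
import numpy as np

# Hypothetical terrain classes a pretrained segmentation model might output.
# The class list and cost values are illustrative assumptions only.
TERRAIN_COST = {
    0: 0.1,   # smooth ground / dirt road
    1: 0.4,   # tall grass: passable but slower
    2: 1.0,   # trees / rocks: treat as obstacles
}

def segment_frame(frame: np.ndarray) -> np.ndarray:
    """Stub for a pretrained segmentation (foundation) model.

    In practice this would be a forward pass through a vision model;
    here it returns random class IDs so the sketch is self-contained.
    """
    h, w, _ = frame.shape
    return np.random.randint(0, len(TERRAIN_COST), size=(h, w))

def auto_label_costs(frame: np.ndarray) -> np.ndarray:
    """Convert semantic classes into a per-pixel traversability cost map,
    with no human labeling in the loop."""
    classes = segment_frame(frame)
    lut = np.array([TERRAIN_COST[c] for c in sorted(TERRAIN_COST)])
    return lut[classes]

if __name__ == "__main__":
    frame = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder camera image
    cost = auto_label_costs(frame)
    print("cost map shape:", cost.shape, "mean cost:", round(cost.mean(), 2))
```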

The team focused on three key principles: self-supervision, multimodality and uncertainty awareness. They equipped their ATV with lidar sensors for detecting objects, cameras, inertial measurement units (IMUs), shock travel sensors, wheel encoders and more. The resulting system teaches itself to navigate, applying inverse reinforcement learning to this multimodal perception data to learn where to go and how to balance risk against performance.
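In heavily simplified form, the self-supervision step can be pictured as fitting a model that predicts terrain cost from perception features, using the roughness the vehicle actually experienced (from its IMU and shock sensors) as the training signal. The feature dimensions and the simple linear fit below are assumptions for the sketch; the real system learns richer cost models via inverse reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for multimodal perception features gathered while
# driving, e.g. per-patch visual embeddings plus lidar-derived slope and
# roughness. Dimensions and the linear model are illustrative assumptions.
N_SAMPLES, N_FEATURES = 500, 16
features = rng.normal(size=(N_SAMPLES, N_FEATURES))

# Self-supervision signal: roughness the vehicle actually experienced,
# measured from IMU vibration and shock-travel sensors -- no human labels.
true_weights = rng.normal(size=N_FEATURES)
experienced_roughness = features @ true_weights + 0.1 * rng.normal(size=N_SAMPLES)

# Fit a simple cost model: predict experienced roughness from perception
# features, so cost can be estimated for terrain before driving over it.
weights, *_ = np.linalg.lstsq(features, experienced_roughness, rcond=None)

def predict_cost(patch_features: np.ndarray) -> float:
    """Estimated traversal cost for an unseen terrain patch."""
    return float(patch_features @ weights)

print("example predicted cost:", round(predict_cost(features[0]), 3))
```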

“Since there is no prior map or GPS, our system relies on SLAM (Simultaneous Localization and Mapping) to track position and build a local understanding of the environment in real time,” said Micah Nye, a master of science in robotics (MSR) student and TartanDriver team member. “This enables consistent perception across terrains that have different visual environments.”
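A minimal sketch of that local-mapping idea follows, assuming a SLAM-provided 2D position estimate and lidar-detected obstacle points. The grid size, resolution and data formats are illustrative choices, not the team’s actual configuration.

```python
import numpy as np

# Maintain a small grid map centered on the vehicle, updated from the
# current SLAM pose estimate and obstacle points detected by lidar.
GRID_SIZE = 100      # cells per side
RESOLUTION = 0.2     # meters per cell -> a 20 m x 20 m local map

def world_to_grid(points_xy: np.ndarray, pose_xy: np.ndarray) -> np.ndarray:
    """Shift world-frame obstacle points into the vehicle-centered grid."""
    rel = (points_xy - pose_xy) / RESOLUTION + GRID_SIZE // 2
    return rel.astype(int)

def update_local_map(grid: np.ndarray, points_xy: np.ndarray,
                     pose_xy: np.ndarray) -> None:
    """Mark grid cells containing obstacle points seen from the current pose."""
    cells = world_to_grid(points_xy, pose_xy)
    valid = ((cells >= 0) & (cells < GRID_SIZE)).all(axis=1)
    grid[cells[valid, 1], cells[valid, 0]] = 1.0

if __name__ == "__main__":
    grid = np.zeros((GRID_SIZE, GRID_SIZE))
    pose = np.array([12.0, -3.5])                        # SLAM-estimated position (m)
    obstacles = np.array([[13.0, -3.0], [15.2, -1.1]])   # e.g. from lidar
    update_local_map(grid, obstacles, pose)
    print("occupied cells:", int(grid.sum()))
```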

The team implemented their system on a side-by-side ATV and tested its abilities in several complex terrains, including grass fields, rocky paths and varying inclines. The vehicle successfully navigated these environments and adjusted to changes without human correction. Next, the AirLab is pushing for success in more challenging conditions by adding new sensors, such as thermal cameras, to the payload.

“Thermal cameras detect heat instead of light, allowing us to see through smoke and in other visually degraded conditions,” said Yifei Liu, an MSR student and fellow TartanDriver team member.

Beyond the ATV, the team has also begun testing the autonomy stack on quadrupeds and urban motorized wheelchairs, expanding its capabilities across new platforms and environments. 

More information about this project is available on the AirLab’s website.
