
This project combines learned simulation with adaptive control to enable quadrupedal robots to traverse muddy and deformable terrain. We develop contact models for mud and soil interaction and study how modified limb designs affect locomotion performance on compliant ground. The work builds on our terramechanics expertise and uses model-based reinforcement learning to develop controllers and new simulators for mud interaction.
This project investigates multi-robot motion planning and coordination for teams of legged robots operating in shared environments. We develop algorithms that enable multiple agents to safely plan and execute tasks together, while reasoning about inter-agent interactions and shared contact constraints.

This project designs and models legged robots that use magnetic adhesion to climb steel structures such as bridges, storage tanks, and ship hulls for automated inspection. Building on our past work on micro-spine-enabled climbing and wheeled magnetic climbers, we are designing new wall-climbing robots. Unlike wheeled magnetic climbers, our legged approach can traverse obstacles like welds, rivets, and surface irregularities. The work spans mechanism design, magnetic adhesion modeling, gait planning for vertical and inverted surfaces, and force control to maintain attachment while walking.
Unmanned surface vessels (USVs) hold promise for disaster relief, environmental monitoring, and security, but wind, waves, and currents make them difficult to model and control. This project investigates combining moving-horizon dynamics estimation with robust nonlinear model predictive control to enable USVs to adapt to uncertain and mismatched marine environments, exploring how best to couple real-time dynamics identification with disturbance-aware control for reliable autonomous navigation.
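To make the moving-horizon idea concrete, here is a minimal sketch of sliding-window least-squares identification of an unknown drag coefficient in a toy USV surge model. The model, function names, and numbers are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

# Toy surge model: m * v_dot = u - d * v, with drag coefficient d unknown.
# A moving-horizon estimator fits d from only the most recent samples,
# so the estimate can track slowly changing sea conditions.
def estimate_drag(v, u, m, dt, horizon=20):
    """Least-squares estimate of d over the last `horizon` samples."""
    v, u = np.asarray(v), np.asarray(u)
    # Rearranged one-step model: u[k] - m*(v[k+1]-v[k])/dt = d * v[k]
    y = u[:-1] - m * np.diff(v) / dt
    phi = v[:-1]
    y, phi = y[-horizon:], phi[-horizon:]   # keep only the recent window
    return float(phi @ y / (phi @ phi))     # scalar least squares

# Simulate noiseless data with true d = 3.0, then recover it.
rng = np.random.default_rng(0)
m, dt, d_true = 20.0, 0.1, 3.0
u = rng.uniform(0.0, 10.0, 100)
v = [0.0]
for uk in u[:-1]:
    v.append(v[-1] + dt / m * (uk - d_true * v[-1]))
d_hat = estimate_drag(v, u, m, dt)
```

A controller such as the NMPC described above would then replan against the freshly identified dynamics at every step; in this noiseless sketch the window recovers the true coefficient exactly.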

Saltation matrices describe the instantaneous state and uncertainty updates that occur during hybrid events such as a foot striking the ground. This project extends this mathematical framework and develops new applications for estimation, control, and optimization of hybrid dynamical systems, including convergent iLQR for safe trajectory planning, hybrid event shaping to stabilize periodic orbits, and improved Kalman filtering during contact transitions.
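As a concrete illustration, the covariance update through a hybrid event can be sketched for the textbook one-dimensional bouncing ball; the restitution coefficient and covariance values below are illustrative, not drawn from the project.

```python
import numpy as np

# State x = [height, velocity]; guard g(x) = height; impact reset flips
# velocity and scales it by the restitution coefficient e.
def flow(x):
    return np.array([x[1], -9.81])          # ballistic dynamics

def reset(x, e=0.8):
    return np.array([x[0], -e * x[1]])

def saltation_matrix(x, e=0.8):
    DR = np.array([[1.0, 0.0], [0.0, -e]])  # Jacobian of the reset map
    Dg = np.array([1.0, 0.0])               # gradient of the guard
    F1 = flow(x)                            # pre-impact vector field
    F2 = flow(reset(x, e))                  # post-impact vector field
    # Standard saltation matrix: Xi = DR + (F2 - DR F1) Dg / (Dg . F1)
    return DR + np.outer(F2 - DR @ F1, Dg) / (Dg @ F1)

x_impact = np.array([0.0, -1.0])            # ball strikes ground at 1 m/s
Xi = saltation_matrix(x_impact)
Sigma_pre = np.diag([1e-4, 1e-2])           # pre-impact state covariance
Sigma_post = Xi @ Sigma_pre @ Xi.T          # first-order covariance update
```

The update `Sigma_post = Xi @ Sigma_pre @ Xi.T` is exactly the step a Kalman filter would use at a contact transition in place of the smooth dynamics Jacobian.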

How simple can a walking robot be? This project explores the extreme limits of mechanical simplicity in bipedal locomotion. Mugatu, the first steerable bipedal robot with only a single actuator and two rigid bodies, established five design rules for passive dynamic walking. Zippy, at just 1.5 inches tall, scaled these principles down to become the world's smallest self-contained bipedal robot, walking at 10 leg-lengths per second, turning, skipping, and climbing small steps.

Led by Dr. Aja M. Carter and collaborating with paleontologists, this project designs small quadrupedal robots inspired by extinct animals to study how spinal morphology affects locomotion. By building robots with individually controllable spine degrees of freedom, we investigate how animals transitioned from sprawling to erect postures and how spinal flexibility contributes to agility. We develop learned RL-based controllers in simulation and validate designs on physical platforms.
Quad-SDK is an open-source software stack for agile quadruped robots, combining planning, control, simulation, and visualization tools in one modular ROS-based framework. Designed for both rapid research development and real-world deployment, it supports dynamic locomotion behaviors such as long-horizon navigation, trotting, and leaping across multiple robot platforms.
Penguins achieve remarkable stability while walking despite their seemingly ungainly proportions. In collaboration with researchers at NYU, this project takes inspiration from penguin biomechanics to design a bipedal robot that achieves passive self-stabilization through careful body mass distribution and foot geometry rather than complex feedback control. By studying how penguins use their body shape and waddling gait to remain upright, we develop design principles for robust, energy-efficient bipedal walkers.

Navigating through dense vegetation and vine-like entanglements is one of the most challenging problems for mobile robots. This project develops proprioceptive sensing and reactive control strategies that allow quadrupedal robots to detect, respond to, and push through tangled vegetation without getting stuck. We combine leg force feedback with learned behaviors to enable robust traversal of cluttered natural environments.
In this multi-university research initiative, we study biological muscles to understand their low-level reflex control and mechanical response properties. We develop simulated models that compare the stability characteristics of muscle-like actuation to traditional PID control, and are building small robot limbs driven by artificial muscle controllers. The ultimate goal is a crocodile-inspired robot whose actuation captures the compliance and reflexive behaviors of biological muscle.
New construction vehicles and excavators are essentially limbed robots operating in contact-rich environments. This project develops adaptive learning control for construction machinery, integrating perception and multi-agent coordination for tasks like excavation, loco-manipulation, and material handling. We study how techniques from legged locomotion, particularly handling changing contact conditions and terrain uncertainty, transfer to construction applications.
Foundational work that shaped our current research directions.
We studied how appendages like tails and articulated spines enable extreme terrain traversal by quadruped robots. Inspired by cheetahs and lizards, this work demonstrated that proprioception and tail control enable robots to navigate rough terrain that would otherwise be impassable, and established design principles for inertial reorientation appendages.
Physical models of how robots interact with granular terrain (soil, sand, and regolith), including terramechanics models for high slip angle and skid, and soil displacement models for wheel-based trenching. Applications ranged from nonprehensile terrain manipulation to predicting rover mobility on lunar and Martian surfaces.
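For readers unfamiliar with terramechanics, a minimal sketch of the classical Bekker pressure-sinkage relation, a common starting point for models like these, is shown below; the parameter values are generic textbook-style numbers, not the project's fitted coefficients.

```python
# Classical Bekker pressure-sinkage relation: p = (k_c / b + k_phi) * z**n.
# Parameters (illustrative, roughly dry-sand-like):
#   b     : plate or wheel contact width (m)
#   k_c   : cohesive modulus of deformation (N / m^(n+1))
#   k_phi : frictional modulus of deformation (N / m^(n+2))
#   n     : sinkage exponent (dimensionless)
def bekker_pressure(z, b, k_c, k_phi, n):
    """Normal ground pressure (Pa) at sinkage depth z (m)."""
    return (k_c / b + k_phi) * z ** n

p_shallow = bekker_pressure(z=0.05, b=0.1, k_c=990.0,
                            k_phi=1_528_000.0, n=1.1)
p_deep = bekker_pressure(z=0.10, b=0.1, k_c=990.0,
                         k_phi=1_528_000.0, n=1.1)
```

Pressure grows monotonically with sinkage for n > 0, which is what lets such models predict whether a wheel or foot will sink, slip, or find support.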
We enabled autonomous navigation through unstructured outdoor environments with rough terrain and unknown physical properties. The robot learned from driving experience in real and simulated environments, then performed robust decision making to find safe routes over difficult terrain.
Robotic systems for autonomous soil sampling and environmental analysis, charting a path from manual field science to autonomous data collection for monitoring soil quality, plant health, and environmental conditions at scale.