Living Edge Lab Year-End 2024 Message
It has been a cold January in Pittsburgh! At the Living Edge Lab (LEL), we used the winter break to reflect on all we accomplished last year. As we start another year, here’s an update on our 2024 work. We continued our multi-pronged approach: leading edge computing research supported by pioneering low-latency network infrastructure and industry engagement.
RESEARCH
Our research agenda stayed mostly focused on edge-computing-enabled lightweight autonomous drones (SteelEagle) and live learning during remote missions (Hawk). Beyond these two projects, we did interesting work on using large language models (LLMs) in edge computing when real-time response is important. We also continued work on accelerating wearable cognitive assistance (WCA) application development through the tinyHulk project.
SteelEagle, our autonomous drone project and largest research team, had a very productive year – rearchitecting and advancing the SteelEagle platform, adding some exciting new capabilities, pioneering autonomous drone performance benchmarks, and beginning to build the domain-specific language that enables drone mission portability across different drone models.
Our first year of SteelEagle was capped by a demonstration in April showing autonomous tracking of a human target over a large area using a small, lightweight, inexpensive consumer drone. Our learnings from year one led us to redesign the platform to support our future needs, including drone abstraction and multi-drone missions. Near year end, we showed both capabilities in another demonstration, with three independent drones tracking a single target.
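To make the drone-abstraction idea concrete, here is a minimal sketch of the pattern: mission logic written against a vendor-neutral driver interface, with one backend per drone model. The class and method names below are hypothetical illustrations, not the actual SteelEagle APIs or the new domain-specific language.

```python
# Purely illustrative sketch of the drone-abstraction pattern, not the actual
# SteelEagle interfaces or mission DSL: mission logic targets a vendor-neutral
# driver, and each drone model supplies its own backend.
from abc import ABC, abstractmethod


class DroneDriver(ABC):
    """Hypothetical vendor-neutral interface that mission code programs against."""

    @abstractmethod
    def takeoff(self, altitude_m: float) -> None: ...

    @abstractmethod
    def move_to(self, lat: float, lon: float, altitude_m: float) -> None: ...

    @abstractmethod
    def get_frame(self) -> bytes: ...

    @abstractmethod
    def land(self) -> None: ...


class VendorADriver(DroneDriver):
    """One made-up backend; a second airframe would implement the same methods,
    so the same mission runs unchanged on either drone."""

    def takeoff(self, altitude_m: float) -> None:
        print(f"[vendor-a] takeoff to {altitude_m} m")

    def move_to(self, lat: float, lon: float, altitude_m: float) -> None:
        print(f"[vendor-a] fly to ({lat}, {lon}) at {altitude_m} m")

    def get_frame(self) -> bytes:
        return b""  # a real backend would return the latest camera frame

    def land(self) -> None:
        print("[vendor-a] landing")


def run_patrol(drone: DroneDriver, waypoints: list[tuple[float, float]]) -> None:
    """A mission written only against DroneDriver, hence portable across models."""
    drone.takeoff(altitude_m=20.0)
    for lat, lon in waypoints:
        drone.move_to(lat, lon, altitude_m=20.0)
        _ = drone.get_frame()  # frames would stream to the cloudlet for detection
    drone.land()


if __name__ == "__main__":
    run_patrol(VendorADriver(), [(40.4406, -79.9959), (40.4420, -79.9940)])
```

Any airframe that ships a conforming driver can then run the same mission unchanged, which is the portability the redesigned platform and the mission language aim for.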
We also began to explore some of the performance limitations of the SteelEagle architecture and implementation – first through motion-to-photon latency measurement and analysis to determine the sources of delay in the end-to-end pipeline, and then by developing and testing two defined benchmarks for drone obstacle avoidance and object tracking. This work had a substantial impact on the performance of the platform and taught us a lot about what really drives mission success.
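The measurement details are in the papers below; as a flavor of the approach, here is a purely illustrative sketch of per-stage latency accounting in a capture-to-command loop. The stage functions and their delays are invented placeholders, not SteelEagle numbers.

```python
# Illustrative sketch only: the bookkeeping behind a per-stage latency breakdown
# of a capture-to-command pipeline. Stage names and sleep times are invented
# placeholders, not SteelEagle measurements or benchmark code.
import time
from collections import defaultdict

stage_totals = defaultdict(float)   # seconds accumulated per stage
stage_counts = defaultdict(int)     # number of samples per stage


def timed(stage, fn, *args, **kwargs):
    """Run one pipeline stage and record its wall-clock contribution."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    stage_totals[stage] += time.perf_counter() - start
    stage_counts[stage] += 1
    return result


# Placeholder stages standing in for the real pipeline.
def capture_frame():            time.sleep(0.012); return "frame"
def encode_and_uplink(frame):   time.sleep(0.008); return "sent"
def detect_on_cloudlet(frame):  time.sleep(0.025); return "bbox"
def send_flight_command(bbox):  time.sleep(0.006)


for _ in range(50):  # simulate 50 frames through the loop
    frame = timed("capture", capture_frame)
    timed("uplink", encode_and_uplink, frame)
    bbox = timed("inference", detect_on_cloudlet, frame)
    timed("command", send_flight_command, bbox)

for stage in stage_totals:
    mean_ms = 1000 * stage_totals[stage] / stage_counts[stage]
    print(f"{stage:>10s}: mean {mean_ms:5.1f} ms")
```

Breaking the end-to-end delay down this way is what reveals whether capture, the wireless uplink, inference on the cloudlet, or command actuation dominates.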
Finally, we built Quetzal, a proof-of-concept for scene change detection, a key use case for small autonomous drones. Quetzal is still at an early stage, but we find it a very interesting direction.
See:
- Flying on Auto Pilot at the Edge (SteelEagle Blog)
- SteelEagle Autonomous Drone Demo (Short) (Long), April 4, 2024 @ Mill19, Pittsburgh, Pennsylvania. Demonstration of lightweight drones able to autonomously find and track subjects using edge computing.
- SteelEagle Autonomous Drone Swarm Demo (YouTube) (Website), November 1, 2024 @ Mill19, Pittsburgh, Pennsylvania. Demonstration of three lightweight drones simultaneously and autonomously tracking subjects using edge computing.
- The OODA Loop of Cloudlet-based Autonomous Drones
Mihir Bala, Aditya Chanana, Xiangliang Chen, Qifei Dong, Thomas Eiszler, Jingao Xu, Padmanabhan Pillai, Mahadev Satyanarayanan, in 2024 IEEE/ACM Symposium on Edge Computing (SEC), December 2024
- Benchmarking Drone Video Stream Latency In SteelEagle
Chanana, A., Bala, M., Eiszler, T., Blakley, J., & Satyanarayanan, M. (2024). CMU Technical Report CMU-CS-24-128
- Quetzal: An Interactive System for Drone Video Frame Alignment & Detection of Salient Changes
The Hawk project focuses on near-real-time live learning in distributed physical environments using small “scouts”. In 2024, the Hawk team identified a set of use cases where learning quickly is necessary for success. They tend to be cases where scouts on a mission must identify potential threats to their survival in time to avoid them. Those threats may not be easily identifiable at the beginning of a mission. The Hawk platform provides tools that allow the scouts to become better at threat identification during the mission.
To better characterize these use cases, we developed the concept of Survival-Critical Machine Learning (SCML). This concept provides an analytical framework for assessing the success of a multi-scout mission that operates under threat and can learn to identify threats better as the mission proceeds. The team evaluated this model using the Hawk platform to understand the key factors for success and the value of live learning; the resulting papers are listed below.
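Purely as intuition for why live learning can matter for survival, here is a toy simulation; it is not the SCML model from the papers below, and every rate in it is invented for illustration. A shared detector's recall improves as labeled encounters accumulate, and scouts survive only the threats they detect or get lucky against.

```python
# Toy illustration only (NOT the SCML model from the papers below): scouts face a
# threat on every leg of a mission; a detected threat is avoided, while an
# undetected threat is sometimes fatal. We compare a detector whose recall
# improves as labeled encounters accumulate ("live learning") against a frozen
# detector. All rates here are invented for illustration.
import random

random.seed(0)


def run_mission(n_scouts=100, n_encounters=20, live_learning=True):
    base_recall, max_recall, gain_per_wave = 0.40, 0.95, 0.05
    recall = base_recall                      # P(threat is detected)
    alive = n_scouts
    for _ in range(n_encounters):
        survivors = 0
        for _ in range(alive):
            detected = random.random() < recall
            # a detected threat is avoided; an undetected one is fatal 20% of the time
            if detected or random.random() < 0.80:
                survivors += 1
        alive = survivors
        # after each wave, newly labeled encounters improve the shared model
        if live_learning:
            recall = min(max_recall, recall + gain_per_wave)
    return alive


print("survivors with live learning:", run_mission(live_learning=True))
print("survivors with frozen model: ", run_mission(live_learning=False))
```

With these invented numbers, the live-learning run ends the mission with far more surviving scouts than the frozen-model run; the SCML framework is what lets us analyze that effect rigorously and identify the factors that matter most.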
Beyond SCML, the Hawk team began adding a new data modality – radar – to the Hawk platform. Automated radar object classification is much less mature than visual object classification, and it is more challenging because of the expertise required to classify objects in radar images for training sets. In 2024, the team reviewed existing machine learning research on radar data, began working with public and other radar datasets, and made early attempts to apply machine learning object classification models to radar.
See:
- Learning to Survive (SCML Blog)
- Beyond Federated Learning: Survival-Critical Machine Learning
Eric Sturzinger, Mahadev Satyanarayanan, in 2024 IEEE/ACM Symposium on Edge Computing (SEC), December 2024
- Edge-based Live Learning for Robot Survival
E. Sturzinger, J. Harkes, P. Pillai and M. Satyanarayanan, in IEEE Transactions on Emerging Topics in Computing, doi: 10.1109/TETC.2024.3479082, October 2024
Over the last few years, researchers everywhere have worked to understand how to use large language models (LLMs) in various settings. We looked at them for edge computing. Our conclusion: today, they are best used as tools in the development cycle for edge-native applications, not as real-time capabilities in latency-sensitive applications. Yes, we have looked at scaled-down LLMs at the edge, but they are generally not good enough or fast enough to replace cloud LLMs. Instead, cloud LLMs can help application developers find data, images, and code for their applications; a sketch of that pattern follows the reference below.
See: Creating Edge AI from Cloud-based LLMs
Dong, Qifei, Xiangliang Chen, and Mahadev Satyanarayanan. Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications. 2024.
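As one concrete example of that development-time role, the sketch below asks a hosted LLM to propose image-search queries for building a training set for a hypothetical edge-native app. The OpenAI client, model name, prompt, and target application are illustrative choices on our part, not the tooling from the paper above, and the call happens offline rather than in any latency-critical path.

```python
# A minimal sketch of the development-time pattern described above, assuming the
# OpenAI Python SDK as the cloud LLM backend; the model name, prompt, and target
# application are illustrative. The LLM is used offline to bootstrap a training
# dataset, never in the latency-critical runtime path of the edge application.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am building an edge-native assembly-assistance app that must recognize "
    "a hex key, a torque wrench, and a toy-kit chassis from a head-mounted camera. "
    "Suggest 10 image search queries I could use to collect training images, "
    "one query per line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any hosted model would do; illustrative choice
    messages=[{"role": "user", "content": prompt}],
)

# Each suggested query could then seed a scripted image download for the training set.
for query in response.choices[0].message.content.strip().splitlines():
    print(query)
```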
Our research on Wearable Cognitive Assistance (WCA) continues with a focus on accelerating the creation of application-specific training datasets for WCA applications.
See: tinyHulk: Lightweight Annotation for Wearable Cognitive Assistance
Chanh Nguyen, Roger Iyengar, Qifei Dong et al., 01 March 2024, PREPRINT (Version 1) available at Research Square.
INFRASTRUCTURE
The Living Edge Lab infrastructure took a generational leap forward with the launch of our 5G CBRS mobile private network at the historic Mill19 site – dropping our end-to-end latency from 30-90 milliseconds to 10-15 milliseconds. This network, built as a resource for CMU researchers, covers almost two miles along the Monongahela, one of Pittsburgh’s iconic “Three Rivers”.
See: Unlocking Edge Computing in the new Living Edge Lab 5G Network
INDUSTRY ENGAGEMENT AND IMPACT
We continued our focus on industry engagement with ongoing projects with the US Army Artificial Intelligence Integration Center (AI2C), IAI, Lockheed Martin, T-Mobile, and the Linux Foundation Magma Project. Some highlights:
- The Army AI2C continues to provide valuable feedback, guidance, and sponsorship for the SteelEagle project.
- The LEL and IAI are in a joint research collaboration focused on Hawk and related projects.
- The LEL developed and delivered a Wearable Cognitive Assistance hands-on technology transfer course to a group of Lockheed Martin technologists.
- T-Mobile and the LEL are exploring ways to reduce the round-trip latency to edge computing in T-Mobile’s network. This report includes some of that exploration.
- As one of the leading early adopters and users of the Magma wireless core, the LEL was asked to join the Magma Technical Steering Committee.
The “Just-in-Time Cloudlet” project inspired the launch of a JIT Cloudlet product by Next Computing, a customer of our industry partner Ampere. Going back even further, Prof. Satya’s 1997 paper “Agile Application-aware Adaptation for Mobility” received a Test-of-Time award from SIGMOBILE.
Finally, the 2024 offering of Carnegie Mellon University's edge-centric graduate-level course, Mobile and Pervasive Computing, included two very interesting student projects. One project focused on running multi-modal deep learning models on edge devices to achieve near-real-time inference for applications like zero-shot image captioning, object recognition, and image segmentation. The other project developed an edge-native application for navigating inside buildings using ORB-SLAM3. The archive of projects from 2024 and prior years is available here.