CMU Living Edge Lab News
Living Edge Lab Year-End 2023 Message

We hope you all had a wonderful holiday season. At the Living Edge Lab (LEL), we enjoyed the brief respite between semesters for rest and recharge after a very busy year. As we get ready for another year, we want to give an update on 2023 LEL work and a preview of what's coming in 2024.
New Paper: Democratizing Drone Autonomy via Edge Computing

Fully autonomous flight by low-cost, lightweight commercial off-the-shelf (COTS) drones could transform many use cases involving real-time computer vision. We show how such autonomy can be achieved using edge computing from a flight platform costing less than $800, composed of a 320 g COTS drone with a 26 g COTS wearable device as payload. In spite of the extreme austerity of this platform and thermal limits on its LTE transmission, the system is able to perform tasks such as detecting and then tracking a target. It is also able to visually navigate around obstacles. Such capabilities are found today only on heavier and more expensive drones.
New Paper: Psychological Science Meets Wearable Cognitive Assistance

A wearable cognitive assistant (WCA) is a computer-based application that guides a user through a task with input from wearable devices, aided by computational resources in nearby locations (cloudlets). Psychological science informs the development of WCAs and, in doing so, encounters new issues for research. We discuss three relevant research areas: response time, action segmentation, and task comprehension. Klatzky, R. L., & Satyanarayanan, M. (2023). Psychological Science Meets Wearable Cognitive Assistance. Current Directions in Psychological Science, 32(6), 446-453.
Mobile and Pervasive Computing Student Projects Completed for 2023

In December, the 2023 offering of Carnegie Mellon University's edge-centric graduate-level course, Mobile and Pervasive Computing, concluded, complete with the always-interesting student projects. This year's projects focused on a variety of topics, including edge-native applications in augmented reality, search and rescue, large language models, and audio and video analytics. The archive of this year's projects, including posters and video presentations, is available. The archive extends back to 2015, providing a glimpse into the evolution of edge computing over the last eight years.
New Tech Report, Blog, and Webinar: The Just-in-Time Cloudlet: A Hotspot for the Edge of Nowhere

Imagine using drones to inspect bridges and electric grid towers, mounting a search and rescue operation for a child lost in the woods, hosting a backcountry ski event for a weekend, and providing critical communication and application capabilities for forward military operations -- these are just a few situations where there is a need to deploy a modern compute and communication infrastructure very rapidly in areas with poor network coverage. We believe there are many use cases that could benefit from what we refer to as a “Just-in-Time Cloudlet”. The JIT Cloudlet bundles a mobile access network and edge-native applications into a small form factor. Over the last year, we’ve been working closely with Arm to develop a reference architecture and a JIT Cloudlet prototype based on commercial off-the-shelf technologies, open-source software, and a “Cloud-native at the Edge” design paradigm.
New Paper: Optimizing User Experience in Wearable Cognitive Assistance through Model Specialization (tinyHulk)

Wearable Cognitive Assistance (WCA) is a rapidly evolving application that relies on accurate computer vision models for optimal performance and user experience. However, adapting these models to varying user workstation backgrounds can be challenging, as it often necessitates extensive data collection and model retraining. To address this challenge, we propose an approach that focuses on improving model specialization to enhance the accuracy of model inference. Our method eliminates the need to gather the entire training dataset from each individual end user. This not only reduces labor-intensive work but also minimizes bandwidth requirements for transferring data to remote servers for training. We successfully train specialized models that are tailored to the unique characteristics of each workstation. These specialized models consistently achieve competitive accuracy levels during model inference, comparable to the ground truth models trained with real data collected directly from the workstations, which ultimately enhances the overall user experience with the WCA application.
New Paper: Low-Bandwidth Self-Improving Transmission of Rare Training Data (Hawk)

A severe bandwidth mismatch between incoming sensor data rate and wireless backhaul bandwidth often exists on unmanned probes when collecting new training data for machine learning (ML). To overcome this mismatch, we describe a self-improving ML-based transmission system called Hawk. Starting from a weak model that is trained on just a few examples, it seamlessly pipelines semi-supervised learning, active learning, and transfer learning, with asynchronous bandwidth-sensitive data transmission to a distant human for labeling. When a significant number of true positives (TPs) have been labeled, Hawk trains an improved model to replace the old model. This iterative workflow, called Live Learning, continues until a sufficient number of TPs have been collected. For very rare events on challenging datasets, and bandwidths as low as 12 kbps, a team of 7 probes using Hawk discovers up to 87% of the TPs that could have been discovered via full preview, transmission and labeling of all mission data. Hawk also uses diversity sampling and few-shot learning.
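The Live Learning workflow described in the abstract can be sketched in a few lines of Python. This is a minimal illustrative simulation, not Hawk's actual API: the function names, the score-threshold "model", and the retraining rule are all assumptions made for exposition.

```python
def live_learning(stream, label_fn, retrain_every=5, target_tps=10):
    """Sketch of Hawk-style Live Learning: a weak model scores incoming
    samples, only high-scoring ones are sent over the constrained link
    for human labeling, and the model is "retrained" whenever enough
    new true positives (TPs) accumulate. Illustrative only."""
    threshold = 0.5              # weak initial model: a score cutoff
    labeled_tps = []
    pending_since_retrain = 0
    for score, item in stream:
        if score < threshold:    # model filters out low-value samples
            continue
        if label_fn(item):       # distant human labels the sample
            labeled_tps.append(item)
            pending_since_retrain += 1
        if pending_since_retrain >= retrain_every:
            # stand-in for retraining: tighten the selectivity cutoff
            threshold = min(threshold + 0.05, 0.9)
            pending_since_retrain = 0
        if len(labeled_tps) >= target_tps:
            break                # mission goal reached
    return labeled_tps, threshold
```

The point of the loop is that each retraining round makes the on-probe filter more selective, so scarce backhaul bandwidth is spent on progressively more valuable samples.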
New Blog: A Ladder to the Shoulders of Giants (Olive)

Much of today’s scientific research depends on digital laboratories – computing hardware and software that process the experimental data gathered by researchers. Yet, hardware and software evolve at a rapid pace and reproducing the results from a past experiment done even a couple of years ago is a tenuous proposition. Unless the original processing environment is maintained intact, even the original researchers may not be able to reproduce the exact results. And, the difficulty in creating an identical processing environment may preclude independent researchers from verifying the results of their peers. Our recent paper, Towards Reproducible Execution of Closed-Source Applications from Internet Archives, recounts the history of our ten-year effort, known as Project Olive, to address the problem of processing environment archiving in service of reproducibility. It also discusses our recent work in this area, which expands the problem of archiving to the problem of archive access – making it easy for a researcher to access and run the processing environment while still protecting the software and data from unauthorized use.
New Blog: Safekeeping Faces at the Edge (Silent Witness)

As so often happens, we can create technology far faster than our social and legal systems can adapt. And, as is so often necessary, we must adapt technology to meet compelling societal needs as they emerge. For the ethical questions raised by face recognition, a more direct intervention to align with emerging norms and policies will likely be required. But, there are legitimate uses for the implicated technologies. Interventions shouldn’t excessively impede these legitimate uses. We started the Silent Witness project at the Living Edge Lab in anticipation of these requirements. We believe that, while norms and policies will vary around the world, four computer vision technology principles will be common.
New Paper: Towards Reproducible Execution of Closed-Source Applications from Internet Archives

Olive enables execution of closed-source applications decades after their creation. With appropriate authentication and authorization, anyone on the Internet can execute any archived application with no more effort than a mouse click. User experience is good, even for an interaction-intensive application. Olive uses virtual machine (VM) technology to encapsulate legacy software, including the operating system and all layers above it. If the legacy hardware is already obsolete at curation time, an emulator for it on more modern hardware can be included within the VM image. This paper is an experience report on the decade-long evolution of this concept.
New Paper: Tactical agility for AI-enabled multi-domain operations

Commanders must remain agile and adaptive in the future Artificial Intelligence (AI)-enabled multi-domain battlespace, where critical decisions are made at the tactical edge. Over-reliance on static, cloud-centric approaches to Machine Learning Operations (MLOps) compromises such agility and adaptability. These systems must operate in a dynamic threat environment, and learn to detect novel threats during operation. They must be able to perform this learning through the execution of tactical MLOps under austere and degraded conditions, especially limited wireless network bandwidth. In response to these requirements, this paper describes Hawk, a system that leverages edge proximity for rapid and iterative execution of the Observe stage of the Observe-Orient-Decide-Act loop. Central to this architecture is the use of tactical cloudlets. These mini data centers provide cloud-like computing resources without the communication latency to exascale data centers. Hawk enables a human to guide MLOps at low cognitive load, thus enabling an operational objective to be achieved at speed and scale while remaining usable and explainable.
New Paper: Semantic Fast-Forwarding for Video Training Set Construction

We introduce the concept of semantic fast-forwarding of video streams for efficient labeling of training data for activity recognition. We show that this concept can be realized by combining deep learning within individual frames, with spatial and temporal entity-relationship reasoning about detected objects. We describe a prototype that implements this concept, and present preliminary experimental results on its feasibility and value.
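The combination described in the abstract — per-frame deep learning plus entity-relationship reasoning across frames — can be sketched as follows. This is an illustrative skeleton under our own assumptions (the `detect` and `related` callables stand in for the DNN detector and the entity-relationship reasoner; they are not the paper's actual interfaces).

```python
def semantic_fast_forward(frames, detect, related):
    """Sketch of semantic fast-forwarding: run a per-frame object
    detector, then keep for labeling only the frames whose detected
    objects satisfy a spatial/temporal relationship predicate with the
    previous frame; all other frames are skipped. Illustrative only."""
    keep = []
    prev_objects = []
    for idx, frame in enumerate(frames):
        objects = detect(frame)                 # per-frame deep learning
        if related(objects, prev_objects):      # entity-relationship check
            keep.append(idx)                    # worth a labeler's time
        prev_objects = objects
    return keep
```

A human labeler then sees only the kept frame indices, which is what makes training-set construction from long video streams tractable.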
New Paper: V-Light: Leveraging Edge Computing For The Design of Mobile Augmented Reality Games

We explore the future of synchronous, multiplayer mobile AR gaming through our game V-Light, which extends current mobile AR game capacities using edge computing. Mobile AR games are currently limited by on-board processing power, while offloading operations to the cloud introduces high latency costs. This is a critical issue for games needing real-time response to player input. V-Light demonstrates how mobile AR games can leverage the power of edge computing, bringing computational resources closer to the user, keeping latency low and bandwidth high. We share our development toolkit, analyze the design and development of V-Light through the lens of an existing model for shared-world mobile AR, and demonstrate that edge computing can provide a “time machine” that lets game designers prototype mobile AR games for devices that do not yet exist.
New Blog: Easing the Pain of Creating Anywhere (EdgeVDI)

The Covid-19 pandemic forced most members of the university community to work from their homes and other locations with highly variable last-mile connectivity. Since access to campus was limited, they also became more dependent on Virtual Andrew to perform their academic and administrative work. The change from on-campus LAN access to off-campus WAN access exposed limitations of VDI as a remoting service. These limitations led to a three-way collaboration between VMware, the CMU Living Edge Lab, and CMU Computing Services to investigate how Edge Computing could enable a better VDI experience. Our recently published technical report, Deploying Edge-based Virtual Desktop Infrastructure, details this collaboration (so far).
New Technical Report: Deploying Edge-based Virtual Desktop Infrastructure

Carnegie Mellon University's (CMU) Virtual Andrew service uses VMware Horizon Virtual Desktop Infrastructure (VDI) in coursework, research, and administration. It provides pre-configured, no-install access to a variety of restricted-license applications, such as computer-aided design (CAD) tools. This service is typically used from on-campus computing clusters and faculty/student offices with LAN connectivity to CMU's private cloud. The Covid-19 pandemic forced most members of the university community to work from their homes, with highly-variable last-mile connectivity. The change from on-campus LAN access to off-campus WAN access exposed limitations of VDI as a remoting service. These limitations led to a three-way collaboration between VMware, the CMU Living Edge Lab, and CMU Computing Services to investigate how Edge Computing could enable a better VDI experience. This technical report discusses our learnings.
Living Edge Lab Year-End 2022 Message

As we near the end of 2022, we'd like to take this opportunity to wish you and your families a joyous holiday season and a wonderful new year in 2023. We'd also like to take a moment to provide you an update on LEL activities, and a preview of what's coming next year.
Mobile and Pervasive Computing Student Projects Completed for 2022

In December, the 2022 offering of Carnegie Mellon University's edge-centric graduate-level course, Mobile and Pervasive Computing, concluded, complete with the always-interesting student projects. This year's projects focused on a variety of topics, including edge-native applications in augmented reality, wearable cognitive assistance, and video analytics. There were also projects in edge infrastructure, including cloudlet orchestration and application state transfer for mobile apps, and in other technology areas such as explainable AI and low-power wide-area networks. The archive of this year's projects, including posters and video presentations, is available. The archive extends back to 2015, providing a glimpse into the evolution of edge computing over the last eight years.
New Paper: Accelerating Silent Witness Storage

We propose hardware acceleration for a new edge computing abstraction called a Silent Witness. This abstraction embodies a severe asymmetry in the ease of write versus read operations. Surveillance data from one or more video cameras are continuously encrypted and recorded, but the decrypting, processing, or transmission of that data only occurs under stringent privacy controls. For the new search workloads of such a system, decode-enabled storage alleviates the scalability bottleneck imposed by frequent decoding of data. Our experiments show throughput improvements up to 3.5X for typical search workloads of a Silent Witness.
New Blog, Paper, and Talk: Sinfonia

As edge computing emerges, it introduces new complexity. Now, a user’s proximity to edge computing resources matters. And, edge computing economics and market structure mean that the edge computing provider and the resources it provides can be quite variable over time and space. For the edge-native application provider, finding the “right” edge computing node or cloudlet to serve a specific user for a specific session can be non-trivial. Here at the Living Edge Lab, we believe that this complexity calls for a new abstraction at the boundary between edge-native application providers and edge computing infrastructure providers. Our Sinfonia project, led by Jan Harkes in collaboration with Meta and ARM, focuses on the development and evaluation of a framework for this abstraction.
New Tech Report: Sinfonia -- Cross-Tier Orchestration for Edge-Native Applications

The convergence of 5G wireless networks and edge computing enables new edge-native applications that are simultaneously bandwidth-hungry, latency-sensitive, and compute-intensive. Examples include deeply immersive augmented reality, wearable cognitive assistance, privacy-preserving video analytics, edge-triggered serendipity, and autonomous swarms of featherweight drones. Such edge-native applications require network-aware and load-aware orchestration of resources across the cloud (Tier-1), cloudlets (Tier-2), and device (Tier-3). This paper describes the architecture of Sinfonia, an open-source system for such cross-tier orchestration.
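The "network-aware and load-aware orchestration" mentioned above amounts to ranking candidate Tier-2 cloudlets for a given client session. The sketch below illustrates the idea with an assumed linear scoring policy; the field names and weights are our own illustration, not Sinfonia's actual matching algorithm or API.

```python
def select_cloudlet(cloudlets, w_latency=1.0, w_load=1.0):
    """Sketch of cross-tier orchestration: rank candidate Tier-2
    cloudlets by a combined network- and load-aware score and pick
    the best one for backend deployment. Illustrative policy only."""
    def score(c):
        # lower is better: penalize both network RTT and CPU load
        return w_latency * c["rtt_ms"] + w_load * c["cpu_load"] * 100
    return min(cloudlets, key=score)
```

Note how a nearby but overloaded cloudlet can lose to a slightly more distant, idle one — the essence of balancing network awareness against load awareness.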
Living Edge Lab Seminal Paper Receives Test of Time Award

SIGMOBILE has awarded the seminal edge computing paper, "The Case for VM-based Cloudlets in Mobile Computing”, a 2022 Test-of-Time award. These awards recognize papers that have had a sustained and significant impact in the SIGMOBILE community over at least a decade. The award recognizes that a paper's influence is often not fully apparent at the time of publication, and it can be best judged with the perspective of time. SIGMOBILE states: "What is known as edge computing today can be traced back to this Cloudlets paper, which proposed the need to support localized data and computation, thus avoiding the high latency and bandwidth limitations associated with traditional cloud computing."
New Technical Report: Segmenting Latency in a Private 4G LTE Network

Many edge-native applications require low and predictable end-to-end network latency. In practice, many user-interactive edge applications must deliver less than 50 ms round trip times (RTT) from the client device to an edge cloudlet and back to the client device to achieve acceptable user experience. More intensive interactive applications like virtual reality require less than 20 ms RTTs. However, commercial 4G LTE networks fail to reliably meet these thresholds. This report summarizes the results of our measurement of the sources of network latency in an operational private outdoor CBRS 4G LTE network to provide a baseline for future network latency optimization.
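Latency segmentation of the kind the report performs can be illustrated with a small calculation: given per-segment one-way delays, compute the end-to-end RTT, each segment's share of it, and whether the RTT fits an application's budget. The segment names and numbers below are illustrative, not measurements from the report.

```python
def segment_rtt(segments, budget_ms=50):
    """Sketch of latency segmentation: given assumed one-way delays
    (in ms) for each network segment on the path from client device
    to cloudlet, compute end-to-end RTT, each segment's share of the
    RTT, and whether an interactive latency budget is met."""
    rtt = 2 * sum(segments.values())                  # out and back
    shares = {name: 2 * d / rtt for name, d in segments.items()}
    return rtt, shares, rtt <= budget_ms
```

Breaking the RTT down this way shows where a latency budget is actually spent, which is the prerequisite for targeted optimization of any one segment.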
New Video Interview: The Future of Data Management According to the Father of Edge Computing

The ever-growing pool of data that businesses generate is becoming increasingly difficult to sort and analyze. Businesses are inundated with data that compounds daily as IoT and AI technology continues to evolve. And unless a business is properly set up to collect, store, and analyze the data it collects, that data becomes virtually useless. The solution: edge computing. “Edge computing is predicted to reach a market size of $40B by 2027 with a compound annual growth rate of over 30%. Today, every major IT company and telecommunications company has embraced edge computing. It’s important to understand that cloud computing is also growing, so this is not a zero-sum solution. The growth of edge computing does not come at the cost of cloud computing. Both will grow well into the future,” explained Mahadev Satyanarayanan (Satya).
New Paper: Balancing Privacy and Serendipity in Cyberspace

Unplanned encounters or casual collisions between colleagues have long been recognized as catalysts for creativity and innovation. The absence of such encounters has been a negative side effect of COVID-enforced remote work. However, there have also been positive side effects such as less time lost to commutes, lower carbon footprints, and improved work-life balance. This vision paper explores how serendipity for remote workers can be created by leveraging IoT technologies, edge computing, high-resolution video, network protocols for live interaction, and video/audio denaturing. We reflect on the privacy issues that technology-mediated serendipity raises and sketch a path towards honoring diverse privacy preferences.
New Paper: Towards Open and Cross Domain Edge Emulation

Edge computing brings resources nearer to end users and devices. Edge resources are heterogeneous and dynamic, presenting unique and competing challenges to researchers, network designers, and application developers. To meet these challenges, there is a critical ecosystem need for edge emulation capabilities. Several edge emulators exist; however, most do not fully satisfy the needs of edge's various stakeholders. We present AdvantEDGE, an open mobile edge emulator that is feature rich while remaining flexible. AdvantEDGE enables diverse stakeholders to explore their respective disciplines while interacting with each other. In this paper, we summarize existing edge emulators, we present missing requirements and how they are fulfilled by AdvantEDGE, and finally, we present research examples that were enabled via AdvantEDGE.
New Technical Report: Improving Edge Elasticity via Decode Offload

A new Carnegie Mellon University technical report: Feng, Z., George, S., Turki, H., Iyengar, R., Pillai, P., Harkes, J., & Satyanarayanan, M. (2021). Improving Edge Elasticity via Decode Offload. Visual analytics on recently-captured data from video cameras has emerged as an important class of workloads in edge computing. These workloads make intense processing demands on cloudlets, whose elasticity is limited by their smaller physical and electrical footprint relative to exascale cloud data centers. In this paper, we show how cloudlet elasticity can be improved by offloading visual data decoding. We define a new data access API that embodies decode offload, thereby avoiding application-level decoding of visual data. Using thermal, power density and data copying considerations, we identify cloudlet storage as the optimal location for placement of the decode function. Using a proof-of-concept implementation, we show that this approach can lower cloudlet CPU utilization by up to 50–80%, and deliver up to 3.5x improvement in the elapsed time of a typical visual analytics pipeline.
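The shape of a decode-offload data access API, as described above, can be sketched as follows. The class and method names are hypothetical stand-ins for the report's actual API: the essential idea is only that decoding happens inside the storage layer, so applications receive decoded frames rather than compressed video.

```python
class DecodeOffloadStore:
    """Sketch of a decode-offload storage abstraction: instead of
    handing applications compressed video to decode on cloudlet CPUs,
    the storage layer exposes an API that returns already-decoded
    frames, freeing compute for analytics. Illustrative names only."""

    def __init__(self, decoder):
        # `decoder` stands in for a codec running on/near storage hardware
        self.decoder = decoder

    def read_frames(self, blob, start, count):
        # decoding happens inside the storage layer, not the application
        frames = self.decoder(blob)
        return frames[start:start + count]
```

An analytics pipeline would then call `read_frames` and run inference directly on the result, never touching a codec itself.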
JMA Wireless, AWS complete private CBRS network for Carnegie Mellon lab

JMA Wireless, Amazon Web Services (AWS) and Crown Castle have completed a private LTE network deployment for Carnegie Mellon University using Citizens Broadband Radio Service (CBRS) spectrum. The CBRS network took less than three months to construct and commission, and first went live in June. JMA and AWS teamed up to design and help build the network, which uses JMA’s XRAN virtualized RAN software with a Druid Software core running on AWS Snowball Edge. JMA outdoor CBRS radios were installed on campus alongside the vendor’s directional CBRS antennas deployed by Crown Castle. https://bit.ly/3kVLE3n
JMA Wireless, Running on AWS, Provides New Campus Private Wireless Network to Carnegie Mellon

JMA Wireless (JMA) announced the completion of a new private wireless network for Carnegie Mellon University (CMU) that will use Amazon Web Services, Inc. (AWS) to power ongoing research and innovation in CMU’s Living Edge Lab. https://bit.ly/3DJwZR4
Blog: The Quest for Lower Latency: Announcing the New Living Edge Lab Wireless Network

In edge computing, network latency is key. Edge-native applications often benefit from the lowest latency and the least latency variation possible. This blog recounts our successful efforts to build a private wireless network in support of our edge computing research. It also discusses some of our future plans.
Article and Video: Carnegie Mellon University lab thrives on the edge of computing innovation with Azure Stack Hub

Researchers at the Living Edge Lab at Carnegie Mellon University innovate on an exciting model for next generation cloud computing. Called edge computing, this paradigm—where data is processed locally before being uploaded to the cloud for further analysis—was first conceptualized by a group of mobile computing researchers, including Dr. Mahadev Satyanarayanan, (Satya), Carnegie Group University Professor of Computer Science, back in 2009. However, it’s only now, thanks to advancements in edge computing devices, that real progress can be made in creating edge-native apps with commercial potential. Staff and students at CMU’s Living Edge Lab, headed by Satya, use one such device—Microsoft Azure Stack Hub—to push the boundaries of edge computing and create prototypes of apps with real-world use cases. These scenarios run the gamut from situational awareness in military operations in a disconnected environment, to computer vision solutions for solving problems on the factory floor, to near real-time video analytics that help experts sift through reams of surveillance images to find a missing person.
New Paper: Ajalon: Simplifying the authoring of wearable cognitive assistants

A new paper, "Ajalon: Simplifying the authoring of wearable cognitive assistants," by Pham, T. A., Wang, J., Iyengar, R., Xiao, Y., Pillai, P., Klatzky, R., & Satyanarayanan, M., was published in Software: Practice and Experience (Wiley) on May 18, 2021. Wearable Cognitive Assistance (WCA) amplifies human cognition in real time through a wearable device and low-latency wireless access to edge computing infrastructure. It is inspired by, and broadens, the metaphor of GPS navigation tools that provide real-time step-by-step guidance, with prompt error detection and correction. WCA applications are likely to be transformative in education, health care, industrial troubleshooting, manufacturing, assisted driving, and sports training. Today, WCA application development is difficult and slow, requiring skills in areas such as machine learning and computer vision that are not widespread among software developers. This paper describes Ajalon, an authoring toolchain for WCA applications that reduces the skill and effort needed at each step of the development pipeline. Our evaluation shows that Ajalon significantly reduces the effort needed to create new WCA applications.
Press: The Edge of Cloud

In November 2020, the Deloitte Cloud Institute named Mahadev Satyanarayanan, the Carnegie Group University Professor of Computer Science at Carnegie Mellon University, as the first Deloitte Cloud Institute Fellow. Dr. Satyanarayanan (Satya, as he’s popularly known) has made pioneering contributions to edge computing, mobile computing, distributed systems, and Internet of Things (IoT) research focused on performance, scalability, availability, and trust challenges in computing systems. We examine the present and future of distributed edge-cloud architectures in an exclusive discussion with Satya; David Linthicum, chief cloud strategy officer, Deloitte Consulting LLP; and Myke Miller, dean, Deloitte Cloud Institute. Diana Kearns-Manolatos from the Deloitte Center for Integrated Research moderated this discussion. But first, a quick foundation on edge computing and five key insights from the discussion.
New Blog: The Waiting is the Hardest Part

This blog highlights recent work by Manuel Olguín Muñoz, Roberta Klatzky, Junjue Wang, Padmanabhan Pillai, Mahadev Satyanarayanan, and James Gross at the Carnegie Mellon University Living Edge Lab and the KTH Royal Institute of Technology in Stockholm that examined the effects of delayed response on wearable cognitive assistance (WCA) applications, a class of augmented reality applications. The work is captured in the paper "Impact of Delayed Response on Wearable Cognitive Assistance," published March 23, 2021. https://doi.org/10.1371/journal.pone.0248690
OpenScout

OpenScout is an edge-native application designed for automated situational awareness. The idea behind OpenScout was to build a pipeline that would support automated object detection and facial recognition. This kind of situational awareness is crucial in domains such as disaster recovery and military operations where connection over the WAN to the cloud may be disrupted or temporarily disconnected.
New Paper: Impact of Delayed Response on Wearable Cognitive Assistance

A new paper, "Impact of Delayed Response on Wearable Cognitive Assistance," by Manuel Olguín Muñoz, Roberta Klatzky, Junjue Wang, Padmanabhan Pillai, Mahadev Satyanarayanan, and James Gross, was published on March 23, 2021. https://doi.org/10.1371/journal.pone.0248690
New Paper: The Role of Edge Offload for Hardware-Accelerated Mobile Devices

A new paper, "The Role of Edge Offload for Hardware-Accelerated Mobile Devices," by Mahadev Satyanarayanan, Nathan Beckmann, Grace A. Lewis, and Brandon Lucia, appeared in HotMobile '21: Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications, February 2021. https://doi.org/10.1145/3446382.3448360
Blog: Edge-Native App Design When You Can’t See Behind the Curtain

At the Living Edge Lab, we have been working on approaches that use benchmarking and simulation of edge-native applications in networks to understand the interplay between applications and their environments.
The Living Edge Lab Private LTE network

When Mahadev Satyanarayanan, the Carnegie Group University Professor of Computer Science at Carnegie Mellon University and an expert in edge computing, needed a network to test his edge computing applications, he knew he wanted something low in latency, reliable, and not susceptible to performance degradation. Satyanarayanan turned to Amazon Web Services (AWS) and Federated Wireless for help, and the university decided to deploy a private LTE wireless network using Citizens Broadband Radio Service (CBRS) General Authorized Access (GAA) 3.5 GHz spectrum.
Three Decades Of Vision For Edge

With Mahadev Satyanarayanan (Satya) Of Carnegie Mellon University. In this interview, Satya shares how he started thinking about distributed computing infrastructure for mobile devices back in 1993, how much of his vision has come true in the decades since, and his views on the future of cloud, edge, IoT, and much more.
New Paper: Edge Computing for Legacy Applications

A new paper, "Edge Computing for Legacy Applications," by Mahadev Satyanarayanan, Thomas Eiszler, Jan Harkes, Haithem Turki, and Ziqiang Feng, was published in IEEE Pervasive Computing, Volume 19, Issue 4, October-December 2020.
New Paper: OpenRTiST -- End-to-End Benchmarking for Edge Computing

A new paper, "OpenRTiST: End-to-End Benchmarking for Edge Computing," by George, S., Eiszler, T., Iyengar, R., Turki, H., Feng, Z., Wang, J., Pillai, P., & Satyanarayanan, M., was published in IEEE Pervasive Computing, Volume 19, Issue 4, October-December 2020.
New Technical Report: Simulating Edge Computing Environments to Optimize Application Experience

A new Carnegie Mellon University technical report, Simulating Edge Computing Environments to Optimize Application Experience, by James R. Blakley, Roger Iyengar, and Michel Roy, has been published.
News from 2019 and earlier
- Podcast Interview of Satya on Edge Computing (December 2019)
- Carnegie Mellon Researchers Tap Edge Computing to Resolve Real-World Challenges (April 2019)
A nice article from StateTech describing LiveMap and its use of edge computing.
- OpenDev blog entries and tweets about Elijah and Gabriel (September 2017)
At the OpenDev event on Edge Computing organized by the OpenStack Foundation, Satya presented a keynote talk and video demos of edge computing and wearable cognitive assistance. He also gave a longer talk at the end of the first day on edge computing infrastructure. Here are some blog entries and tweets that referenced the presentations and demos, giving an idea of how the world views this work:
- Gabriel on CBS 60 Minutes (October 9, 2016)
Wearable Cognitive Assistance can be viewed as "Augmented Reality Meets Artificial Intelligence". This 90-second excerpt from the October 9, 2016 CBS 60 Minutes special edition on Artificial Intelligence highlights the table-tennis wearable cognitive assistant on Google Glass.
- National Public Radio (WESA) segment on Gabriel (February 9, 2016)
This short (4-minute) NPR radio piece and associated web page on wearable cognitive assistance was broadcast in Spring 2016.
- "New AI Platform 'Gabriel' Will Whisper Instructions Into Your Ear" (December 3, 2015)
Article in Tech Times.