Research Activities at CMU-SV
The Silicon Valley campus is home to the CyLab Mobility Research Center as well as work on context-aware mobile systems, statistical methods, natural language translation, mobile health, security, hardware optimization, and open-source software environments. The campus is located at the NASA Ames Research Center to facilitate collaboration with NASA scientists.
Researchers, faculty and students have access to several laboratories on campus, including the Carnegie Mellon Innovations Lab, the Connected Embedded Systems Lab, the Smart Spaces Lab and the RF Lab, to develop new ideas. Students also work with faculty and industry partners on research projects, through practicums and assistantships. Many ideas that started at the Silicon Valley campus have gone on to become start-up companies.
CyLab Mobility Research Center
Director: Bob Iannucci
Research covers wireless security, mobile context awareness, airborne sensing, and in-building sensing. As the MRC expands, we are exploring cross-layer mobile networking, platformization of the Internet of Things, the camera as a primary input device for mobile phones, ultra-low-power computing, and indoor positioning systems.
Current Research Projects
Bayesian Networks: Complexity of network computation, fault diagnosis, analytics
Understanding the Complexity of Bayesian Network Computation: This project presents and analyzes algorithms that systematically generate random Bayesian networks of varying difficulty levels with respect to inference using tree clustering. Our generation algorithms, called BPART and MPART, support controlled but random construction of bipartite and multipartite Bayesian networks. The results are relevant to research on efficient Bayesian network inference, such as computing a most probable explanation or belief updating. They are also relevant to research on machine learning of Bayesian networks, since they support controlled generation of a large number of data sets at a given difficulty level. One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. We improve the understanding of this approach by characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically the ratio of the number of a BN's non-root nodes to the number of root nodes, and the expected number of moral edges in its moral graph. Surprisingly, root clique growth is well approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. This project is also concerned with the conceptual design of large-scale diagnostic and health management systems that use Bayesian networks. While potentially powerful, improperly designed Bayesian networks can result in excessive memory requirements or inference times. We investigate the clique tree clustering approach to Bayesian network inference, where increasing the size and connectivity of a Bayesian network typically also increases clique tree size, and provide both theoretical and experimental results.
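The flavor of this generation scheme can be sketched in a few lines. This is an illustrative sketch, not the published BPART algorithm: the function name, its parameters, and the `gompertz` helper are assumptions made for exposition.

```python
import math
import random

def bipartite_bn_structure(n_roots, n_nonroots, max_parents, seed=None):
    """Sketch of BPART-style generation: a random bipartite DAG in which
    edges run only from root nodes (layer 0) to non-root nodes (layer 1)."""
    rng = random.Random(seed)
    roots = [f"R{i}" for i in range(n_roots)]
    nonroots = [f"N{j}" for j in range(n_nonroots)]
    edges = []
    for child in nonroots:
        k = rng.randint(1, min(max_parents, n_roots))
        edges += [(parent, child) for parent in rng.sample(roots, k)]
    return roots, nonroots, edges

def gompertz(x, a, b, c):
    # S-shaped Gompertz curve a*exp(-b*exp(-c*x)), the family found to
    # approximate root clique growth
    return a * math.exp(-b * math.exp(-c * x))

roots, nonroots, edges = bipartite_bn_structure(5, 10, 3, seed=42)
ratio = len(nonroots) / len(roots)  # one of the polynomial-time parameters
```

The non-root-to-root ratio computed at the end is one of the structural parameters the project relates to clique tree growth.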
Fault Diagnosis Using Bayesian Networks: Bayesian networks have established themselves as an indispensable tool in artificial intelligence, and are being used effectively by researchers and practitioners more broadly in science and engineering. The domain of system health management, including diagnosis, is no exception; in fact, diagnostic applications have driven much of the development in Bayesian networks over the past few decades. In this project, we provide a gentle and accessible introduction to modeling and reasoning with Bayesian networks, with the domain of system health management in mind. Bayesian networks, which may be compiled to arithmetic circuits in the interest of speed and predictability, provide a probabilistic method for system fault diagnosis. Currently, arithmetic circuits are limited in that they can only represent discrete random variables, while important fault types such as drift and offset faults are continuous and induce continuous sensor data. In this project, we investigate how to handle continuous behavior while using discrete random variables with a small number of states. We demonstrate that PRODIAGNOSE, augmented with the CUSUM technique, is successful in diagnosing faults that are small in magnitude (offset faults) or drift linearly from a nominal state (drift faults). In one of these experiments, detection accuracy improved dramatically when CUSUM was used, jumping from 46.15% (CUSUM disabled) to 92.31% (CUSUM enabled).
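The CUSUM idea used alongside PRODIAGNOSE can be illustrated with a minimal one-sided detector. The nominal value, drift allowance, threshold, and injected fault below are illustrative assumptions, not the project's actual configuration:

```python
def cusum(samples, nominal, drift_allowance, threshold):
    """One-sided CUSUM: accumulate deviations above the nominal mean;
    an alarm fires once the running sum exceeds the threshold."""
    s, alarms = 0.0, []
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - nominal - drift_allowance))
        if s > threshold:
            alarms.append(t)
    return alarms

# a small offset fault (magnitude 0.3) injected at t = 50
data = [0.0] * 50 + [0.3] * 50
alarms = cusum(data, nominal=0.0, drift_allowance=0.1, threshold=1.0)
```

Because CUSUM accumulates small deviations over time, it detects offsets that a single-sample threshold test would miss, which is the intuition behind the accuracy jump reported above.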
Visual Analytics for Networks: Bayesian networks are a theoretically well-founded approach to representing large multivariate probability distributions, and have proven useful in a broad range of applications. While several software tools for visualizing and editing Bayesian networks exist, they have important weaknesses when it comes to enabling users to clearly understand and compare conditional probability tables in the context of network topology, especially in large-scale networks. One focus of this project is a system for improving the ability of computers to work with people to develop intelligent systems through the construction of high-performing Bayesian networks. It uses a "thought bubble line" to connect nodes in a graph representation with their internal information, displayed at the side of the graph. The tool seeks to improve the ability of experts to analyze and debug large Bayesian network models, and to help people understand how alternative algorithms and Bayesian networks operate, providing insights into how to improve them. By selectively zooming visualizations in and out, the fisheye technique allows users to study details while maintaining context. In this project, we introduce a multi-fisheye technique, which amounts to introducing several fisheyes in a visualization at the same time. Our multi-fisheye technique is based on partitioning the visualization's display area and applying a fisheye algorithm inside each partition. We have demonstrated the potential of applying our multi-fisheye technique to social networks and Bayesian networks.
Software and Sensor Health Management: In this project, we focus on the use of Bayesian networks (BNs) to monitor the health of on-board software and sensor systems, and to perform advanced on-board diagnostic reasoning. Advanced compilation techniques are used to obtain a compact SSHM (Software and Sensor Health Management) system with a powerful reasoning engine, which can run in an embedded software environment and is amenable to V&V. We successfully demonstrate our approach using an OSEK-compliant operating system kernel, and discuss in detail several nominal and fault scenarios for a small satellite simulation with a simple bang-bang controller. We focus on the development of reliable and robust health models for combined software and sensor systems, with application to guidance, navigation, and control (GN&C). We propose that iSWHM (Integrated SoftWare Health Management) can increase the safety and reliability of high-assurance software systems. iSWHM uses advanced techniques from the area of system health management to continuously monitor the behaviour of the software during operation, quickly detect anomalies, and perform automatic and reliable root-cause analysis, while not replacing traditional V&V. Information provided by the iSWHM system can be used by automatic mitigation mechanisms (e.g., recovery, dynamic reconfiguration) or presented to a human operator. The project also takes a probabilistic approach, using Bayesian networks, to diagnosis and sensor validation, and investigates several relevant but slightly different Bayesian network queries.
BOSE: Building Occupancy Sensing Estimation
Building a Smart Community ontology
This project explores the world of the “Internet of Things,” where smart devices, sensors, and people interact with each other over time. Examination of numerous perspectives on relevant ontologies and information models led us to define an ontology with four core concepts: Event, Resource, Location and Service. This is a variation on the popular refrain of “People, Places, Things and Events” where we add the important concept of Service as a way of including the notion of activity. The ontology aligns with service-oriented concepts as well as serving as an event log. The second phase involves mapping arbitrary external data sources into our core ontology – a key capability for a world of heterogeneous, distributed stores. We have built a semi-automatic method of recognizing the structure of foreign data sources and then mapping them into the base ontology, with the option of including specialized extension ontologies (and supporting mapping files) that contain concepts that are unique to a given context. This framework should prove valuable to the integration of widely distributed data.
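The four core concepts can be sketched as a small data model with one mapping rule for a foreign source. The record format, field names, and mapping below are hypothetical illustrations of the approach, not the project's actual ontology schema:

```python
from dataclasses import dataclass

@dataclass
class Location:
    name: str

@dataclass
class Resource:
    id: str
    location: Location

@dataclass
class Service:
    name: str

@dataclass
class Event:
    resource: Resource
    service: Service
    timestamp: float

# hypothetical foreign sensor record and one mapping rule into the core ontology
foreign = {"sensor": "temp-7", "room": "B23", "action": "read", "ts": 1700000000.0}

def map_record(rec):
    return Event(
        resource=Resource(rec["sensor"], Location(rec["room"])),
        service=Service(rec["action"]),
        timestamp=rec["ts"],
    )

ev = map_record(foreign)
```

Each external source would contribute its own `map_record`-style rule (possibly generated semi-automatically), while downstream consumers see only the four core concepts.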
Circular compositional reasoning by learning and abstraction-refinement
- Corina Pasareanu
- Orna Grumberg (Technion)
- Sharon Shoham (Tel Aviv University)
Model checking is an automatic technique for the formal verification of concurrent systems. Despite its success, the technique suffers from scalability issues due to the state-explosion problem. To address the problem, we propose automated compositional verification techniques that employ circular assume-guarantee reasoning.
Assume-guarantee reasoning defines rules that break up the global verification of a program into local, more manageable verification of individual components, using "assumptions" about the rest of the program. Progress has been made on automating assumption generation using learning and abstraction-refinement techniques. That work has been done mostly in the context of a simple rule, where assumptions and properties are related in an acyclic manner. However, there are cases where "circular" dependency within a system is a real phenomenon that requires more complex, circular rules, which typically use inductive arguments. Although effective in scaling up verification, the applicability of these rules has been limited by the manual effort involved in defining the assumptions.
We propose to automate the assumption discovery process in the context of existing circular rules and of new rules, developed when needed. Abstraction and learning techniques will be used to iteratively build assumptions and refine them based on counterexamples obtained from checking components separately. Our algorithms will incorporate 3-valued reasoning to allow for more precise yet concise assumptions.
Our techniques will increase the assurance of general-purpose concurrent and distributed software, by scaling up existing verification techniques through novel automated circular compositional reasoning. We will also investigate two specific application areas that may benefit greatly from compositional reasoning: UML-based software and security protocols.
Collaborative scientific workflow design and management as a service
Modern science and engineering typically require support for collaboration and workflow/process. This project extends existing single-user workflow tools to support collaborative design of workflows: 1) supporting real-time co-design; 2) tracking how a workflow came to be as it is, and who has done what among multiple contributors; 3) capturing and retrieving collaboration knowledge and the decision-making process. Reproducibility and scalability are two major targets demanding fundamental infrastructural support.
A community-driven workflow recommendations and reuse infrastructure
As current satellite measurements rapidly magnify the accumulation of more than 40 years of scientific knowledge, new discoveries increasingly require collaborative integration and adaptation of various data-driven software components (tools). In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. However, although computing technologies continue to improve, adoption via the sharing and reuse of modules and workflows remains a big challenge. This project tackles this challenge from a novel angle, to study how to leverage peer scientists' best practice to help facilitate the discovery and reuse of Earth science modules developed by others. Service classification, semantic discovery, recommendation, automatic composition, deployment and scheduling over the cloud are our research focus.
Cross disciplinary research on time aware applications, computers and communication systems
- Bob Iannucci
- Marc Weiss (NIST, Time and Frequency Division)
- John Eidson (University of California at Berkeley)
- Charles Barry (CTO, Jolata Inc.)
- Leon Goldin (Technical leader, Cisco Systems)
- Edward Lee (EECS Department, University of California at Berkeley)
- Kevin Stanton (Principal Engineer, Intel Corporation)
A new economy built on the massive growth of endpoints on the internet will require precise and verifiable timing in ways that current systems do not support. Applications, computers, and communications systems have been developed with modules and layers that optimize data processing but degrade accurate timing. State-of-the-art systems now use timing only as a performance metric. Correctness of timing as a metric cannot currently be designed into systems independent of hardware and/or software implementations. To enable the massive growth predicted, cross-disciplinary research is needed to integrate accurate timing into these existing systems. Different criteria are needed for different endpoints on the network, such as accuracy versus stability for time, phase, and frequency synchronization. In addition to accuracy, security issues represent another critical need. In many cases, having assurance that the time is correct is a more difficult problem than accuracy. Security issues include protection from attack and means to verify that the time is correct.
Commercial mobile network architectures emerged from their wired counterparts in an evolutionary manner. Market pressures fueled upgrades in bandwidth and functionality. Decades later, these networks maintain historical artifacts from wired networks, and the artifacts work against fundamental needs of mobile systems. In this work, we step back and re-evaluate mobile network architecture, identifying inherent limitations and offering a new set of architectural principles that, we contend, will lead to significantly improved overall system performance. Based on these principles, we are creating the CROSSMobile architecture, enabled by controlled cross-layer information exchange between radio, network, and application layers (both on-device and in-cloud), coupled with information-owner-based privacy and security controls.
CROSSMobile is an open mobile architecture that can provide increased value to equipment and mobile device manufacturers, application and network service providers, and end users.
Crowdsourced data for earthquake monitoring and rapid response
A partnership with the US Geological Survey
- Bob Iannucci
- Sarah Minson (USGS)
- Ben Brooks (USGS)
- Jessica Murray (USGS)
Federated platform for sensor data-centric service development and sharing
CMU's SensorAndrew is the nation's largest campus-wide sensor network. Collaborating with SensorAndrew, this project develops a Sensor Data Service Platform (SDSP) on top of it, with the following highlights: an SOA-supported sensor service discovery and provisioning layer; a novel approach to building social sensor service networks; an in-memory database to support analytics of real-time streaming data; and a dynamic virtual sensor to carry out workflow provenance management and analytics. Mobile sensors and social media are integrated. Smart buildings and smart communities are major application areas. Scalability, performance, security, and privacy are our major concerns.
Object/person positioning and navigation has many useful applications in emergency services, advertising, and security. An inexpensive, easy-to-use wireless communication solution combining infrared and Bluetooth Low Energy has applications in schools (for tracking attendance), hospitals, and homes.
Beacons in each room transmit a unique ID number via infrared light. A body-worn “badge” stores the unique ID number with an associated timestamp. Upon encountering a Bluetooth Low Energy enabled gateway, the “badge” transmits all data including inertial sensor data recorded at regular intervals. A Bluetooth Low Energy enabled gateway relays the data to the cloud, where the unique “beacon” ID number is translated into a real room name. The identity of the badge holder is not stored for privacy reasons. Devices can then connect to the cloud to find updated position information.
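The badge-to-cloud flow described above can be sketched end to end. The beacon IDs, room names, and class interfaces are illustrative assumptions, not the deployed system's protocol:

```python
# cloud-side lookup table translating beacon IDs to room names (hypothetical IDs)
ROOM_NAMES = {0x01: "Conference Room A", 0x02: "Lab 2"}

class Badge:
    """Body-worn badge: stores (beacon ID, timestamp) pairs received over
    infrared; the holder's identity is never recorded, for privacy."""
    def __init__(self):
        self.log = []

    def receive_ir(self, beacon_id, ts):
        self.log.append((beacon_id, ts))

    def flush_via_ble(self):
        # on encountering a BLE gateway, transmit and clear the stored log
        data, self.log = self.log, []
        return data

def cloud_translate(records):
    # the cloud resolves raw beacon IDs into real room names
    return [(ROOM_NAMES.get(bid, "unknown"), ts) for bid, ts in records]

badge = Badge()
badge.receive_ir(0x01, 1000.0)
badge.receive_ir(0x02, 1060.0)
history = cloud_translate(badge.flush_via_ble())
```

Keeping the ID-to-room translation in the cloud, as in the sketch, means a lost badge reveals only opaque beacon numbers and timestamps.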
Information and influence propagation in social networks
The emergence of online social networks has posed new research challenges regarding the diffusion of information among humans in large networks. In particular, due to the existence of multiple online social networks, information is now likely to spread among the population at an unprecedented speed and scale. Research is needed to quantify this phenomenon using both an analytical approach based on synthetic network models and an empirical approach relying on data from real-world social networks. A related problem of interest is the diffusion of influence in social networks. Spread of influence, also known as complex contagion, refers to the phenomenon in which multiple sources of exposure to an innovation are required before an individual adopts a change in behavior. The study of these problems can make an impact more than ever these days, as individuals and businesses become increasingly aware of the fundamental role of online social networks as a medium for spreading information, ideas, and influence. In particular, understanding the dynamics of these spreading events can pave the way to better marketing strategies for products and ideas to go viral.
Thus far we have made good progress using analytical approaches, and we are now shifting the focus slightly toward data-driven and algorithmic approaches.
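The notion of complex contagion can be sketched with a simple threshold model on a toy graph. The adoption threshold k = 2 and the four-node network are illustrative assumptions:

```python
def complex_contagion(adj, seeds, k=2, rounds=10):
    """Threshold model: a node adopts only after at least k of its
    neighbors have adopted (complex contagion, vs. k=1 simple contagion)."""
    adopted = set(seeds)
    for _ in range(rounds):
        new = {v for v in adj if v not in adopted
               and sum(1 for u in adj[v] if u in adopted) >= k}
        if not new:
            break
        adopted |= new
    return adopted

# toy network: node 2 sees two adopted neighbors at the start, node 3 only one
adj = {0: [2], 1: [2, 3], 2: [0, 1, 3], 3: [1, 2]}
result = complex_contagion(adj, seeds={0, 1}, k=2)
```

With two seeds the cascade reaches everyone, while a single seed spreads nowhere under the same threshold, which is exactly the multiple-exposure effect described above.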
Intelligent Dynamic Monitoring and Decision Systems (iDyMonDS)
Funded with a three-year, $1.2 million grant from the National Institute of Standards and Technology (NIST)
The multidisciplinary research team is creating a test environment to demonstrate IT-enabled, data-driven protocols and the introduction of more interactive binding protocols between traditional utilities and new technologies — intermittent power and responsive demand, in particular — as a means to provide electric power reliably and efficiently. The "smart grid in a room" will be a test bed to see how cyber-physical systems interact with utilities, with the hybrid setup potentially realistically mimicking a large electric energy system. Data collected from real-world instrumentation would ultimately help determine the value of new technologies and their impact on the quality and cost of electricity services, sustainability, and the potential for reducing pollution. The team will use this laboratory-level smart grid facility to guide the design and adoption of iDyMonDS-based protocols as the next-generation utility Supervisory Control and Data Acquisition (SCADA) cyber-physical electric energy system architecture.
MARS real-time motion capture and muscle fatigue monitoring tool
- Pei Zhang
- Cynthia Kuo, Vibrado Technologies
- Quinn Jacobson, Vibrado Technologies
A multiple-point body sensing system for fine-grained skeletal muscle sensing, such as determining individual muscle activation/relaxation or individual muscle fatigue. The system can also provide accurate body motion capture and tracking to detect poor posture or exercise form, as well as inefficiencies in human motion. It is designed for easy setup, is simple to put on and take off, does not impede the action being performed, and is easily expandable to monitor many body segments and skeletal muscles.
MARS uses inertial sensors to detect body motion
- Accelerometer - Detect gravity
- Magnetometer - Detect absolute spatial direction
- Gyroscope - Detect body rotation speed
The three-tier system architecture connects the sensor node network to a mobile data aggregator, then to a back-end server for muscle activity recognition and motion tracking and visualization using an animated rendered model.
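One common way such inertial measurements are fused into an orientation estimate is a complementary filter, sketched below. This is an illustrative technique choice, not necessarily the MARS pipeline; the sample rate and mixing coefficient are assumptions:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rate (responsive but drifts over time) with
    accelerometer-derived tilt (noisy but drift-free) into one angle."""
    angle = accel_angles[0]
    out = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # integrate the gyro, then pull gently toward the accelerometer tilt
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        out.append(angle)
    return out

# stationary limb: gyro reads ~0 deg/s, accelerometer reads a steady 10-degree tilt
est = complementary_filter([0.0] * 200, [10.0] * 200)
```

The high-pass/low-pass split (alpha vs. 1 - alpha) is what lets a cheap wearable node track fast motion without accumulating gyro drift.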
Mera: Memoized ranged systematic software analyses
Funded by the National Science Foundation
- Corina Pasareanu
- Sarfraz Khurshid (University of Texas at Austin)
As software pervades our society and lives, failures due to software bugs become increasingly costly. Scalable approaches for systematically checking software to find crucial bugs hold a key to delivering higher-quality software at a lower cost. Mera is a methodology to scale model checking and symbolic execution, two powerful approaches for systematic software analysis that are known to be computationally expensive.
The project builds on two novel concepts: memoization, which allows re-using computations performed across different checks to amortize the cost of software analysis; and ranging, which allows distributing the analysis into sub-problems of lesser complexity, which can be solved separately and efficiently. Mera consists of three research thrusts. First, the core memoization and ranging techniques for model checking and symbolic execution are developed. Second, these techniques are optimized in the context of different kinds of changes, like the program code, expected properties, or analysis search-depth parameters. Third, these techniques are adapted to effectively utilize available resources for parallel computation using static and dynamic strategies, such as work stealing. Mera will help improve software quality and reliability thus holding the potential to provide substantial economic benefits and to improve our quality of life.
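The two concepts can be sketched on a toy transition system: a memo set avoids re-exploring states already checked, and "ranging" splits the initial states into independently solvable chunks. The state space, depth bound, and chunking scheme here are illustrative assumptions, not Mera's algorithms:

```python
def explore(state, depth, successors, memo):
    """Depth-bounded exploration that memoizes visited (state, depth)
    pairs, so repeated checks reuse earlier work instead of redoing it."""
    if depth == 0 or (state, depth) in memo:
        return 0
    memo.add((state, depth))
    return 1 + sum(explore(s, depth - 1, successors, memo)
                   for s in successors(state))

def ranged_explore(initials, depth, successors, workers=2):
    # "ranging": partition the initial states into sub-problems that could
    # run on separate workers; here they run sequentially over a shared memo
    memo = set()
    chunks = [initials[i::workers] for i in range(workers)]
    return sum(explore(s, depth, successors, memo)
               for chunk in chunks for s in chunk)

succ = lambda n: [(n + 1) % 8, (n + 2) % 8]  # toy transition system on 8 states
visited = ranged_explore([0, 4], depth=3, successors=succ)
```

The point of the sketch is the amortization: a second check over the same memo does no new work, which is what makes re-checking after a small change cheap.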
Parallel and distributed algorithms for intelligent systems
In the area of parallel and distributed algorithms for intelligent systems, we take advantage of opportunities that have emerged due to recent dramatic improvements in parallel and distributed hardware and software. The emergence of graphics processing units (GPUs) as a general computing platform is one important trend; another is the introduction of the MapReduce architecture along with its open-source Hadoop implementation. We have taken advantage of GPUs to perform faster junction tree propagation through parallelism. Compiling Bayesian networks (BNs) to junction trees and performing belief propagation over them is among the most prominent approaches to computing posteriors in BNs. We develop data structures and algorithms that extend existing junction tree techniques, and specifically develop a novel approach to computing each belief propagation message in parallel. We have so far achieved speedups of up to approximately 10x; however, the speedup depends strongly on the structure of the junction tree. Using MapReduce and Hadoop, we have developed Bayesian network parameter learning algorithms and shown how they can dramatically speed up machine learning, for both complete and incomplete data sets. We have also developed a distributed variant of the text mining system Unsupervised Semantic Parsing (USP), which we call Distributed Unsupervised Semantic Parsing (DUSP). DUSP improves on USP's ability to handle large text corpora by distributing several of USP's key algorithmic steps over a cluster of commodity computers. In experiments with DUSP we processed a corpus that was over 13 times larger than the largest corpus we were able to handle using USP.
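The MapReduce flavor of BN parameter learning (for complete data) can be sketched as counting sufficient statistics per shard and merging them. The two-variable network, data, and sharding below are illustrative assumptions:

```python
from collections import Counter
from functools import reduce

def map_counts(records):
    """Map step: each worker counts (parent, child) value configurations
    in its own shard of the data."""
    return Counter((r["A"], r["B"]) for r in records)

def reduce_counts(partials):
    # Reduce step: merge per-shard counts into global sufficient statistics
    return reduce(lambda a, b: a + b, partials, Counter())

data = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 1, "B": 1}, {"A": 0, "B": 1}]
shards = [data[:2], data[2:]]  # pretend these live on two cluster machines
counts = reduce_counts(map_counts(s) for s in shards)

# maximum-likelihood estimate of P(B=1 | A=0) from the merged counts
p = counts[(0, 1)] / (counts[(0, 0)] + counts[(0, 1)])
```

Because the map step touches each record once and the reduce step only merges small count tables, the same structure scales to Hadoop-sized data sets.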
Robust connectivity in the presence of insecure and unreliable links in ad-hoc networks
This research area focuses on the connectivity and robustness properties of secure wireless ad-hoc networks by combining techniques from network science (e.g., graph theory), stochastic geometry, and probability theory. The main expected outcomes are practical design guidelines (in terms of network parameter choices) that can provably ensure the desired level of connectivity, security, and robustness in a network in the presence of insecure and unreliable links. In particular, this research is motivated by the observation that current modeling techniques, which are predominantly based on random geometric graphs (RGGs), are insufficient to accurately model a real-world network. In an RGG, nodes are placed in a Euclidean plane and two nodes are joined by a link if they are within a certain distance of each other. However, in wireless networks that utilize random key pre-distribution, two nodes that are close to each other in the RGG may not be able to communicate securely, since they do not necessarily share a key. Furthermore, RGGs fail to model realistic wireless communication, where links may also fail due to the presence of physical barriers between nodes or because of harsh environmental conditions severely impairing transmission.
A realistic model of robust wireless networks will have to address connectivity, security, and reliability of communication using intersection graphs, for example (1) the intersection of an RGG and a random key graph, to analyze the connectivity of secure wireless networks, and (2) the intersection of an RGG and an Erdős-Rényi graph, to analyze connectivity in wireless ad-hoc networks where links are unreliable and may fail independently with a certain probability. Our research investigates the properties of intersection graphs using advanced methods and tools of random graph theory and stochastic geometry, and develops conditions on the network parameters (e.g., number of nodes, density of nodes in the deployed area, link failure probability, number of keys per node, key pool size) that ensure the resulting networks are k-connected with very high probability.
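Model (1) can be sampled directly: an edge exists only when two nodes are both within radio range and share a key. The sketch below checks simple connectivity (k = 1); all numeric parameters are illustrative assumptions:

```python
import math
import random
from collections import deque

def intersection_graph(n, radius, pool_size, keys_per_node, seed=0):
    """Sample the intersection of an RGG on the unit square and a random
    key graph: an edge requires both proximity and a shared key."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    keys = [set(rng.sample(range(pool_size), keys_per_node)) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            close = math.dist(pos[i], pos[j]) <= radius
            if close and keys[i] & keys[j]:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def is_connected(adj):
    # breadth-first search from node 0; connected iff every node is reached
    seen, queue = {0}, deque([0])
    while queue:
        for v in adj[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

adj = intersection_graph(n=60, radius=0.5, pool_size=20, keys_per_node=8, seed=1)
```

Sweeping the parameters of such a sampler and recording the fraction of connected instances is one empirical way to locate the connectivity thresholds that the analytical results characterize.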
Robust, optimal design of interdependent, multi-layer, and multiplex networks
Today's Internet is one of the largest and most complex systems ever created. Complexity is now the limiting constraint in the design, engineering, and operation of large-scale networks and communication protocols. A direct consequence of complexity is that the correctness, security, and availability of the Internet cannot be guaranteed. For example, a failure in one part of the Internet may cause widespread failures and performance degradation of critical services in a very different part of the network. A major consequence of such interdependence is that systems and networks are often more fragile in the face of node failures, attacks, and natural hazards than their isolated counterparts. For example, failures in one network or network node may propagate to other networks and vice versa, leading to a cascade of failures that could potentially collapse an entire infrastructure. However, most existing network-science research focuses on single, isolated networks, and thus lacks the methods and tools necessary to address vulnerabilities of even simple interdependent networks. In fact, limited preliminary research on networks of networks has already demonstrated great potential for advancing the state of the art in building robust and resilient systems. Preliminary findings suggest that there are fundamental differences in the behaviors of networks of networks as compared to individual networks. For instance, CMU research has already demonstrated that a network design that is optimal in countering adversarial attacks in a single network could be a catastrophic choice for the resiliency of interdependent networks.
We work on several problems in these directions through interdisciplinary research combining theoretical analyses with empirical studies and algorithm design. Specific research problems include i) design parameters and conditions that achieve optimal robustness and resiliency in a network of networks against random attacks as well as targeted attacks, ii) accurate interdependent network models motivated by real-world applications, iii) accurate and realistic node-failure models inspired by practical cases, iv) new metrics to evaluate robustness in interdependent structures, v) algorithms to determine the vulnerable points in an interdependent network, vi) recovery and healing strategies for systems that are under attack, and vii) robustness analysis of systems that are under physical and cyber attack simultaneously.
Robust wireless communications
Understanding Next-Generation Jamming Attacks: Understanding and modeling jamming attacks has long been a problem of interest in wireless communication and radar systems. In wireless ad hoc, mesh, and sensor networks using multi-hop communication, the effects of jamming at the physical layer resonate into the higher layer protocols, for example by increasing collisions and contention at the MAC layer, interfering with route discovery at the network layer, increasing latency and impacting rate control at the transport layer, and halting or freezing at the application layer. Adversaries that are aware of higher-layer functionality can leverage any available information to improve the impact or reduce the resource requirement for attack success. For example, jammers can synchronize their attacks with MAC protocol steps, focus attacks in specific geographic locations, or target packets from specific applications. Moreover, these same offensive techniques can be used to defend against other threats.
Efficient Defense and Mitigation of Jamming Attacks: The introduction of software-defined radios into the marketplace allows for new types of software-defined signal processing, communication, and networking techniques to assist in mitigating jamming attacks in a variety of ways. Such approaches often involve the use of cross-layer information to correctly diagnose and repair the protocol stack or to share information and signals through network interfaces. We have studied the use of software filtering mechanisms to identify and eliminate jamming signal at the receiver without obliterating the desired signal, and we have demonstrated the value of such approaches in an SDR testbed implementation. In addition, we have developed cross-layer protocol modifications that use higher-layer indicators of network failure in cases where the attacks cannot be explicitly detected. Toward this end, we have shown that multi-channel communication, multi-path routing, and adaptive transport protocols can be employed to achieve a level of diversity that aids in mitigating jamming attacks, even when jammers are mobile and relatively stealthy.
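The diversity argument behind multi-channel communication can be illustrated with a small Monte Carlo sketch. The channel counts and jamming fractions are assumptions chosen for illustration, not measured testbed values:

```python
import random

def delivery_rate(n_channels, jammed_fraction, trials=10000, seed=0):
    """Monte Carlo estimate: the sender hops uniformly among n channels
    while a jammer blocks a fixed fraction of them each transmission."""
    rng = random.Random(seed)
    n_jammed = round(n_channels * jammed_fraction)
    ok = 0
    for _ in range(trials):
        jammed = set(rng.sample(range(n_channels), n_jammed))
        ok += rng.randrange(n_channels) not in jammed
    return ok / trials

single = delivery_rate(1, 1.0)  # one channel, fully jammed: nothing gets through
hopped = delivery_rate(8, 0.5)  # hopping over 8 channels, half jammed: ~50% per hop
```

A single jammed channel is fatal, while hopping restores throughput proportional to the fraction of clear channels; layering multi-path routing and adaptive transport on top compounds the same diversity effect.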
Modeling Interactions between Attackers and Defenders: As described above, new software-defined radio and network capabilities allow attacking, greedy, or defending opponents to adapt their protocol operations or behaviors in response to detected threats or failures, or even proactively, in a sort of moving-target defense technique. The value of such adaptation is often clearly understood as long as the opponent is static, as convergence properties can be well defined in such a case. However, if multiple opponents are simultaneously adapting in response to each other, the system dynamics cannot be as easily described. We thus aim to model such multi-player adaptation using a combination of game theory, control theory, empirical data analytics, and stochastic modeling.
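A minimal game-theoretic sketch of mutually adapting opponents is fictitious play, where each side best-responds to the other's empirical behavior. The two-channel zero-sum payoff matrix below is an illustrative assumption, not a model from this project:

```python
def fictitious_play(payoff, rounds=2000):
    """Both players repeatedly best-respond to the opponent's observed
    channel-choice frequencies. payoff[i][j] is the defender's gain when
    the defender transmits on channel i and the jammer strikes channel j."""
    n = len(payoff)
    def_counts, jam_counts = [1] * n, [1] * n
    for _ in range(rounds):
        # defender best-responds to the jammer's empirical mix
        d = max(range(n),
                key=lambda i: sum(payoff[i][j] * jam_counts[j] for j in range(n)))
        # jammer best-responds against the defender's empirical mix (zero-sum)
        j = max(range(n),
                key=lambda jj: -sum(payoff[i][jj] * def_counts[i] for i in range(n)))
        def_counts[d] += 1
        jam_counts[j] += 1
    total = sum(def_counts)
    return [c / total for c in def_counts]

# 2-channel game: a transmission succeeds (payoff 1) unless the jammer guesses right
payoff = [[0, 1], [1, 0]]
mix = fictitious_play(payoff)
```

For this matching-pennies-style game the empirical frequencies drift toward the mixed equilibrium (each channel about half the time), which is the kind of steady-state behavior the project seeks to characterize when neither opponent is static.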
Security and privacy in mobile devices
Telecommunication System Security: Telecommunication systems have evolved significantly and rapidly in recent years. Since much of the evolution has been pushed by customer demand for higher quality and faster data rates, service providers have often overlooked threats and vulnerabilities in their system designs in favor of faster response to customer needs. In an effort to push for more reliable and resilient telecommunications, we have studied a number of such threats and vulnerabilities that can be addressed using practical techniques that do not incur significant overhead or modification to production systems. At a more fundamental level, we are also investigating deeper redesign of the telecommunication infrastructure to study alternatives that could drastically improve service quality to users, costs to service providers, and capabilities for all parties.
Security & Privacy in Mobile Apps and Services: Smartphones have forever changed the mobile telecommunication and computing landscape. Mobile operating systems now support a diverse set of applications and services provided by major software providers as well as third-party developers around the globe. However, the unique system-of-systems nature of smartphones and tablets, comprising communication, networking, sensing, actuation, storage, navigation, and various other features, breaks the typical computer security, communication security, and network security definitions and forces a drastic re-imagining of mobile security and privacy. Our work focuses on those aspects of mobile security that do not align with the existing definitions and models.
Secure and resilient networking
Cross-Layer Self-Organization for Survivable Wireless Networking: Wireless mesh and ad hoc networks are now being proposed for a number of critical data collection and dissemination scenarios, including traditional data and content systems as well as cyber-physical infrastructure (e.g., Smart Grid, Internet of Things). In many of these scenarios, timely delivery of data and control messages can be as critical as the messages themselves, so traditional security mechanisms are insufficient. We thus approach the problem from the network perspective, aiming to provide fast and efficient self-organizing, self-healing, and self-reconfiguring capabilities to allow wireless systems to seamlessly manage and heal themselves. Our approaches rely on cross-layer information sharing to provide efficient and scalable solutions.
Secure and Resilient Networking and Data Transport: Wireless networks enable flexible deployment, and wireless meshes enhance this flexibility by further eliminating the need for complete coverage by access points or base stations. Wireless meshes offer additional resilience in the form of path redundancy and diversity, providing stronger protections against network failure or denial-of-service. This resilience therefore enables support for unique applications and capabilities for emergency/disaster communication, underserved areas, or to provide cellular offload. We are developing an integrated architecture for heterogeneous mesh protocols and mesh nodes to form one unified mesh system to seamlessly support a wide variety of applications and usage scenarios, even in the face of mobility, malicious and selfish behavior, or strong (and potentially dynamic) policies on security, privacy, and anonymity. In this new architecture, we are investigating a variety of threats and issues in an effort to design a suite of protocols to provide end-to-end resilience and security.
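As a toy illustration of the path redundancy and diversity that meshes provide, the sketch below finds a shortest route and then a second route that avoids the first route's relay nodes. The greedy two-pass search and the sample topology are invented for illustration; a max-flow computation would find the true maximum number of disjoint paths.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    # shortest hop-count path from src to dst, avoiding banned relay nodes
    if src in banned or dst in banned:
        return None
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:        # walk predecessors back to src
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None

def diverse_paths(adj, src, dst):
    # Greedy diversity: take a shortest path, then ban its relay nodes and
    # search again, yielding up to two node-disjoint routes.
    first = bfs_path(adj, src, dst)
    if first is None:
        return []
    second = bfs_path(adj, src, dst, banned=frozenset(first[1:-1]))
    return [first] if second is None else [first, second]
```

On a four-node diamond mesh this returns two routes sharing no relays, so a jammed or failed relay on one route leaves the other intact.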
Security and Privacy in Sensor-Actuator Networks: The deployment of wireless sensor-actuator networks for physical sensing and control introduces a number of unique challenges, namely due to the cyber-physical implications of sensing and actuation. Moreover, the small form factor, reliance on battery energy, deployment of unattended devices in potentially challenging or harmful environments, and criticality of timely delivery of control signals further complicate any approach to secure the sensing and actuation processes. Recently, sensor-actuator networks have found new applications in vehicular systems, smart environments, and immersive media applications. Each of these domains presents new requirements and challenges to security and privacy, and we are actively investigating many of them.
Semantic harmonization of smart grid concepts
- Principal Investigator: Steven Ray
- Partner: TopQuadrant, Inc.
- Sponsor: National Institute of Standards and Technology
This project applies semantic modeling techniques to demonstrate how the smart grid community can manage the large challenge of harmonizing definitions and relations among the several hundred standards that currently constitute the suite of smart grid standards under consideration or development. Our research captures definitions, relations and constraints in a machine-computable form (using the ontology modeling and query languages RDF, OWL, SPIN and SPARQL) for one of these standards, which addresses energy usage information. We have deployed the transformed model, along with a web application to query it, on the cloud for use by the smart grid community. The tool provides automated reasoning to uncover contradictions and inconsistencies in the standard, collect model statistics and metrics, and, eventually, align terms across different standards. We have found that obscure errors that had escaped notice during UML modeling were more easily discovered with this approach. Current work involves the transformation and modeling of additional smart grid standards to determine the generality of our approach.
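The sketch below mimics, in miniature, the kind of consistency check such automated reasoning performs: a toy triple store with SPARQL-style pattern matching flags a term that two standards define with conflicting units. The term and predicate names are invented for the demo; the real system queries RDF/OWL models with SPARQL.

```python
from collections import defaultdict

class TripleStore:
    # minimal subject-predicate-object store standing in for an RDF graph
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # SPARQL-style triple pattern: None acts as a wildcard variable
        return [(ts, tp, to) for (ts, tp, to) in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

def unit_contradictions(store):
    # A term defined with two different units across standards is exactly
    # the kind of inconsistency automated reasoning can surface.
    units = defaultdict(set)
    for s, _, o in store.query(p="hasUnit"):
        units[s].add(o)
    return {term: u for term, u in units.items() if len(u) > 1}
```

Loading two standards' definitions of an (invented) "EnergyUsage" term with different units makes the contradiction check fire, while consistently defined terms pass.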
This project develops a unique and low-cost controlled-mobile aerial sensor networking platform. The cloud sensing system uses miniature helicopters equipped with multiple sensors, such as a gyroscope and an electronic compass, to assist in emergency situations. A flock of these 29 g autonomous helicopter nodes, with communication, ranging and collaborative path-determination capabilities, can help locate survivors after disasters or adversaries in urban combat scenarios.
Social networks: data analysis and machine learning
Social network analytics, machine learning, and visualization form one of our research areas. We have analyzed a large mobile phone dataset, consisting of millions of call data records (CDRs), provided by one of the major telecom operators. Using this dataset, we have investigated the different social connections, or ties, that are reflected in such datasets.
Social ties defined by phone calls made between people can be grouped into various affinity networks, such as family members, utility networks, friends, coworkers, etc. An understanding of call behavior within each social affinity network, and the ability to infer the type of a social tie from call patterns, is invaluable for various industrial purposes. We analyzed the patterns of 4.3 million phone call data records produced by 360,000 subscribers from two California cities, San Francisco and Modesto, and found features that are highly predictive of the type of social tie, independent of the city. Armed with this knowledge, we have also identified promising machine learning classification approaches as well as several potential applications in telecom and security.
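A hedged sketch of the feature-extraction step: the features and the threshold rule below are invented stand-ins for the learned classifiers, but they illustrate how call timing, duration, and reciprocity can separate affinity networks.

```python
from statistics import mean

def tie_features(calls):
    # calls: list of (hour_of_day, duration_seconds, callee_initiated) tuples
    # for one pair of subscribers; the feature set is illustrative only
    evening_frac = mean(1.0 if 18 <= h or h < 8 else 0.0 for h, _, _ in calls)
    avg_duration = mean(d for _, d, _ in calls)
    # reciprocity: 1.0 when both sides initiate calls equally often
    initiated = mean(1.0 if c else 0.0 for _, _, c in calls)
    return {"evening_frac": evening_frac,
            "avg_duration": avg_duration,
            "reciprocity": min(initiated, 1 - initiated) * 2}

def guess_tie(calls):
    # crude two-class rule standing in for a trained classifier
    f = tie_features(calls)
    if f["evening_frac"] > 0.5 and f["avg_duration"] > 300:
        return "family"
    return "coworker"
```

Long evening calls score as "family" and short business-hours calls as "coworker"; a real classifier would be trained on many more features and labeled ties.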
Stochastic Optimization Using Stochastic Local Search: Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solving computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms, including Stochastic Greedy Search (SGS), enabling us to derive expected hitting time results that explain empirical run time results.
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with a focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using SGS. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm for MPE computation in BNs.
SLS algorithms typically have a number of parameters, optimized empirically, that characterize and determine their performance. To complement existing experimental results, we formulate and analyze several Markov chain models of SLS. In particular, we compute expected hitting times and show that they are rational functions for individual problem instances as well as for their mixtures. We believe these results provide an improved theoretical understanding of the role of noise in stochastic local search, thereby providing a foundation for further progress in this area.
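The hitting-time computation itself is standard linear algebra: with an absorbing goal state, the expected hitting times satisfy h_i = 1 + sum_j P[i][j]*h_j. The sketch below solves this system for a small chain; for a geometric chain that succeeds with probability p it returns 1/p, a rational function of the noise parameter.

```python
def expected_hitting_times(P, goal):
    # Solve h_i = 1 + sum_j P[i][j] * h_j for i != goal, with h_goal = 0,
    # i.e. (I - P_restricted) h = 1, by Gaussian elimination with pivoting.
    n = len(P)
    idx = [i for i in range(n) if i != goal]
    m = len(idx)
    # augmented matrix [I - P_restricted | 1]
    A = [[(1.0 if idx[r] == idx[c] else 0.0) - P[idx[r]][idx[c]]
          for c in range(m)] + [1.0] for r in range(m)]
    for col in range(m):                       # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    h = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        h[r] = (A[r][m] - sum(A[r][c] * h[c]
                              for c in range(r + 1, m))) / A[r][r]
    times = {goal: 0.0}
    for k, i in enumerate(idx):
        times[i] = h[k]
    return times
```

For a two-state chain that reaches the goal with probability p = 0.25 per step, the expected hitting time is 1/p = 4 steps, matching the geometric distribution.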
Stochastic Optimization Using Genetic Algorithms: Crowding is a technique used in genetic algorithms to preserve diversity in the population and to prevent premature convergence to local optima. The present work focuses on the replacement phase of crowding, which has usually been carried out by one of three approaches: deterministic, probabilistic, and simulated annealing. Theoretical analysis using Markov chains and empirical evaluation using Bayesian networks demonstrate the potential of our novel Generalized Crowding approach. This project also includes a novel niching algorithm, probabilistic crowding, which we identify as a member of a family of algorithms that we call integrated tournament algorithms. We also study niching using crowding techniques in the context of what we call local tournament algorithms. In addition to deterministic and probabilistic crowding, the family of local tournament algorithms includes the Metropolis algorithm, simulated annealing, restricted tournament selection, and parallel recombinative simulated annealing. In probabilistic crowding, sub-populations are maintained reliably, and we show that it is possible to analyze and predict how this maintenance takes place. We also provide novel results for deterministic crowding, show how different crowding replacement rules can be combined in portfolios, and discuss population sizing. A problem with the traditional evolutionary approach is this: as the number of constraints determined by the zeros in the conditional probability tables grows, performance deteriorates because the number of explanations whose probability is greater than zero decreases. To minimize this problem, we present and analyze a new evolutionary approach to abductive inference in BNs. Genetic algorithms typically use crossover, which relies on mating a set of selected parents; as part of crossover, random mating is often carried out.
A novel approach to parent mating is presented in this work. It can be applied in combination with a traditional similarity-based criterion for measuring distance between individuals, or with a fitness-based criterion. In the domain of real-function optimization, experiments show that, as the degree of multimodality of the function at hand grows, it pays to increase the mating index in order to obtain good performance.
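The replacement phase discussed above can be sketched as two rules applied after a child has been paired with its most similar parent (the pairing step is omitted here); the fitness-proportional lottery is what lets probabilistic crowding maintain subpopulations in proportion to fitness.

```python
import random

def deterministic_crowding(parent, child, fitness):
    # the child replaces its most similar parent only when strictly fitter
    return child if fitness(child) > fitness(parent) else parent

def probabilistic_crowding(parent, child, fitness, rng=random):
    # replacement by a fitness-proportional lottery between parent and child,
    # so subpopulations on different optima survive in proportion to fitness
    fp, fc = fitness(parent), fitness(child)
    if fp + fc == 0:
        return rng.choice([parent, child])
    return child if rng.random() < fc / (fp + fc) else parent
```

With fitnesses 1 and 3, probabilistic crowding keeps the fitter child about three times in four, whereas deterministic crowding keeps it always; that softer rule is what preserves diversity.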
Survivable Social Network
The Survivable Social Network project helps communities in disaster situations re-establish communications by providing a network of small nodes, each installed and maintained by regular citizens in their own neighborhoods. Each node provides neighbors with social networking communications on their smartphones; ultimately, neighborhood nodes will allow users to find the status of family members, report damage, offer help to neighbors, and get updates from the city, schools and other organizations. The SSN includes a Web application in HTML5 for users and a mesh network of access points to create the "survivable" network.
Verification and validation
System Health Management (SHM) systems have found their way into many safety-critical aerospace and industrial applications. An SHM system processes readings from sensors throughout the system and uses a Health Management (HM) model to detect and identify potential faults (diagnosis) and to predict possible failures in the near future. In this project, we describe an advanced technique for the analysis and V&V of Health Management models. We are investigating the use of Parametric Testing (PT), which uses a combination of n-factor and Monte Carlo methods, to exercise our HM model with variations of perturbed parameters. Our approach can yield valuable insights regarding the sensitivity of parameters and helps to detect safety margins and boundaries. We also present an architecture and a formal framework for systematic benchmarking of monitoring and diagnosis systems and for producing comparable performance assessments of different diagnosis technologies. The project focuses on the Advanced Diagnostics and Prognostics Testbed (ADAPT) at NASA Ames Research Center, whose purpose is to measure, evaluate, and mature diagnostic and prognostic health management technologies. The project also covers the testbed's hardware, software architecture, and concept of operations, as well as a simulation testbed that accompanies ADAPT and some of the diagnostic and decision-support approaches being investigated.
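A minimal sketch of the Monte Carlo side of parametric testing, with an invented one-threshold health monitor: nominal parameters are perturbed at random, a fault is injected, and the fraction of undetected faults estimates how close the design sits to its safety boundary. All numbers below are illustrative.

```python
import random

def monitor_detects(reading, threshold):
    # toy health-management rule: flag a fault when the sensor reading
    # exceeds the (possibly drifted) detection threshold
    return reading > threshold

def miss_rate(fault_reading=10.0, nominal_threshold=8.0,
              noise_sd=2.0, trials=10000, rng=None):
    # Monte Carlo parametric testing: perturb the threshold (calibration
    # drift) and the sensed value, then estimate how often the injected
    # fault goes undetected
    rng = rng or random.Random(42)
    misses = 0
    for _ in range(trials):
        threshold = rng.gauss(nominal_threshold, noise_sd)
        reading = rng.gauss(fault_reading, noise_sd)
        if not monitor_detects(reading, threshold):
            misses += 1
    return misses / trials
```

With the nominal threshold close to the fault signature, a sizable fraction of faults slip through under perturbation; lowering the threshold well below the fault level drives the estimated miss rate toward zero, exposing the safety margin.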