Carnegie Mellon University

Research

Carnegie Mellon’s Institute for Software Research (ISR) hosts an active research group with a highly interdisciplinary approach to software engineering. Indeed, we believe that interdisciplinary work is inherent to software engineering. The field of software engineering (SE) is built on computer science fundamentals, drawing from areas such as algorithms, programming languages, compilers, and machine learning. At the same time, SE is an engineering discipline: both the practice of SE and SE research problems revolve around technical solutions that successfully resolve conflicting constraints. As such, trade-offs between costs and benefits are an integral part of evaluating the effectiveness of methods and tools.

Emerging problems in privacy, security, and mobility pose many of the challenges faced by today's software engineers and motivate new solutions in SE research. Because software is built by people, SE is also a human discipline, so research in the field draws on psychology and other social sciences as well. Carnegie Mellon faculty bring expertise from all of these disciplines to bear on their research, and we emphasize this interdisciplinary approach in our REU Site. Below, you'll find projects we are planning for Summer 2021.

Mentor: Brad Myers

Description and significance
When learning to use a new piece of software, developers typically spend a large amount of time reading, understanding, and trying to answer questions with online materials. To help developers keep track of important, confusing, and useful information, with the intent of sharing their learning with future developers, we developed Adamite, a social annotation tool packaged as a Google Chrome extension. For more information, you can read our recently submitted paper.

Student Involvement
While Adamite is already successful as an extension, we want you to help us refine its current features and to design and build new ones depending on your own interests. Possible project focuses include: extending Adamite to work in a code editor like Visual Studio Code or on a mobile device; adding features that make annotations even easier and more beneficial to the author and to subsequent users, such as using intelligent techniques to cluster related annotations (sketched below); studying developers' actual usage of annotations during a software learning task; or porting Adamite to entirely new domains beyond programming, like shopping. Working on this project may result in authorship on a publication at a prestigious conference such as CHI, UIST, or CSCW, and we hope to release Adamite as an open-source project for general use.
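As a flavor of the clustering direction, here is a minimal, hypothetical Python sketch that relates annotations by simple bag-of-words similarity. Adamite itself is a Chrome extension, so a real implementation would live in JavaScript and would likely use richer features such as embeddings; the annotation texts below are invented:

```python
from collections import Counter
import math

annotations = [
    "confusing: the API ignores the timeout parameter",
    "the timeout parameter only applies to reads",
    "useful: install the tool with npm install -g",
]

def vec(text):
    # Bag-of-words vector: word -> count
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

pairs = [(i, j) for i in range(len(annotations))
                for j in range(i + 1, len(annotations))]
best = max(pairs, key=lambda p: cosine(vec(annotations[p[0]]),
                                       vec(annotations[p[1]])))
print("most related pair:", best)  # (0, 1): both discuss the timeout parameter
```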

Mentor: Travis Breaux

Description and Significance
Consumer product and service design is shifting toward mass personalization, in which configurability is driven by the affective and cognitive needs of users. Software personalization, in particular, appears in the form of news and product recommender systems and personalized search, but it also has the capacity to influence personalized learning [REF]. This shift has been enabled by pervasive and ubiquitous computing and user experience design, and supported by new types of sensor-based and app-usage data. Consequently, increased personalization raises new challenges concerning user privacy [1], social justice, and transparency [2].

What a stakeholder values when performing a task, and how they express these values, is a rich source of requirements that can be used to personalize software. One approach to discover values is to elicit stakeholder scenarios, in which the stakeholder describes the steps that they take while completing a task. Because stakeholder preferences can be vague, unstated or unclear [3, 4], tools are needed to guide stakeholders during elicitation. This research project applies natural language processing and question-answering techniques to collect and refine scenarios with the aim to discover stakeholder values and preferences for improved personalization.

Student Involvement
Students will have the opportunity to participate in all aspects of the research, including designing user studies to collect scenarios, annotating scenarios to build a reusable corpus for conducting experiments, and researching and implementing algorithms to analyze scenario texts. The analysis will aim to identify ambiguity and vague phrases in text and to generate questions that elicit improvements to the text, resolving those ambiguities and improving the extraction of value-based requirements.
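As a toy illustration of one analysis step, the sketch below flags vague phrases using a small hand-written lexicon and turns each hit into a clarifying question; the actual research would replace the lexicon and question templates with NLP and question-generation models in the spirit of [3, 4]:

```python
import re

# Hand-written vague-phrase lexicon (illustrative only).
VAGUE = ["quickly", "some", "appropriate", "as needed"]

scenario = ("I open the app, check some notifications, and respond "
            "quickly when appropriate.")

for term in VAGUE:
    for match in re.finditer(re.escape(term), scenario):
        print(f"vague phrase '{term}' at offset {match.start()}: "
              f"ask: what does '{term}' mean here (how much, how fast, "
              f"or under what conditions)?")
```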

[1] Alfred Kobsa, “Privacy-enhanced personalization,” Communications of the ACM, 2007.
[2] Davoud Mougouei, Harsha Perera, Waqar Hussain, Rifat Shams, John Whittle, “Operationalizing human values in software: a research roadmap,” Foundations of Software Engineering, pp. 780–784, 2018.
[3] Jaspreet Bhatia, Morgan C. Evans, Travis D. Breaux. “Identifying incompleteness in privacy policy goals using semantic frames.” RE Journal, 24(3): 291-313, 2019.
[4] Jaspreet Bhatia, Travis D. Breaux, Joel R. Reidenberg, Thomas B. Norton. “A theory of vagueness and privacy risk perception.” IEEE RE, pp. 26-35, 2016.

Mentors: Vincent Hellendoorn and Rohan Padhye

Description and Significance
Writing tests is tedious, especially for large, complex systems. Tools that can generate them automatically are therefore a great asset, and these (esp. “fuzzers”) have indeed had great success at uncovering new bugs in critical systems. Unfortunately, they also do a horrible job of producing tests that are human-readable. This makes debugging the discovered faults difficult, and ultimately makes the test tools themselves unusable. The lack of readability is mainly a lack of good metrics: fuzzers optimize for properties like coverage, compilability, and bug-detection potential. To capture human-relevant properties, we need to turn to deep learning. In this project, we will use language models of code [1] to automatically improve the readability of generated test cases, using a model trained on the vast amount of test code available in the open-source domain. This model can act as a guide for transforming test cases, from renaming variables or annotating tests with comments [4] to completely overhauling their structure, without losing their original quality.

Student Involvement
This project is currently in its initial phases. Students will drive the major phases of this project:
1) Collect a large corpus of testable projects and their test suites from GitHub, automatically building as many of these as possible (typically ca. 50%) using Docker containers.
2) Train a large, deep language model on their test code.
3) Augment one or more existing test generation tools, such as FuzzFactory [2] or EvoSuite [3], incorporating the readability perceived by the learned model into their objective function, either instead of or alongside traditional metrics like code coverage (a possible shape for this objective is sketched after this list).
4) Finally, perform an evaluation to measure the similarity between auto-generated and human-authored tests, comparing their readability as judged both by the language model and by a (small) panel of human evaluators. Successful completion of this project will yield automatically generated tests that are indistinguishable from their human-authored counterparts.
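For concreteness, here is a minimal Python sketch of how step 3's objective might combine coverage with model-perceived readability; the `language_model.log_prob` API and the whitespace tokenizer are placeholders, not an actual tool interface:

```python
import math

def readability(test_source, language_model):
    """Naturalness of a test under a code language model: lower perplexity
    means more human-like code. `language_model.log_prob` is a placeholder
    API returning the total log-probability of a token sequence."""
    tokens = test_source.split()  # placeholder tokenizer
    perplexity = math.exp(-language_model.log_prob(tokens) / max(len(tokens), 1))
    return 1.0 / perplexity

def fitness(test_case, coverage, language_model, alpha=0.5):
    """Combined objective: readability alongside (not instead of) coverage."""
    return (1 - alpha) * coverage(test_case) + \
           alpha * readability(test_case.source, language_model)
```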

[1] A. Hindle, E. T. Barr, Z. Su, M. Gabel, and P. Devanbu. On the naturalness of software. In Proceedings of the International Conference on Software Engineering (ICSE), 2012
[2] R. Padhye, C. Lemieux, K. Sen, L. Simon, and H. Vijayakumar. FuzzFactory: domain-specific fuzzing with waypoints. In Proceedings of the ACM on Programming Languages, (OOPSLA), 2019.
[3] G. Fraser and A. Arcuri. EvoSuite: automatic test suite generation for object-oriented software. In Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering (FSE), 2011.
[4] D. Roy, Z. Zhang, M. Ma, V. Arnaoudova, A. Panichella, S. Panichella, D. Gonzalez, and M. Mirakhorli. DeepTC-Enhancer: Improving the Readability of Automatically Generated Tests. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2020.

Mentor: Yuvraj Agarwal

Description and Significance
We are increasingly surrounded by IoT devices that sense a wide variety of information in their vicinity. Our lab has developed the "Mites.io" platform (check out our website), which senses a wide variety of physical phenomena such as light, sound, vibration, and audio, as well as wireless signals broadcast from nearby devices. Furthermore, our machine learning stack converts these sensed values into intelligent inferences like "a phone rang" or "door open/closed". We have deployed this infrastructure in an actual CMU campus building. Given the wide variety of sensed phenomena, our goal is to study the privacy implications of this sensing infrastructure and devise new privacy abstractions, novel user interfaces to notify building occupants about sensors, privacy-preserving data-analysis algorithms, mobile apps to give users control over what they share with whom, and so on. This will be the largest IoT infrastructure and testbed of its kind.

Student Involvement
The student involved will work closely with two faculty members and one or more of the PhD students on this project. There are numerous aspects of this project that the selected student can work on. For example, you could work on building the privacy primitives in our underlying building operating system (GioTTO), including an app service for third-party apps on our platform. You could also implement new privacy-aware APIs that app developers will use to develop apps around the Mites data (one possible shape is sketched below). Depending on the student's interest and expertise, there are also projects around conducting user studies of apps built on this infrastructure and the accompanying mobile apps. We are looking for candidates who have strong CS or EE fundamentals and are strong at programming. Experience in building systems and/or mobile apps is a plus. Most importantly, the candidate should be interested in building real systems and care about data privacy!
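As a rough illustration of what a privacy-aware API might look like, here is a hypothetical Python sketch in which apps receive high-level inferences rather than raw sensor streams, gated by per-user policies; all names are invented and do not reflect the actual Mites or GioTTO interfaces:

```python
# Invented policy table: (user, channel) -> granted?
POLICIES = {("alice", "occupancy"): True, ("alice", "audio"): False}

def query_mites(channel):
    """Stand-in for the real platform call."""
    return "door_opened"

def get_inference(user, channel):
    """Apps get high-level inferences, never raw streams, and access is
    denied by default unless the user has granted this channel."""
    if not POLICIES.get((user, channel), False):
        raise PermissionError(f"{user} has not granted access to {channel}")
    return query_mites(channel)

print(get_inference("alice", "occupancy"))   # ok -> "door_opened"
# get_inference("alice", "audio")            # raises PermissionError
```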

[1] "Ask The Experts: What Should Be On An IoT Privacy And Security Label?" Pardis Emami-Naeini, Yuvraj Agarwal, Lorrie Faith Cranor, Hanan Hibshi. Oakland S&P 2020 -- The 41st IEEE Symposium on Security and Privacy, 2020.

[2] "PrivacyStreams: Enabling Transparency in Personal Data Processing for Mobile Apps." Yuanchun Li, Fanglin Chen, Toby Jia-Jun Li, Yao Guo, Gang Huang, Matt Fredrikson, Yuvraj Agarwal, Jason I. Hong. Ubicomp 2017 -- ACM Transactions on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), September 2017.

[3] "Toward Building a Safe, Secure, and Easy-to-Use Internet of Things Infrastructure." Yuvraj Agarwal, Anind Dey. IEEE Computer, 2016.

[4] CMU GIOTTO Stack: www.iotexpedition.org

[5] SynergyLabs Mites Stack: www.mites.io

Mentors: Justine Sherry and Weina Wang

Description and significance
Caches are key components of the computer systems toolkit, and they are widely deployed to improve throughput, hide latency, and/or reduce memory pressure. Textbooks tell us that cache requests result in one of two possible outcomes: cache hits and misses. However, when the inter-arrival time between requests is smaller than the latency to the backing store, a third possibility arises: delayed hits. Delayed hits subvert the expectations of traditional caching models, causing existing cache replacement policies to make latency-suboptimal decisions [1].

This research seeks to derive deeper insights into the impact of delayed hits, with two possible directions: 1) a systems-heavy approach, involving the use of simulation/emulation to understand how delayed hits interact with real-world caching systems; or 2) a more theoretical approach, involving proving properties (e.g. NP-Hardness, competitive ratio) about the delayed hits problem and delayed hits-aware offline/online caching algorithms.

Student involvement
Students will first learn about latency-minimizing caching and why/how delayed hits arise. Students with an inclination towards systems research will then design and implement a C++ simulator which models delayed hits in the context of a multi-tier cache hierarchy. Finally, using this simulator, students will explore the impact of delayed hits and caching parameters on various cache replacement policies. Students interested in theory research will dive deep into caching theory and the delayed hits model in order to prove novel theoretical results related to this hitherto-unexplored phenomenon.
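To make the phenomenon concrete, here is a toy Python timeline model of delayed hits; the project itself would build a far more realistic C++ simulator with eviction, multiple tiers, and replacement policies:

```python
class DelayedHitCache:
    """Toy timeline model: a request arriving while a fetch for the same
    object is still in flight is neither a hit nor a full miss; it waits
    only for the remainder of the outstanding fetch (a 'delayed hit')."""
    def __init__(self, fetch_latency):
        self.L = fetch_latency
        self.cached = set()      # eviction ignored for brevity
        self.inflight = {}       # obj -> completion time of pending fetch

    def request(self, t, obj):
        if obj in self.inflight:
            if t < self.inflight[obj]:
                return self.inflight[obj] - t   # delayed hit: partial wait
            del self.inflight[obj]              # fetch finished earlier
        if obj in self.cached:
            return 0                            # ordinary hit
        self.inflight[obj] = t + self.L         # ordinary miss: start fetch
        self.cached.add(obj)
        return self.L

c = DelayedHitCache(fetch_latency=10)
print(c.request(0, "A"), c.request(4, "A"), c.request(20, "A"))  # 10 6 0
```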

[1] Nirav Atre, Justine Sherry, Weina Wang, and Daniel S. Berger. 2020. Caching with Delayed Hits. In Proceedings of the Annual conference of the ACM Special Interest Group on Data Communication (SIGCOMM '20).

Mentor: Jonathan Aldrich

Description and significance
We are exploring safer blockchain programming through a new smart contract language, Obsidian [1]. To motivate and direct our future work on the language, we would like to understand more about how programmers using an existing smart contract language, Solidity, use state machines and tokens representing money in their contracts. We would like to write an analyzer that finds state machines embedded in Solidity programs and looks for cases in which fields are accessed inappropriately or assets are accidentally lost due to a state transition that leaves them unreachable.
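As a first approximation of what such an analyzer might look for, here is a hypothetical Python sketch that uses crude regular-expression heuristics on an invented contract; a real analyzer would parse the Solidity AST rather than pattern-match source text:

```python
import re

SOLIDITY = """
contract Auction {
  enum State { Open, Closed }
  State state;
  function bid() public { require(state == State.Open); }
  function finalize() public { state = State.Closed; }
}
"""

# Crude heuristics: recover the state set, then classify each function by
# whether it guards on the state field or transitions it.
states = re.search(r"enum\s+State\s*{([^}]*)}", SOLIDITY).group(1)
print("states:", [s.strip() for s in states.split(",")])
for name, body in re.findall(r"function\s+(\w+)[^{]*{([^}]*)}", SOLIDITY):
    guards = "require(state" in body
    transitions = re.search(r"state\s*=(?!=)", body) is not None
    print(f"{name}: guards-on-state={guards}, transitions-state={transitions}")
```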

Student involvement
The student will take the lead in designing and carrying out a corpus study as described above, with direction and mentorship from the Obsidian team. It's not necessary to have done a corpus study before, but you should have an interest in programming languages, smart contracts, and empirical studies, as well as good system-building skills. We've had a lot of great involvement in the Obsidian project from past REU students, including 3 coauthors of the paper below [1].

[1] Obsidian: Typestate and Assets for Safer Blockchain Programming. Michael Coblenz, Reed Oei, Tyler Etzel, Paulette Koronkevich, Yannick Bloem, Brad A. Myers, Joshua S. Sunshine, and Jonathan Aldrich. To appear in TOPLAS, 2020.

Mentor: Heather Miller

Description and significance. Machine learning is changing how we interact with the world, but at great cost—privacy. Currently, the majority of machine learning applications require user data to be collected by large companies like Google, Facebook, etc., and stored in company data centers. In addition to being used to make your apps more convenient, the data collected by these companies can be monetized in various ways—via ad technology, or sold outright to other companies. Is there a way to get the benefits of machine learning for improving user experience without giving up a deluge of personal data to these companies? Federated machine learning is a branch of machine learning that aims to keep user data on users' devices and to train complex models without collecting user data. One example is Google's Gboard.

When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated learning processes that history on-device to suggest improvements to the next iteration of Gboard’s query suggestion model. While large companies like Google are able to hire teams of researchers to experiment with federated learning in their products, federated learning remains challenging for developers at large. One issue is that experimenting with federated learning requires not only knowledge of machine learning techniques but also expertise in distributed programming and distributed systems, in order to manage the challenges of federated learning: data distributed unevenly among millions of devices, significantly higher-latency, lower-throughput connections, and devices that are only intermittently available for training. In our research group, we're working on easy-to-use decentralized data replication techniques, like Conflict-free Replicated Data Types (CRDTs), to allow for conflict-free collaborative updates to a federated model. Our approach should make it possible to treat a federated model like any old data structure, with the distributed-systems aspects taken care of by the abstraction. The goal of this research project is to experiment with a variant of CRDTs designed for federated learning, and to evaluate these data structures for use in federated applications.
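To make the CRDT idea concrete, here is a minimal Python sketch (not our actual library) of a classic state-based grow-only counter; a CRDT for federated learning would apply the same merge-to-convergence discipline to model updates rather than counts:

```python
class GCounter:
    """State-based grow-only counter CRDT: each replica increments its own
    slot; merging takes the element-wise maximum, so replicas converge to
    the same total regardless of message order or duplication."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

# Two devices update independently, then exchange state in either order:
a, b = GCounter("a"), GCounter("b")
a.increment(3); b.increment(5)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 8  # convergence without coordination
```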

Student involvement. Students will first learn about decentralized data replication techniques, as well as familiarize themselves with our prototype library implementing these techniques. From there, students will experiment with a number of federated learning applications, focusing on understanding the efficacy of these data structures from many angles, answering questions such as: How easy is it to get up and running with a federated application using a CRDT-based approach versus other approaches? Do CRDT-based approaches converge at the same rate as other approaches, or faster or slower? How much communication does our CRDT-based approach require relative to other approaches? Can communication be further reduced? What other performance-related characteristics are unique to our approach (e.g., running time, memory consumption, CPU utilization, I/O)? Students will be expected to implement federated applications and evaluate them by answering questions like those listed above.

Mentor: Heather Miller

Description and Significance. Modern web applications are built in a client-server architecture, with servers in charge of user data, access control, and application logic. This can be suboptimal for end users, due to issues with privacy, data ownership, and the possibility that the service will be shut down. It is also difficult to create “open-source web applications” that can be modified and redeployed by community members, unlike open-source desktop applications. We are developing infrastructure for decentralized web applications that function without application-specific central servers, allowing users to maintain control over their data and modify programs at will. Our work centers on using decentralized data replication techniques, like Conflict-free Replicated Data Types (CRDTs) and causal broadcast, to let small groups of users collaborate on shared data, like the contents of a Google Doc or the messages in a group chat. The goal of this research project is to design and implement parts of an infrastructure for web developers that will let them easily build and deploy decentralized collaborative applications.

Student involvement. Students will first learn about decentralized data replication techniques, as well as familiarize themselves with our prototype TypeScript library implementing these techniques. From there, there are numerous projects that students could pursue to move decentralized web applications closer to reality. A student interested in systems could work on optimizing message delivery and ordering, using general-purpose servers, peer-to-peer technologies, or both, through a combination of experimentation and network protocol design. A student interested in more theoretical aspects could study and propose techniques for advanced features in our infrastructure, which are necessary to make it as convenient as ordinary client-server web applications. An example of such a feature is partial replication, in which some data is shared by only a subgroup of users (e.g., private messages sent as part of a broader collaborative program).
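As a taste of the message-ordering side, here is a minimal Python sketch of the vector-clock bookkeeping behind causal broadcast; our prototype library is in TypeScript, so this is illustrative only:

```python
class VectorClock:
    """Minimal vector clock: the bookkeeping causal broadcast uses to
    decide whether a received message may be delivered yet."""
    def __init__(self, n, my_id):
        self.clock = [0] * n
        self.my_id = my_id

    def stamp(self):
        """Timestamp an outgoing message."""
        self.clock[self.my_id] += 1
        return list(self.clock)

    def deliverable(self, sender, msg_clock):
        # Next-in-order from `sender`, and we have already delivered
        # everything the sender had seen from everyone else.
        return (msg_clock[sender] == self.clock[sender] + 1 and
                all(msg_clock[i] <= self.clock[i]
                    for i in range(len(self.clock)) if i != sender))

    def deliver(self, sender, msg_clock):
        self.clock[sender] = msg_clock[sender]

vc = VectorClock(n=2, my_id=0)
m = [0, 1]                       # first message from replica 1
assert vc.deliverable(sender=1, msg_clock=m)
vc.deliver(sender=1, msg_clock=m)
```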

Mentor: Srinivasan Seshan

Description and significance
While recent advances in programmable switch ASICs enable programming the switch data plane, packet buffers on these ASICs remain a fixed-function hardware component. Because of this, most packet scheduling algorithms that could drastically improve network performance are hard or infeasible to realize on today’s programmable switches. While recent work such as PIFO [1] has shown that it is feasible to implement various packet scheduling algorithms on hardware switches, it still requires hardware modifications. In this project, we will design and implement programmable packet-scheduling buffers on today’s programmable switches, without hardware modification, by extending TEA [2], the switch resource augmentation architecture that our group has developed. We will also demonstrate the feasibility and effectiveness of this approach through extensive evaluations on real hardware testbeds.
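As background, the PIFO abstraction from [1] is easy to model in software; here is a toy Python sketch (a software model only -- realizing such primitives on a switch ASIC without hardware changes is exactly the hard part this project tackles):

```python
import heapq

class PIFO:
    """Push-In First-Out queue: enqueue with an arbitrary rank, always
    dequeue the minimum rank. Many disciplines reduce to the rank choice:
    strict priority (rank = class), earliest-deadline-first (rank =
    deadline), start-time fair queueing (rank = virtual start time)."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, rank, pkt):
        heapq.heappush(self._heap, (rank, self._seq, pkt))
        self._seq += 1           # tie-breaker keeps FIFO among equal ranks

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PIFO()
q.push(rank=2, pkt="best-effort"); q.push(rank=1, pkt="latency-critical")
print(q.pop())                   # -> "latency-critical"
```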

Student involvement
Students will first review various packet scheduling algorithms and identify the key primitives that programmable packet buffers must support. For better understanding, students will implement a software simulator to test a set of the algorithms implemented using these primitives. For the hardware implementation, students will learn the TEA architecture and implement the primitives for packet scheduling on top of that system. Throughout this process, students will learn how to do systems research (from problem formulation to system evaluation) and how to manage a project. Students will become familiar with high-performance programmable network devices, including programmable switches and network interface cards (NICs), and with programming languages for such devices (e.g., P4).

[1] Sivaraman, Anirudh, et al. "Programmable packet scheduling at line rate." Proceedings of the 2016 ACM SIGCOMM Conference. 2016.
[2] Kim, Daehyeok, et al. "TEA: Enabling state-intensive network functions on programmable switches." Proceedings of the 2020 ACM SIGCOMM Conference. 2020.

Mentor: Rohan Padhye

Description & Significance
Fuzz testing is a popular technique for finding software bugs and security vulnerabilities using randomized test-input generation. State-of-the-art fuzzing tools perform coverage-guided fuzzing: they evolve a corpus of inputs that maximize code coverage (e.g., lines of code executed) in the program under test. However, does more coverage mean better fault-detection capability? Not necessarily [1]. This project aims to develop a new fuzzing technique that is guided by fault detection instead of coverage. We will take a sample program, seed fake faults using a well-known technique called mutation analysis, and try to guide fuzzing towards exposing those seeded faults. We expect test inputs that can kill mutants (i.e., detect injected faults) to be more robust at finding real faults [2]. The main challenge in incorporating mutation analysis during fuzzing is being able to evaluate each candidate input on thousands of seeded faults in about the same time required to execute the original program just once.
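As a naive starting point, the Python sketch below counts how many seeded faults an input exposes; `run` is a hypothetical execution harness. The whole research challenge is evaluating thousands of mutants in roughly the time of one execution, which this sketch deliberately does not solve:

```python
def mutation_fitness(input_bytes, program, mutants, run):
    """Count how many seeded faults `input_bytes` exposes. `run(p, x)` is a
    hypothetical harness returning the observed output (or crash signal)
    of program variant p on input x."""
    baseline = run(program, input_bytes)
    killed = 0
    for mutant in mutants:                      # one seeded fault each
        if run(mutant, input_bytes) != baseline:
            killed += 1                         # input detects this fault
    return killed                               # the fuzzer maximizes this
```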

Student Involvement
The project will involve the design and/or evaluation of highly efficient program analysis algorithms, software development in Java or C, analyzing real-world software (e.g. Apache Maven, Google Closure, OpenSSL), running experiments in the cloud, and perhaps even finding cool new bugs! As an example, a preliminary activity could involve naively combining a coverage-guided fuzzing tool such as JQF [3] with a mutation testing tool such as PIT [4]. This will likely work but be inefficient. The student might then investigate ways to make performance improvements or design new algorithms to reduce the overhead of mutation analysis while preserving fault-detection capabilities. Interested students should be comfortable or interested in hacking on non-trivial software systems. Some background with program representations (e.g. ASTs, JVM bytecode, or LLVM IR) will be helpful but is not required.

[1] L. Inozemtseva and R. Holmes. "Coverage is not strongly correlated with test suite effectiveness". ICSE 2014.
[2] R. Just, D. Jalali, L. Inozemtseva, M. D. Ernst, R. Holmes, and G. Fraser. "Are mutants a valid substitute for real faults in software testing?" FSE’14
[3] R. Padhye, C. Lemieux, and K. Sen. "JQF: Coverage-Guided Property-Based Testing in Java". ISSTA’19 — https://github.com/rohanpadhye/JQF
[4] PIT Mutation Testing — https://pitest.org

Mentors: Jonathan Aldrich and Joshua Sunshine

Description and significance
Software verification is the process of ensuring that a software implementation does what it is intended to do (i.e., ensuring the software implementation adheres to its specification). Software verification is important for all software systems, but particularly so for critical systems such as control systems for aircraft and nuclear power plants. However, static verification---the technique often applied to such systems---does not support incrementality. In particular, static verification involves writing detailed specifications (often in a formal logic) on system components, such as pre- and postconditions for functions. To prove that a function adheres to its postcondition (e.g., that a findMax function returns the maximal element of a list), tools demand many additional specifications. Even worse, a single specification cannot be checked for correctness without providing all the required specifications.

In 2018, we introduced gradual verification [1], which supports the incremental specification and verification of programs through the sound and principled application of static (at compile-time) and dynamic (at run-time) techniques---static techniques are applied where possible and dynamic ones where necessary. As a result, a gradual verifier allows developers to pick and choose which properties and components of their system to specify without any unnecessary effort and receive immediate verification feedback. Since the introduction of gradual verification [1], we have extended the approach to support more practical programs, such as those containing recursive heap data structures (trees, graphs, lists, etc.) [2, 3], and we have implemented a prototype [6].
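As an informal flavor of the dynamic half of the idea, the Python sketch below checks a deliberately partial postcondition at run time; the actual gradual verifier works on logic-level specifications and static proof obligations, not Python assertions:

```python
def ensures(post):
    """Check a (possibly partial) postcondition dynamically, standing in
    for the obligations a static verifier could not discharge."""
    def wrap(f):
        def checked(*args):
            result = f(*args)
            assert post(*args, result), f"postcondition of {f.__name__} failed"
            return result
        return checked
    return wrap

@ensures(lambda xs, r: all(r >= x for x in xs))   # partial spec: an upper bound
def find_max(xs):
    return max(xs)

find_max([3, 1, 4])   # passes; later strengthening might add "r in xs"
```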


Student involvement
Students who work with us this summer will have the opportunity to work on either designing and implementing a frontend to our gradual verifier, studying the performance of the verifier and designing and implementing improvements, or studying the static and dynamic trends of gradual verification. All of these endeavors will allow students to gain experience with programming language design, software development, theory, formal reasoning, logic, and empirical studies. Additionally, we intend for students who work with us this summer to present and/or publish their work at conferences (as a few students have done in the past: [2, 4, 5, 6]).

[1] Johannes Bader, Jonathan Aldrich, and Éric Tanter. "Gradual Program Verification." In International Conference on Verification, Model Checking, and Abstract Interpretation, pp. 25-46. Springer, Cham, 2018.
[2] Jenna Wise, Johannes Bader, Cameron Wong, Jonathan Aldrich, Éric Tanter, and Joshua Sunshine. "Gradual verification of recursive heap data structures." Proceedings of the ACM on Programming Languages 4, no. OOPSLA (2020): 1-28.
[3] Jenna Wise, Johannes Bader, Jonathan Aldrich, Éric Tanter, and Joshua Sunshine. 2020. “Gradual Verification of Recursive Heap Data Structures.” First ACM SIGPLAN Workshop on Gradual Typing, January 25, 2020.
[4] Samuel Estep. "Gradual Program Analysis." Proceedings Companion of the 2019 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity. 2019.
[5] Samuel Estep, Jenna Wise, Jonathan Aldrich, Éric Tanter, Johannes Bader, and Joshua Sunshine. 2020. “Gradual Program Analysis.” First ACM SIGPLAN Workshop on Gradual Typing, January 25, 2020.
[6] Mona Zhang, and Jacob Gorenburg. "Design and implementation of a gradual verifier." In Companion Proceedings of the 2020 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity, pp. 31-33. 2020.

Mentors: Greg Ganger and Rashmi Vinayak

Description and Significance
We are exploring new approaches that explicitly factor storage device heterogeneity into data reliability and placement decisions, enabling large-scale storage systems to meet the performance and reliability needs of their clients at lower infrastructure and energy costs. Large data centers include millions of storage devices---a mix of Flash-based solid-state disks (SSDs), mechanical hard disks (HDDs), and (very recently) non-volatile memory (NVM). In addition to the expected performance differences, on which most designers focus, there is a great deal of reliability and capacity heterogeneity across these device types and across makes/models/generations of each type. Our research aims at exploiting the various dimensions of heterogeneity to build heterogeneity-aware cluster storage systems that can better utilize the underlying devices. For example, based on real-world deployment logs from companies like Google, we have shown large reductions in capacity waste by adapting data redundancy schemes to device-specific robustness levels [1, 2]. We are also finding significant opportunities to greatly improve efficiency by changing data placement schemes to address device-capacity heterogeneity.
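The arithmetic motivating disk-adaptive redundancy is simple to sketch; in the hypothetical Python below, the AFR threshold and scheme choice are made-up stand-ins for the statistical machinery in HeART and PACEMAKER [1, 2]:

```python
SCHEMES = {"RS(6,3)": (6, 3), "RS(10,4)": (10, 4)}   # (data k, parity m)

def overhead(k, m):
    return (k + m) / k          # raw bytes stored per byte of user data

def pick(afr):
    # Made-up rule: flaky disks (high annualized failure rate) keep the
    # conservative narrow code; proven-reliable disks get the wider,
    # cheaper one. The real systems derive this from observed statistics.
    return "RS(6,3)" if afr > 0.04 else "RS(10,4)"

for afr in (0.08, 0.01):
    k, m = SCHEMES[pick(afr)]
    print(f"AFR={afr:.2f}: {pick(afr)} at {overhead(k, m):.2f}x overhead")
# AFR=0.08: RS(6,3) at 1.50x overhead; AFR=0.01: RS(10,4) at 1.40x overhead
```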

Student involvement
This research project is devising and evaluating new storage system designs to address and exploit storage device heterogeneity. Doing so involves a mix of processing of real system logs (device deployment, failures, and workloads), simulations driven by those logs, and system prototyping and experimentation via integration of heterogeneity-aware policies into a real distributed storage system, HDFS. We plan to keep adding algorithms, policies, and features to our current research, which is focused on building redundancy and placement orchestrators for heterogeneity-aware cluster storage systems. Student researchers will directly contribute to the design, analysis, and implementation of the policies and algorithms of these orchestrators. There are many avenues for contribution, including prototyping of new policies/schemes, detailed experimentation with real systems, and data analyses of real system logs.

[1] PACEMAKER: Avoiding HeART Attacks in Storage Clusters with Disk-adaptive Redundancy. Saurabh Kadekodi, Francisco Maturana, Suhas Jayaram Subramanya, Juncheng Yang, K. V. Rashmi, Gregory R. Ganger. 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI'20), Virtual Event, Nov. 4–6, 2020.

[2] Cluster Storage Systems Gotta Have HeART: Improving Storage Efficiency by Exploiting Disk-reliability Heterogeneity. Saurabh Kadekodi, K. V. Rashmi, Gregory R. Ganger. 17th USENIX Conference on File and Storage Technologies (FAST '19), Feb. 25–28, 2019, Boston, MA.

Mentor: Wenting Zheng

Description and significance
Recently, there has been increased interest in collaborative machine learning [1], where multiple organizations run a training or a prediction task over data collectively owned by all of them. Collaboration is often advantageous for these organizations because it enables these parties to train models on complementary and larger datasets than what is available to any one organization, leading to higher quality models. However, potential participants often own sensitive data that cannot be shared due to privacy concerns, regulatory policies, and/or business competition.

There is increased interest in enabling secure collaborative computation by utilizing an advanced cryptographic primitive called secure multi-party computation (MPC) [1, 2]. MPC allows multiple parties to jointly compute a function without revealing their inputs to each other. We have developed Cerebro, an end-to-end collaborative learning platform that enables parties to compute learning tasks without sharing plaintext data. In order to allow users to express arbitrary learning tasks, Cerebro provides a compiler that can transform programs written in a high-level language into MPC. While the compiler contains many optimizations that result in an efficient MPC protocol, the optimizations can be further improved and expanded.
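To give a feel for the cryptographic core, here is a minimal Python sketch of additive secret sharing, the arithmetic-sharing idea underlying protocols like SPDZ [1]; Cerebro's actual protocols and compiler are far more involved:

```python
import secrets

P = 2**61 - 1                    # public prime; all arithmetic is mod P

def share(x, n=3):
    """Split x into n additive shares that sum to x mod P; any n-1 shares
    alone reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two parties jointly compute a sum: each shares its input, everyone adds
# the shares they hold locally, and only the result is reconstructed.
a_shares, b_shares = share(42), share(100)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```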

Student involvement
The student will start by becoming familiar with the compiler, expanding the existing machine learning API using the compiler's domain-specific programming language. Next, the student will work on the compiler itself by improving and expanding the current optimizations: 1) adding more support for conditional handling, so that the compiler can parse and optimize multiple conditions in a single conditional statement; and 2) extending program splitting to handle more than matrix operations.

[1] Keller, Marcel, Valerio Pastro, and Dragos Rotaru. "Overdrive: Making SPDZ great again." Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Cham, 2018.
[2] Yakoubov, Sophia. "A gentle introduction to Yao’s Garbled circuits." (2019).

Mentor: Christian Kästner

Description and significance
The advances in machine learning (ML) have stimulated widespread interest in integrating AI capabilities into various software products and services. Today's software development teams therefore often include both data scientists and software engineers, who tend to have different roles. An ML pipeline generally has two phases: an exploratory phase and a production phase. Data scientists commonly work in the exploratory phase to train an offline ML model (often in computational notebooks) and then deliver it to software engineers, who work in the production phase to integrate the model into the production codebase. However, data scientists tend to focus on improving ML algorithms for better prediction results, often without thinking enough about the production environment; software engineers therefore sometimes need to redo some of the exploratory work in order to integrate it into production code successfully. In this project, we want to analyze collaboration between data scientists and software engineers, at technical and social levels, in open source and in industry.

Student involvement
We want to study how data scientists and software engineers collaborate. To this end, we will identify open source projects that use machine learning in production systems (e.g., Ubuntu's face recognition login) and study public artifacts, or we will interview participants in production ML projects. This research involves interviews and analysis of software artifacts. We may also develop exploratory tools to define and document expectations and tests at the interface between different roles in a project. The project can be tailored to the students' interests, but an interest or background in empirical methods would be useful. Familiarity with machine learning is a plus but not required. Note, this is not a data science/AI project, but a project on understanding *software engineering* practices relevant to data scientists.

Mentor: Srinivasan Seshan

Description and significance
Low-latency video streaming applications like AR/VR streaming and cloud streaming for video games have recently gained traction. Game streaming products like NVIDIA GeForce Now, Google Stadia, and Microsoft xCloud aim to replace physical gaming consoles with powerful cloud servers that stream the gameplay to a variety of end devices like mobile phones, TVs, and low-power laptops. Cloud streaming applications demand very low latency and uninterrupted video playback, which is a significant challenge in the face of packet loss, which can happen for various reasons such as network congestion and wireless losses. A key challenge with handling packet loss is that there can be a large delay between the sending of a packet and the detection that the packet was lost. We have developed a streaming system that can significantly benefit from early packet loss predictions like "there is a 50% chance of a packet drop in the next 3 RTTs". The goal of this research project is to conduct a measurement study or use available data to design an ML/AI-based packet loss prediction mechanism that can run in real time and provide early packet loss forecasts to ensure smooth video playback in the presence of packet loss.

Student involvement
Students will first conduct measurement studies and aggregate available packet loss data from sources such as M-Lab, extracting useful training data for packet loss prediction. This will involve developing simple network tools for running packet loss measurement experiments and using data processing tools like BigQuery to acquire and process data. The student will then design algorithms that use the available data to predict future packet loss from current network measurements like RTT and throughput. This step will require an understanding of basic ML/AI tools like neural networks and traditional AI algorithms. The implementation will involve the use of software like TensorFlow, PyTorch, Scikit-learn, and Pandas. Students will gain valuable experience by designing, training, and testing various types of real-time machine learning pipelines, which are valuable skills in today's computing landscape.
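As a hypothetical starting point, the sketch below trains an off-the-shelf classifier to emit loss probabilities; the feature names and the synthetic stand-in data are invented, and real training data would come from the measurement study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Rows: [rtt_ms, rtt_jitter_ms, throughput_mbps, recent_loss_rate]
X = rng.random((1000, 4))                # stand-in for M-Lab-derived features
y = (X[:, 3] > 0.8).astype(int)          # stand-in label: loss within 3 RTTs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
p_loss = clf.predict_proba(X_te)[:, 1]   # e.g., "50% chance of a drop soon"
print("mean predicted loss probability:", p_loss.mean())
```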

Mentors:  Joshua Sunshine, Keenan Crane, and Jonathan Aldrich

Description and significance
Every day, millions of professionals, teachers, and students in science, technology, engineering, and mathematics (STEM) fields work in the abstract world of mathematical structures, logical relationships, and computational processes. These abstractions are often presented in a textual and notation-heavy way, especially in the formal literature, depriving readers of the powerful visual and spatial intuition that is fundamental to building deep understanding. People in STEM communicate informally by drawing diagrams, but they often omit these sketches from the literature because creating mathematical diagrams takes a tremendous amount of time and expertise [1]. Existing tools such as Mathematica, TikZ, and Adobe Illustrator lack the domain knowledge needed for creating mathematical diagrams, forcing the user to work at a low level.

Thus, our goal is to enable anyone to make beautiful diagrams by simply typing mathematical notation in plain text. To this end, we are building a platform called Penrose [2]. Penrose comprises three domain-specific languages, Domain, Substance, and Style, that are designed to enable users to write natural mathematical notation and separately specify the visual representation of notation in a domain, all in an extensible manner. Given a Domain, Substance, and Style program triple, the Penrose compiler transforms the problem of producing a diagram into a problem of numerical optimization, which is solved by an optimization runtime. The Penrose platform renders the solutions to the optimization problem as beautiful, automatic, custom mathematical diagrams.

Student involvement
Students who work with us will join a highly interdisciplinary team spanning the fields of programming language design, software engineering, graphics, and visualization. Past students have had the opportunity to design and build fundamental parts of the Penrose platform, including the first Style compiler, a graphics API akin to “OpenGL for diagrams,” and an extensible syntactic sugar mechanism for mathematical notation. Future students will have many cross-cutting opportunities, including designing methods for automatic diagram optimization and layout, designing extensibility mechanisms for Substance and Style, working with mathematical domain experts to build Penrose's standard library, and conducting user studies on the usability of Domain, Substance, and Style. Students will gain experience in performing human-centered domain-specific language design, extending fundamental graphics methods such as optimization and sampling, and conducting interdisciplinary research.

[1] Ma'ayan, D., Ni, W., Ye, K., Kulkarni, C., & Sunshine, J. (2020, April). How Domain Experts Create Conceptual Diagrams and Implications for Tool Design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).

[2] Ye, K., Ni, W., Krieger, M., Ma'ayan, D., Wise, J., Aldrich, J., ... & Crane, K. (2020). Penrose: from mathematical notation to beautiful diagrams. ACM Transactions on Graphics (TOG), 39(4), 144-1.

Mentors: Hanan Hibshi and Maverick Woo

Description and Significance
picoCTF is a free online Capture The Flag-style game implemented and maintained at Carnegie Mellon University to promote and facilitate cybersecurity education. The target audience of our service has been middle and high school students since 2013, but the game has also become increasingly popular with many other age groups, including college students and adult learners. Each picoCTF release features ~100 cybersecurity challenge problems of increasing difficulty, which are revealed through a storyline to guide players to hone their skills one step at a time. These security challenges cover concepts such as cryptography, binary exploitation, web exploitation, reverse engineering, forensics, and other topics related to security and privacy. As a year-round education platform, the picoCTF team conducts constant research in areas including cybersecurity education methodologies, CTF problem development, and education platform development. This summer, we have multiple research projects for motivated students to get involved in various missions in picoCTF:

(1) Collect empirical data through surveys, user studies, and classroom observations
To improve the design of the picoCTF platform so that it reaches a larger number of students, especially in under-resourced communities, we need to collect and analyze empirical data to inform our design enhancement and platform scalability process. This project includes a research plan to obtain the needed data by conducting user studies, focus groups, and usability and scalability tests that examine picoCTF in a classroom setting. We are interested in understanding how we can enhance our platform to better serve K-12 teachers and students, college students, and independent adult learners in under-resourced communities. The empirical data collected by closely observing participants in focus groups and from surveys and questionnaires will provide informed feedback to the picoCTF research team about possible technical challenges and key design improvements that would enhance the students' experience.

(2) Analyze player data from previous years and develop visualization tools
In addition to the surveys and questionnaires, the current picoCTF platform is rich with player data from previous years, a valuable resource for in-depth analysis that would help us understand how to improve the game to reach a wider audience and what new challenges to include, and where. The analysis would help reveal patterns that can be mapped to educational goals and help investigate where players fail to solve challenges or where the game becomes less interesting. These findings could ultimately help improve user experience and retention. Other areas of analysis include performance by gender, team diversity, age, educational background, etc. We envision that students will learn to use modern data analysis toolkits to analyze our data (a small illustration appears after sub-project (3) below) and build an interactive web-based exploration tool for presenting the findings from these analyses.

(3) Write new CTF challenges for the game and test current CTF challenges
Writing and testing CTF challenges are ongoing tasks in picoCTF. Testing current challenges helps identify errors and bugs before a future competition goes live, and writing new challenges helps grow our challenge pool. For the latter, we are especially interested in new challenges in areas where we have minimal or no existing coverage on our platform. These include privacy, mobile security (Android / iOS), IoT security (e.g., embedded Linux-based devices), and ICS security (e.g., RTOS, ROS). Students will engage in learning about security and privacy problems that arise in these areas and develop new CTF challenges of gradually increasing complexity to cater to players at different stages of technical capability.
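As a small illustration of sub-project (2), the sketch below computes solve rates from a made-up log; the column names are invented stand-ins for whatever the real picoCTF data contains:

```python
import pandas as pd

solves = pd.DataFrame({
    "player":    ["p1", "p1", "p2", "p3", "p3"],
    "challenge": ["crypto1", "web1", "crypto1", "crypto1", "rev1"],
    "category":  ["crypto", "web", "crypto", "crypto", "reversing"],
    "solved":    [True, False, True, False, True],
})

# Where do players get stuck? Solve rate per category:
print(solves.groupby("category")["solved"].mean())

# Attempt-vs-solve funnel per challenge:
funnel = (solves.groupby("challenge")["solved"]
          .agg(attempts="count", solves="sum"))
print(funnel)
```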

Student involvement
We are looking for multiple students who are interested in cybersecurity and who enjoy working on projects that have a global impact on youth and workforce development. Dr. Hanan Hibshi and Dr. Maverick Woo will be the faculty mentors overseeing students' research activities, and the picoCTF team consists of software engineers and graduate students who work together with student research assistants. The picoCTF project is interdisciplinary in nature and can be attractive to students with different backgrounds. Students with a human-computer interaction background can enjoy conducting user studies, collecting empirical data, or examining the picoCTF interface and proposing design changes that can improve user experience. Students with an interest in CS education and/or cognitive psychology could help analyze data from existing players to investigate ways to improve learning outcomes. Students who enjoy software development can help the technical team improve the current gaming platform and migrate to the new 2021 picoCTF platform with its advanced features. Finally, students with a cybersecurity background can join our team and enjoy testing the current challenges (by playing the game!) and helping create new challenges or new categories of challenges.

Mentor: Nathan Beckmann

Description and significance. Computer systems traditionally make a sharp distinction between processing and memory: all computation happens on processors, all data resides in the memory hierarchy, and the two worlds interact only via a narrow read/write interface. The processing vs. memory distinction is useful, but it leads to inefficiencies that are becoming worse over time. Modern processors typically spend about half of their resources on the memory hierarchy -- but software has essentially no control over how these resources are used. This project explores the design of a "polymorphic memory hierarchy" that allows software to change the interface and behavior of the memory hierarchy. For example, application software can change the memory hierarchy to compress/decompress data as it moves between levels, or offload key computations to be performed in the memory hierarchy, where they execute more efficiently.

Student Involvement. Students will explore the design of the polymorphic memory hierarchy by writing example applications and running simulations. Students should have some familiarity with computer systems, including a basic understanding of assembly language and C++ and, ideally, an introductory course in computer architecture. Extensive background in computer hardware is not required.

References
Livia: Data-Centric Computing Throughout the Memory Hierarchy. Elliot Lockerman, Axel Feldmann, Mohammad Bakhshalipour, Alexandru Stanescu, Shashwat Gupta, Daniel Sanchez, Nathan Beckmann. ASPLOS 2020.

Mentor: Steven Wu

Description and significance
Many modern applications of machine learning (ML) rely on datasets that may contain sensitive personal information, including medical records, browsing history, and geographic locations. To protect the private information of individual citizens, many ML systems now train their models subject to the constraint of differential privacy (DP), which informally requires that no individual training example have a significant influence on the trained model. After well over a decade of intense theoretical study, DP has been deployed by many organizations, including Microsoft, Google, Apple, LinkedIn, and, most recently, the 2020 US Census. However, the majority of existing practical deployments still focus on rather simple data analysis tasks (e.g., releasing simple counts and histogram statistics). To put DP into practice for more complex machine learning tasks, this project will study new differentially private training methods for deep learning that improve on existing state-of-the-art methods. We will also study how to use DP deep learning techniques to train deep generative models, which can generate privacy-preserving synthetic data—a collection of “fake” data points that preserve important statistical properties of the original private data set. This, in turn, will enable privacy-preserving data sharing.
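To illustrate the core mechanism, here is a minimal NumPy sketch of the per-example clipping and Gaussian noising step used in DP-SGD-style training; a real deployment would also track the cumulative privacy budget (epsilon, delta):

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_mult=1.1):
    """Clip each example's gradient to bound its influence, average, then
    add Gaussian noise calibrated to the clipping bound."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(clipped)
    return mean + np.random.normal(0.0, sigma, size=mean.shape)

grads = [np.random.randn(10) for _ in range(32)]   # one gradient per example
noisy_update = privatize_gradients(grads)          # use in place of raw mean
```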

Mentor: Ruben Martins

Description and significance
Due to the Big Data revolution, many people with expertise in their respective domains have become data analysts. As a result, there is a growing population of non-expert database end-users with limited programming knowledge. Although most users can describe what they want or what a task should do, they sometimes do not know how to express it in a query language such as SQL; even when they can produce a query, it may be inefficient when run on large tables.

To help end-users query a relational database, multiple systems have been proposed that automatically synthesize queries from input-output examples [1, 2]. These systems are based on program synthesis techniques, where the goal is to automatically generate an SQL query that satisfies the examples given by the user. However, these systems do not take performance into consideration when synthesizing a query. In this project, we propose to use program synthesis to optimize SQL queries. This approach contrasts with current approaches to query optimization, which are based on rewrite rules [3] and may miss subtle optimization opportunities that can be found using program synthesis.
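In miniature, example-based query synthesis looks like the following hypothetical Python sketch, which enumerates a tiny space of SELECT queries and keeps those matching the user's input-output example; real synthesizers like [1, 2] search vastly larger spaces with pruning:

```python
import sqlite3, itertools

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, age INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("ann", 17), ("bob", 42), ("cat", 35)])

expected = [("bob",), ("cat",)]          # the user's output example

# Enumerate a tiny space of queries; keep those matching the example.
# (The order-sensitive comparison is naive; real tools compare as bags.)
columns, predicates = ["name", "age"], ["age > 20", "age < 20", "1 = 1"]
for col, pred in itertools.product(columns, predicates):
    query = f"SELECT {col} FROM t WHERE {pred}"
    if conn.execute(query).fetchall() == expected:
        print("candidate:", query)       # SELECT name FROM t WHERE age > 20
```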

Student involvement
In this project, students will extend a program synthesizer for SQL [2, 4] that automatically synthesizes SQL queries from examples. Given an existing query and a database, the new program synthesizer will find equivalent queries that have better performance than the original query. Students are expected to have taken a database course, to have a basic background in logic, and to be able to code in Python. Knowledge of program synthesis and/or program analysis is a plus.

[1] Chenglong Wang, Alvin Cheung, Rastislav Bodík: Synthesizing highly expressive SQL queries from input-output examples. PLDI 2017: 452-466

[2] Pedro Orvalho, Miguel Terra-Neves, Miguel Ventura, Ruben Martins, Vasco M. Manquinho: SQUARES : A SQL Synthesizer Using Query Reverse Engineering. Proc. VLDB Endow. 13(12): 2853-2856 (2020)

[3] Surajit Chaudhuri. An overview of query optimization in relational systems. In PODS, pages 34–43. ACM Press, 1998

[4] https://squares-sql.github.io/

Mentor: Eunsuk Kang

Description and significance
Unintuitive, badly designed human-machine interfaces (HMIs) are not merely an annoyance; they can pose significant risks to users in systems where safety is a key concern. For example, poor HMI design has been identified as a major factor in numerous fatal accidents involving medical devices, automobiles, and aviation systems. The challenge is that humans are far from perfect and inadvertently make mistakes from time to time, but many interfaces are not designed to deal with such human errors; as a result, when a safety failure occurs, it is often the users who get blamed, even when a better-designed interface could have prevented the failure. To tackle this challenge, Dr. Kang is collaborating with researchers in human factors and cognitive science to systematically design HMIs that are robust against human errors [1], i.e., interfaces that are explicitly designed to recognize and handle potential human errors and prevent them from causing safety failures.

Student involvement
In this project, students will contribute to the development of a new methodology for designing safe and robust HMIs. Potential research tasks include: (1) developing a theoretical foundation for robust HMIs; (2) developing a mathematical approach for modeling human behavior and errors; (3) developing an automated technique for analyzing an HMI for its robustness against human errors; (4) developing an automated technique for repairing an HMI to improve its robustness; and (5) designing and running a user study to evaluate the HMIs. Students will also get a chance to apply these techniques to realistic case studies, including interfaces for medical devices and intelligent vehicles. Students are expected to have a basic background in logic and discrete math. Knowledge of formal methods, programming languages, and/or HCI is a plus.
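As a toy illustration of task (3), the Python sketch below enumerates single-omission variants of a correct action sequence on an invented two-step device model and reports which slips reach an unsafe state; everything here, including the deliberately flawed transition, is made up for illustration:

```python
# Invented two-step infusion-pump interface, including a deliberate flaw:
# confirming without setting a dose still starts the infusion.
TRANSITIONS = {
    ("idle", "set_dose"): "dose_set",
    ("dose_set", "confirm"): "infusing",
    ("idle", "confirm"): "infusing",   # the design flaw
}

def unsafe(actions):
    state, dosed = "idle", False
    for a in actions:
        state = TRANSITIONS.get((state, a), state)  # unknown action: no-op
        dosed |= (a == "set_dose")
    return state == "infusing" and not dosed

correct = ["set_dose", "confirm"]
omissions = [correct[:i] + correct[i+1:] for i in range(len(correct))]
for seq in omissions:                  # model the "omission" error class
    if unsafe(seq):
        print("slip exposes unsafe behavior:", seq)   # -> ['confirm']
```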

[1] NSF project description: Preventing Human Errors in Cyber-human Systems with Formal Approaches to Human Reliability Rating and Model Repair 

Mentor: Eunsuk Kang

Description and significance
Software is increasingly being used to control systems that closely interact with physical entities in the world and whose failures may result in irreversible damage to users and our environment. These so-called cyber-physical systems (CPSs), ranging from autonomous vehicles and drones to medical devices, pose new challenges in software engineering. Unlike traditional software applications, these systems are deployed in a highly dynamic, uncertain environment and often rely on fragile assumptions about the behavior of users and other environmental agents (some of which may be malicious). Due to this dynamic, adversarial nature, software in CPSs must be designed to handle unexpected interactions with the environment and guarantee an acceptable level of safety and security even in the presence of component failures. Dr. Kang and his collaborators are working on (1) new principles and techniques for designing robust software [1,2], and (2) algorithms and tools for reasoning about critical properties of CPSs [3,4].

Student involvement
In this project, students will develop new techniques for designing safe and secure CPSs. Potential research tasks include developing (1) a language for specifying the behavior of a complex CPS at a high level of abstraction, (2) a technique for automatically analyzing a system for potential vulnerabilities, and (3) an architectural mechanism for guaranteeing safety and security even in the presence of unexpected inputs. Students will also get hands-on experience applying these techniques to real-world case studies, such as autonomous drones, intelligent vehicles, and medical devices. Students are expected to have a basic background in logic and discrete mathematics, and will learn about state-of-the-art techniques in formal methods, CPS design, software analysis, and programming languages.

[1] Property-Driven Runtime Resolution of Feature Interactions. Santhana Gopalan Raghavan, Kosuke Watanabe, Eunsuk Kang, Chung-Wei Lin, Zhihao Jiang, and Shinichi Shiraishi. International Conference on Runtime Verification (RV), 2018.
[2] An Architectural Mechanism for Resilient IoT Services. Hokeun Kim, Eunsuk Kang, Edward A. Lee, and David Broman. ACM Workshop on the Internet of Safe Things (SafeThings), 2017.
[3] Model-Based Security Analysis of a Water Treatment System. Eunsuk Kang, Sridhar Adepu, Daniel Jackson, and Aditya P. Mathur. ICSE Workshop on Smart Cyber-Physical Systems, 2016.
[4] Multi-Representational Security Analysis. Eunsuk Kang, Aleksandar Milicevic, and Daniel Jackson. Symposium on the Foundations of Software Engineering (FSE), 2016.

Mentor: Jonathan Aldrich

Description and significance
The Plaid research group, led by Dr. Aldrich, applies novel programming language ideas to software engineering challenges, validating those ideas both theoretically and empirically. With his students, he is currently designing the Wyvern programming language, focused on exploring new language features for security and adaptability. Wyvern provides a pure object model that explores foundational issues in object-oriented type tests [1] and a capability-based module system that can help to assure a broad range of architectural security properties [2]. It also provides an extensible syntax and type system, enabling convenient and safe command libraries that forestall command injection attacks [3, 4].

Student involvement
The projects described above include extensive involvement of undergraduates; in fact, every paper cited above includes an undergraduate co-author. Going forward, there are a number of promising areas where an undergraduate can effectively contribute to research in the span of a summer. In summer 2018, an REU student developed a prototype language for describing run-time component-and-connector architecture in Wyvern; a future student could develop a similar language for describing module architecture constraints. A related project is enhancing Wyvern's module system, e.g., to support test cases as part of a module signature so that, if there are multiple implementations of the module, they can be automatically compared for behavioral compliance with the test cases—thus avoiding problems with incompatible replacements or upgrades. We are also developing a foundational theory of gradual verification using separation logic [5], and an excellent undergraduate project would be integrating the theory into Wyvern (in a first summer) and building an optimized implementation for run-time checking (in a second summer).


[1] Joseph Lee, Jonathan Aldrich, Troy Shaw, and Alex Potanin. A Theory of Tagged Objects. Proc. European Conference on Object-Oriented Programming (ECOOP), 2015.

[2] Darya Melicher, Yangqingwei Shi, Alex Potanin, and Jonathan Aldrich. A Capability-Based Module System for Authority Control. In Proc. European Conference on Object-Oriented Programming (ECOOP), 2017.

[3] Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich. Safely Composable Type-Specific Languages. Proc. European Conference on Object-Oriented Programming, 2014. Distinguished Paper Award.

[4] Nathan Fulton, Cyrus Omar, and Jonathan Aldrich. Statically Typed String Sanitation Inside a Python. Proc. Privacy and Security in Programming (PSP), 2014. Best Paper Award.

[5] Johannes Bader, Jonathan Aldrich, and Eric Tanter. Gradual Program Verification. In Proc. International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), 2018.

Mentors: David Garlan, Eunsuk Kang, and Christian Kästner

Description and Significance
Many robotics systems are built on top of ROS, often in distributed settings, yet security is often just an afterthought. In this research, we want to help engineers use threat modeling to identify and assure security requirements. Specifically, we will analyze the architecture and deployment of ROS-based systems (architecture recovery) and generate tests to assure that the system is securely configured (model-based testing). The goal of the project is to secure robots against attacks, avoid misconfigurations, and give developers confidence that the changes they make as the robot evolves do not break security requirements.

Student Involvement
Students will be involved in designing specific solutions for parts of the problem, for example, generating and automating test cases, describing and modeling security requirements, or extracting architectural models from the implementation. In most cases this will involve building tools and evaluating them on open-source robotics software components, e.g., around the TurtleBot; a sketch of the testing idea follows. The ideal student has an interest in security, testing, or program analysis and is not afraid of reasoning about models of software systems. Experience with robotics and ROS is a plus.
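To give a flavor of the model-based testing idea, here is a hedged sketch that compares the topics exposed by a running ROS 1 graph against an expected architectural model. The expected-topic set is a hypothetical TurtleBot model; `rostopic list` is the standard ROS 1 command for enumerating topics, and a real tool would recover the model from the implementation rather than hard-code it.

```python
# Hedged sketch: check a running ROS 1 graph against an expected architecture
# model. The expected topics below are hypothetical placeholders.

import subprocess

EXPECTED_TOPICS = {"/cmd_vel", "/odom", "/scan"}  # hypothetical TurtleBot model

def observed_topics() -> set:
    # 'rostopic list' prints one topic per line on a running ROS 1 system.
    out = subprocess.run(["rostopic", "list"], capture_output=True,
                         text=True, check=True).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

def test_no_unexpected_topics():
    # Topics absent from the model may indicate misconfiguration or attack surface.
    extra = observed_topics() - EXPECTED_TOPICS
    assert not extra, f"unexpected topics exposed: {extra}"
```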

Mentors: Christian Kästner and Bogdan Vasilescu

Description and significance
Reuse of open source artifacts in software ecosystems has enabled significant gains in development efficiency, as developers can now build on substantial existing infrastructure and develop apps or server applications in days rather than months or years. Despite its importance, however, maintenance of this open source infrastructure is often left to a few volunteers with little funding or recognition, threatening the sustainability of individual artifacts, such as OpenSSL, or of entire software ecosystems. Reports of stress and burnout among open source developers are increasing. The teams of Dr. Kästner and Dr. Vasilescu have explored dynamics in software ecosystems to expose differences, understand practices, and plan interventions [1,2,3,4]. Results indicate that different ecosystems have very different practices and that interventions should be planned accordingly [1], but also that signaling based on underlying analyses can be a strong means to guide developer attention and effect change [2]. This research will further explore sustainability challenges in open source, with particular attention to the interaction between paid and volunteer contributors, stress, and the resulting turnover.

Student involvement
Students will empirically study sustainability problems and interventions using interviews, surveys, and statistical analysis of archival data (e.g., regression modeling, time series analysis for causal inference). What are the main reasons volunteer contributors drop out of open source projects? In what situations do volunteer contributors experience stress? In which projects will other contributors step up and continue maintenance when the main contributors leave? Which past interventions, such as contribution guidelines and codes of conduct, have been successful in retaining contributors and easing transitions? How can we identify subcommunities within software ecosystems that share common practices, and how do communities and subcommunities learn from each other? Students will investigate these questions by exploring archival data of open source development traces (ghtorrent.org), designing interviews or surveys, applying statistical modeling techniques (see the sketch below), building and testing theories, and conducting literature surveys. Students will learn state-of-the-art research methods in empirical software engineering and apply them to specific sustainability challenges of great importance. Students will actively engage with the open source communities and will learn to communicate their results to both academic and nonacademic audiences.
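As a hedged illustration of one archival-analysis step mentioned above, the sketch below fits a logistic regression relating project and contributor characteristics to disengagement. The data file and column names are invented placeholders, not an actual dataset.

```python
# Hedged sketch: logistic regression on (hypothetical) archival data relating
# contributor/project characteristics to disengagement. Schema is invented.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("contributors.csv")  # one row per contributor-project pair

# disengaged: 1 if the contributor stopped contributing for 12+ months.
model = smf.logit(
    "disengaged ~ tenure_months + is_paid + has_code_of_conduct",
    data=df,
).fit()

print(model.summary())        # coefficients and significance
print(np.exp(model.params))   # odds ratios for easier interpretation
```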

[1] Christopher Bogart and Christian Kästner and James Herbsleb and Ferdian Thung. How to Break an API: Cost Negotiation and Community Values in Three Software Ecosystems. In Proc. Symposium on the Foundations of Software Engineering (FSE), 2016.

[2] Asher Trockman, Shurui Zhou, Christian Kästner, and Bogdan Vasilescu. Adding sparkle to social coding: an empirical study of repository badges in the npm ecosystem. In Proc. International Conference on Software Engineering (ICSE), 2018.

[3] Bogdan Vasilescu, Kelly Blincoe, Qi Xuan, Casey Casalnuovo, Daniela Damian, Premkumar Devanbu, and Vladimir Filkov. The sky is not the limit: multitasking across github projects. In Proc. International Conference on Software Engineering (ICSE), 2016.

[4] Bogdan Vasilescu, Daryl Posnett, Baishakhi Ray, Mark GJ van den Brand, Alexander Serebrenik, Premkumar Devanbu, and Vladimir Filkov. Gender and tenure diversity in GitHub teams. In Proc. ACM Conference on Human Factors in Computing Systems (CHI), 2015.

Mentor: Jonathan Aldrich

Description and significance
Many sensor network applications operate under severe timing and power constraints. Writing programs that meet these constraints is difficult even for expert software developers, and nearly impossible for the domain experts who want to use sensor networks in applications such as watershed management or urban traffic studies. We are developing the TickTalk language to make writing these programs simple: the user expresses timing constraints at a high level, and the compiler and runtime develop a strategy for meeting them within the available power budget.
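To give a flavor of the intended programming model, here is an invented illustration (this is not TickTalk's actual syntax or API): the programmer declares what timing a task requires, and a runtime, however simple, decides how to release the task.

```python
# Invented illustration of a high-level timing API; NOT TickTalk's actual
# syntax. The programmer states *what* timing is needed; the runtime picks
# a strategy for meeting it.

import time

class PeriodicTask:
    def __init__(self, fn, period_s: float, deadline_s: float):
        self.fn, self.period_s, self.deadline_s = fn, period_s, deadline_s

    def run(self, iterations: int):
        # Trivial runtime strategy: fixed-rate release with deadline checks.
        # A real runtime would also trade sampling rate against the power budget.
        for _ in range(iterations):
            start = time.monotonic()
            self.fn()
            elapsed = time.monotonic() - start
            assert elapsed <= self.deadline_s, "deadline miss"
            time.sleep(max(0.0, self.period_s - elapsed))

def read_water_level():
    print("sample")  # placeholder for an actual sensor read

# "Sample once per second; each sample must finish within 0.1 s."
PeriodicTask(read_water_level, period_s=1.0, deadline_s=0.1).run(iterations=3)
```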

Student involvement
We expect to have a first prototype of a TickTalk compiler and runtime simulator this summer. An REU student could build a backend for a real hardware platform, such as the team's traffic sensors, explore applications in the domains above, or develop compiler/runtime optimizations related to timing and power. Even a human-subjects angle, studying the usability of TickTalk, is possible. The exact project will depend on the student's interests as well as project needs at the time, but there are lots of fun things to explore in this area!

TickTalk: Timing API for Dynamically Federated Cyber-Physical Systems. B. Iannucci, A. Shrivastava, and M. Khayatian. arXiv preprint arXiv:1906.03982.

Mentor: Srinivasan Seshan

Description and significance
The Internet, as a shared resource, relies on cooperative behavior among all traffic that shares common network paths. Each flow on the Internet must use a suitable congestion control algorithm to avoid congestion collapse and achieve a fair allocation of bandwidth. Classical congestion control algorithms like TCP-CUBIC were designed with the primary goals of sharing bandwidth and preventing congestion collapse, but they suffer from high delays due to buffer-filling behavior and are not suitable for low-latency streaming applications like cloud game streaming. Today, game streaming services such as NVIDIA GeForce Now, Google Stadia, and Microsoft xCloud use their own congestion control algorithms to provide low delay and high throughput. To achieve these goals, such algorithms may sacrifice properties like elasticity and friendliness toward the wide variety of other congestion control algorithms in use today, such as TCP-CUBIC and TCP-BBR. The goal of this project is to run tests on these services to understand the impact of their congestion control algorithms on video game QoE and on other flows that use traditional congestion control.

Student involvement
Students will run experiments using the various streaming platforms over real and emulated networks to understand the behavior of the congestion control algorithms under various scenarios. The student will capture packet traces and analyze throughput and latency for common congestion scenarios, such as a bottleneck shared by multiple game streams, or by game streams along with flows using traditional congestion control mechanisms; a sketch of one analysis step follows. The goal is to analyze the impact of a particular congestion control algorithm on the QoE of the game stream and on other flows. The student will learn to use tools like tcpdump and Wireshark and network emulators like Mahimahi and Linux tc, which are essential tools for networking research, and will learn about low-latency video streaming and the factors that impact QoE in such applications.
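As a hedged sketch of one analysis step, the snippet below computes per-second throughput from a captured trace using scapy; the capture file name is a placeholder for a trace recorded with tcpdump during an experiment.

```python
# Hedged sketch: per-second throughput from a packet capture, using scapy.
# "gamestream.pcap" is a placeholder for a tcpdump trace from an experiment.

from collections import defaultdict
from scapy.all import rdpcap

packets = rdpcap("gamestream.pcap")
bytes_per_second = defaultdict(int)
t0 = float(packets[0].time)

for pkt in packets:
    bytes_per_second[int(float(pkt.time) - t0)] += len(pkt)

for second in sorted(bytes_per_second):
    mbps = bytes_per_second[second] * 8 / 1e6
    print(f"t={second:4d}s  {mbps:6.2f} Mbit/s")
```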

Mentors: Christian Kästner and Bogdan Vasilescu

Description and Significance
Why do maintainers disengage from open source? Sustaining open source infrastructure requires continuous maintenance and often relies heavily on long-term contributors to do the bulk of the technical and nontechnical work. Just as turnover is expensive and disruptive for a company, losing a maintainer can be disruptive or even dangerous for an open source project, which may no longer receive updates and security patches or support its community. These disruptions can ripple through the entire ecosystem and affect others who depend on the project. While motivations and barriers to joining open source projects are well studied, we do not have a good understanding of why maintainers leave. In talks, podcasts, and blog posts, many maintainers speak about concerns regarding funding, fairness, stress, and toxic interactions; classic turnover factors from organizational research may also be relevant, though they are not always a good fit. To create an equitable environment, we need a systematic understanding of disengagement factors in open source and how they interact with funding models, corporate involvement, and project governance.

Student involvement
The involved students will build theory by qualitatively analyzing reports of disengagement and by interviewing developers who have disengaged. We will start with a grey literature analysis, collecting and systematically coding talks, podcasts, and blog posts by open source practitioners who discuss leaving open source, switching projects or communities, stress, and other demotivators. We will complement the analysis of existing documents with interviews of maintainers who have disengaged from a project or from open source generally (whom we identify from trace data on GitHub). Students will learn qualitative empirical methods such as qualitative content analysis; a small sketch of one supporting step appears below. The ideal student has an interest in (qualitative) empirical methods around literature analysis and interviews, is not afraid of social science literature, and is motivated by supporting welcoming, equitable, and sustainable open source communities.
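One small piece of this workflow that lends itself to code is checking agreement between two coders of the same material. Below is a hedged sketch using Cohen's kappa; the codes and segment assignments are invented for illustration.

```python
# Hedged sketch: inter-rater agreement for qualitative coding via Cohen's
# kappa. The code labels and segment assignments are invented examples.

from sklearn.metrics import cohen_kappa_score

# Two researchers independently assign one code per document segment.
coder_a = ["stress", "funding", "toxicity", "stress", "governance", "funding"]
coder_b = ["stress", "funding", "stress",   "stress", "governance", "funding"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.76 here; >0.6 is often read as substantial
```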

References
Paper: Courtney Miller, David Widder, Christian Kästner, and Bogdan Vasilescu. Why Do People Give Up FLOSSing? A Study of Contributor Disengagement in Open Source. In Proceedings of the 15th International Conference on Open Source Systems (OSS), pages 116-129, May 2019.
Blogs and talks:
https://nolanlawson.com/2017/03/05/what-it-feels-like-to-be-an-open-source-maintainer/
https://medium.com/timrwood/moment-endof-term-522d8965689
https://www.jeffgeerling.com/blog/2017/dont-drown-your-open-source-project
https://www.youtube.com/watch?v=tzFWz5fiVKU
https://www.changelog.com/podcast/318

Mentor: Lorrie Cranor

Description and Significance
Many websites offer visitors privacy controls and opt-out choices, either to comply with legal requirements or to address consumer privacy concerns. The way these control mechanisms are implemented can significantly affect individuals’ choices and their privacy outcomes. We have conducted research to survey and evaluate existing controls on websites and apps, and we are designing prototype mechanisms to improve experiences related to consent and privacy. We are also taking this work beyond the web to examine consent experiences for AR/VR, IoT, and audio devices. There is interest from both regulators and companies in developing guidance for usable consent, opt-out, and privacy choice interfaces.

Student Involvement
The student will work with the research team to survey existing consent experiences in a particular domain or service (e.g., Facebook, health apps, AR glasses), prototype changes to privacy- and consent-related controls, and conduct online user studies to test their effectiveness.
Students should have an interest in digital privacy and user study design. Prior exposure to HCI, such as an undergraduate HCI or interaction design course, is helpful. Experience with UX design tools such as Adobe XD is also a plus but not necessary.

Hana Habib, Sarah Pearman, Jiamin Wang, Yixin Zou, Alessandro Acquisti, Lorrie Faith Cranor, Norman Sadeh, and Florian Schaub. “It’s a scavenger hunt”: Usability of Websites’ Opt-Out and Data Deletion Choices. CHI 2020. https://usableprivacy.org/static/files/habib_chi_2020.pdf

Hana Habib, Yixin Zou, Aditi Jannu, Neha Sridhar, Chelse Swoopes, Alessandro Acquisti, Lorrie Faith Cranor, Norman Sadeh, and Florian Schaub. An Empirical Analysis of Data Deletion and Opt-Out Choices on 150 Websites. SOUPS 2019. https://www.usenix.org/conference/soups2019/presentation/habib
"

Mentor: Daniel Klug

Description and significance: Short-form video apps, most notably TikTok, are the newest and currently most popular social media platforms among younger people. The success of short-video apps rests largely on their high accessibility and ubiquity with regard to online social interaction and participation. A key element of TikTok is the app’s specific yet mysterious algorithm, which curates individual video feeds for users based on their content consumption and browsing behavior. While initial studies are attempting to analyze the TikTok algorithm and some basic knowledge about it exists, we have little understanding of what social media users know about the socio-technical aspects of short-video apps when they consume and create video content. In the MINT Lab (http://mint-lab.org), we are using qualitative approaches, such as interviews and content analysis, to research users’ opinions, knowledge, and awareness in using highly popular short-video apps for communication, socialization, and entertainment. Possible research questions include: What are common user understandings of the TikTok algorithm? In what ways do users observe how the algorithm might work? How do users make use of the algorithm, and how do they “trick” it? How do users develop video creation strategies related to the algorithm, and how well do those strategies work? The goal is to study and understand how humans as users interact with social technology and how the use of social media apps is connected to and integrated into our everyday lives.

Student involvement. Students will learn how to design qualitative research projects and how to apply qualitative methods to research socio-technological aspects of social media use and engagement. This includes designing interview questions, finding and contacting potential interviewees, following best practices for conducting interviews with users, and transcribing, coding, analyzing, and interpreting interviews. Based on quality criteria for qualitative research, students will learn how to develop and validate hypotheses from interview data. The ideal student is familiar with social media platforms, has an interest in qualitative research, and is open to conducting interviews.

Mentor: Bryan Parno

Description and significance
The Secure Foundations Lab is building a variety of systems (from cryptographic protocols to file systems) that are mathematically proven to be correct, reliable, and/or secure [1], [2]. Many of these projects use a language called Dafny [3], which provides a relatively user-friendly way to develop verified software. However, Dafny provides very little by way of a standard library, making it difficult to use out of the box for non-trivial projects.

Student involvement
Students will study existing standard libraries in verified and unverified languages to codify desirable properties. They will then draw on existing ad hoc libraries written in Dafny for project-specific purposes to create the first standard libraries for Dafny. In the process, students will learn about formal verification and theorem proving; no prior experience in these areas is required—just good taste and a desire to use tools that let you write perfectly correct code.

[1] Storage Systems are Distributed Systems (So Verify Them That Way!). Travis Hance, Andrea Lattuada, Chris Hawblitzel, Jon Howell, Rob Johnson, and Bryan Parno. Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI), November, 2020
[2] Vale: Verifying High-Performance Cryptographic Assembly Code. Barry Bond, Chris Hawblitzel, Manos Kapritsos, K. Rustan M. Leino, Jacob R. Lorch, Bryan Parno, Ashay Rane, Srinath Setty, and Laure Thompson. Proceedings of the USENIX Security Symposium, August, 2017
[3] Dafny: https://github.com/dafny-lang/dafny

Mentor: Jonathan Aldrich

Description and significance
We are designing Wyvern, a next-generation programming language. Wyvern starts with the same core calculus as Scala, supporting both functional and object-oriented programming as well as advanced module systems. We've been improving that core by identifying small, clever restrictions that make typing decidable [1]. We're also exploring ideas like capabilities [2] (illustrated below), effect systems [3], language extensions as libraries [4], and executable software architecture.
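As a hedged illustration of the capability discipline (Wyvern enforces this statically through its module system; plain Python can only follow it by convention), code obtains authority to perform I/O only by being handed an explicit capability object:

```python
# Illustrative Python analogue of capability-based authority control. Wyvern
# enforces this in the type/module system; Python can only illustrate the
# convention.

class FileCapability:
    """Authority to read files; created only at the program's entry point."""
    def read(self, path: str) -> str:
        with open(path) as f:
            return f.read()

def summarize(cap: FileCapability, path: str) -> int:
    # This function touches the filesystem only via the capability it was
    # handed; code without a FileCapability has no file-reading authority.
    return len(cap.read(path))

def main():
    cap = FileCapability()               # authority originates here...
    print(summarize(cap, "notes.txt"))   # ...and flows only where passed

if __name__ == "__main__":
    main()
```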

Student involvement
There are a number of potential projects; the selection would depend on the student's interests and project progress as of the start of summer. We're interested in how high-level languages like Wyvern build on lower-level languages through foreign-function interfaces, in particular the implications for cross-platform library use and compile-time macros. Another idea is pushing forward our exploration of a powerful but decidable type system that supports multiparadigm languages such as Wyvern. Finally, it would be fun to explore efficient implementation of languages that have some of Wyvern's unique features. As the diversity of these ideas suggests, there is room for either more theoretical or more systems-focused projects, depending on the student's interests.

[1] Decidable Subtyping for Path Dependent Types. Julian MacKay, Alex Potanin, Jonathan Aldrich, and Lindsay Groves. Proc. ACM Program. Lang. 4(POPL):66, 2020.
[2] A Capability-Based Module System for Authority Control. Darya Melicher, Yangqingwei Shi, Alex Potanin, and Jonathan Aldrich. Proc. European Conference on Object-Oriented Programming (ECOOP), June 2017.
[3] A Case Study in Language-Based Security: Building an I/O Library for Wyvern. Jennifer Fish, Darya Melicher, and Jonathan Aldrich. Onward! 2020.
[4] Safely Composable Type-Specific Languages. Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich. Proc. European Conference on Object-Oriented Programming, 2014.