Exploring ELLIS-Alicante/DDMLab Research Collaborations

Workshop 2023 - Abstracts & Contacts




Day 1 - March 7, 2023

Cleotilde (Coty) Gonzalez, Ph.D.

An Overview of Human-AI Teaming Research at the DDMLab

E-mail: coty@cmu.edu

Erin Bugbee

A Cognitive Model for Deciding When to Stop: Explaining Sequential Decisions and Accounting for Learning in Optimal Stopping Tasks

Many everyday decisions are sequential. They require that we evaluate alternatives over time and decide when to stop searching through options to make a conclusive decision. Research has shown that people do not stop the sequential search at the optimal point; instead, their stopping decisions are systematically biased away from it. Current psychological models suggest that humans adjust their aspirations throughout a sequence, using "thresholds." However, these models are limited in at least three ways: (1) they do not describe the cognitive process behind stopping decisions; (2) they assume that there is no learning from experience over repetitions of a task; and (3) they are limited to specific tasks and cannot generalize to other tasks without significant modifications. Our research provides an integrated cognitive account of how stopping decisions are learned in sequential tasks. Based on a known theory of decisions from experience, Instance-Based Learning Theory (IBLT), we propose a generic inductive process to explain how these thresholds emerge. We show that the IBL model can explain observed stopping behavior. Our results demonstrate that the IBL model generates individualized thresholds similar to those generated by other models, without assuming that people make decisions using thresholds. The model can also account for how people learn to make optimal stopping decisions. In general, our approach provides an integrated, cognitively plausible process through which stopping decisions are made in sequential decision tasks, proposing that thresholds emerge through learning from experience.
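For readers less familiar with the IBLT mechanics mentioned above, the following is a minimal illustrative sketch (in Python) of an instance-based learner deciding when to stop in a toy sequential search task. It is not the authors' model: the task, the decay/noise/temperature parameters, the crude state abstraction, and the delayed-feedback scheme are all assumptions made only to show how activation, retrieval probability, and blended values combine to produce stopping decisions that change with experience.

```python
import math
import random

# Minimal, illustrative instance-based learner for a toy optimal-stopping task
# (deciding when to accept a value from a random sequence). It sketches the
# generic IBLT mechanics -- activation, retrieval probability, blended values --
# and is NOT the authors' model; the task, parameters, state abstraction, and
# feedback scheme are assumptions made for illustration only.

DECAY = 0.5     # memory decay
NOISE = 0.25    # activation noise
TEMP = 0.25     # Boltzmann temperature for retrieval probabilities
SEQ_LEN = 10

class IBLStopper:
    def __init__(self):
        self.memory = {}   # (situation, action) -> list of (timestamp, outcome)
        self.t = 1

    def _activation(self, timestamp):
        return math.log((self.t - timestamp) ** (-DECAY)) + random.gauss(0.0, NOISE)

    def _blended(self, situation, action, default=5.0):
        instances = self.memory.get((situation, action), [])
        if not instances:
            return default                          # optimistic default utility
        acts = [self._activation(ts) for ts, _ in instances]
        weights = [math.exp(a / TEMP) for a in acts]
        total = sum(weights)
        return sum((w / total) * outcome
                   for w, (_, outcome) in zip(weights, instances))

    def choose(self, position, value):
        situation = (position, value >= 5.0)        # crude state abstraction (assumed)
        action = ("stop" if self._blended(situation, "stop")
                  >= self._blended(situation, "continue") else "continue")
        return action, situation

    def record(self, situation, action, outcome):
        self.memory.setdefault((situation, action), []).append((self.t, outcome))
        self.t += 1

def run_episode(agent):
    values = [random.uniform(0.0, 10.0) for _ in range(SEQ_LEN)]
    trace, accepted = [], values[-1]
    for pos, v in enumerate(values):
        action, situation = agent.choose(pos, v)
        trace.append((situation, action))
        if action == "stop" or pos == SEQ_LEN - 1:
            accepted = v
            break
    # Delayed feedback: every decision in the episode is credited with the
    # value that was finally accepted (one simple feedback scheme, assumed).
    for situation, action in trace:
        agent.record(situation, action, accepted)
    return accepted

if __name__ == "__main__":
    agent = IBLStopper()
    payoffs = [run_episode(agent) for _ in range(500)]
    print("mean payoff, first 100 vs last 100 episodes:",
          round(sum(payoffs[:100]) / 100, 2), round(sum(payoffs[-100:]) / 100, 2))
```

The intent of the sketch is that accepted payoffs improve over episodes as the memory accumulates instances, illustrating how threshold-like behavior can emerge from experience without any explicit threshold parameter.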

E-mail: ebugbee@andrew.cmu.edu

Jeffrey Flagg, M.S.

Theory of Mind Metrics and Individual Differences

Theory of mind (ToM) is the ability to infer the beliefs, desires, and intentions of others. ToM is a critical component of social understanding and has been widely studied in a variety of contexts. As artificial intelligence continues to develop, creating computational representations of ToM will be a significant challenge. Unfortunately, despite substantial interest, how to measure individual differences in ToM among neurotypical adults remains elusive. In this talk, we will describe our attempts to find metrics that address this measurement gap.

E-mail: jflagg@andrew.cmu.edu

Tyler Malloy, Ph.D.

Overcoming Biases and Constraints in Multi-Agent Games

Many tasks involving interactions between humans and AI can be thought of as analogous to different types of multiplayer games, whether cooperative, competitive, or a mix of the two. Current AI methods for achieving high performance in multiplayer games rely on maximizing an in-game performance objective. However, this approach can lead to issues of generalizability, most notably in how well the AI can play games with and against humans. One source of improved generalizability that allows humans to excel in these situations is a lifetime of experience overcoming biases and constraints in the real world, which directs their learning and decision making in multi-agent games. In this talk, I will discuss how biases and constraints in learning and decision making can positively impact the generalizability of human behavior in multiplayer games. Additionally, I will suggest that understanding how humans overcome these issues can guide the development of cognitive models of multiplayer dynamic decision making, as well as methods for human-AI teaming.

E-mail: tylermal@andrew.cmu.edu

Chase McDonald

Cooperative Partners for Human-AI Teaming

In systems involving human-AI interaction, it is critical that we develop not only capable AI agents but also agents that complement the behavior of their human counterparts. AI partners and teammates must be able to shift their behaviors and plans according to the observed behavior of human partners. In this talk, we will discuss ongoing work on the development and evaluation of autonomous systems that are reactive and adaptive to the behavior of their human partners.

E-mail: chasemcd@andrew.cmu.edu

Baptiste Prebot, Ph.D.

Towards Human-AI Collaboration for Autonomous Cyber Defense

Rapidly evolving cyber attacker capabilities present cyber security researchers with two major challenges: (1) developing intelligent defense systems that are able to learn and understand the dynamic strategies of attackers to efficiently anticipate and counter their decisions, and (2) evaluating the capability of these intelligent defense systems to produce defense behaviors comparable to those of expert cyber defenders. We developed a research environment in which cognitive defense agents (IBL agents), humans, and teams of both can perform a cyber defense task. We offer two cyber games: the Interactive Defense Game (IDG), in which human defenders perform the same individual scenario simulated with the IBL model, and the Team Defense Game (TDG), in which a human defender can team up with an AI. This structure allows for the study of IBL models as teammates in human-autonomy teams and for their comparison to other types of defense algorithms and decision aids.

E-mail: Baptiste.Prebot@ensc.fr

Yinuo Du

Empirical Evaluation of Cyber Deception

Adversary emulation is commonly used to test cyber defense performance against known threats to organizations. However, designing attack strategies is an expensive and unreliable manual process, based on subjective evaluation of the state of a network. In this work, we propose the design of adversarial human-like cognitive models that are dynamic, adaptable, and able to learn from experience. A cognitive model is built according to the theoretical principles of Instance-Based Learning Theory (IBLT) of experiential choice in dynamic tasks. In a simulation experiment, we compared the predictions of an IBL attacker with those of a carefully designed, efficient but deterministic attacker attempting to access an operational server in a network. The results suggest that an IBL cognitive model that emulates human behavior can be a more challenging adversary for defenders than carefully crafted optimal attack strategies. These insights can inform future adversary emulation efforts and cyber defender training.

E-mail: yinuod@andrew.cmu.edu

Maria José Ferreira

Creating Technology to Improve Social Agents

The advancement of technology, specifically in AI, robotics, and automation, presents opportunities to explore new forms of interaction and personalization that were not previously possible. With the integration of AI and machine learning algorithms, systems can now adapt to individual users' needs and provide personalized experiences. In this presentation, I will delve into two specific research projects that tackle the question of how we can use technology to enhance engagement and personalization for individuals. The first project focuses on using social agents to increase children's interest in natural heritage sites. By providing a relatable and interactive experience through social agents, this project aims to make learning about natural heritage more engaging and personal for children. The second project examines the impact of intergroup competition on cooperation in situations involving a collective risk. By understanding how competition can affect the way individuals interact and make decisions in group settings, this project aims to provide insights into how technology can be designed to promote cooperation and collaboration among individuals. Together, these projects offer valuable insights into how technology can be utilized to improve the user experience and enhance personalization in various settings.

E-mail: mjrf85@gmail.com

Thuy-Ngoc Nguyen, Ph.D.

Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models

Developing effective Multi-Agent Systems (MAS) is critical for many applications requiring collaboration and coordination. State-of-the-art Multi-Agent Deep Reinforcement Learning (MADRL) models struggle to perform well in coordination problems wherein agents must coordinate with each other without communication and learn from stochastic rewards. Humans, by contrast, often learn rapidly to adapt to nonstationary environments that require coordination among people. In this talk, motivated by the demonstrated ability of cognitive models based on Instance-Based Learning Theory (IBLT) to capture human decisions in many dynamic decision-making tasks, we propose three variants of Multi-Agent IBL models (MAIBL). These MAIBL algorithms combine the cognitive mechanisms of IBLT with techniques from MADRL models to address coordination in MAS from the perspective of non-communicative (i.e., independent) learners. We demonstrate that the MAIBL models learn faster and achieve better coordination than current MADRL models in a dynamic Coordinated Multi-agent Object Transportation Problem (CMOTP) with various settings of stochastic rewards.

E-mail: thuyngon@andrew.cmu.edu


Day 2 - March 8, 2023

Nuria Oliver, Ph.D.

ELLIS and ELLIS Alicante

E-mail: nuria@ellisalicante.org

Aditya Gulati

Human Cognitive Biases and AI

Human perception, memory, and decision-making are affected by dozens of cognitive biases and heuristics that influence our actions and decisions. Despite the pervasiveness of such biases, they are generally not leveraged by today's Artificial Intelligence (AI) systems that model human behavior and interact with humans. My research focuses on bridging this gap. We propose that the future of human-machine collaboration will entail the development of AI systems that model, understand, and possibly replicate human cognitive biases. To tackle this problem, we have proposed a new taxonomy of existing cognitive biases centered on the perspective of AI systems, identified three broad areas of interest, and outlined research directions for the design of AI systems that can account for our biases. Currently, my research focuses on the attractiveness halo effect, i.e., the cognitive bias that makes us associate positive attributes with people who are perceived as attractive.

E-mail: aditya@ellisalicante.org

Piera Riccio

Artificial Intelligence, Beauty and Culture

E-mail: piera@ellisalicante.org

Julien Colin

Human-Centric Algorithmic Transparency

AI systems have become the de facto tools for solving complex problems in computer vision. Yet, it has been shown that these systems might not actually be safe to deploy in the real world, as they too often rely on dataset biases and other statistical shortcuts to achieve high performance. A growing body of research thus focuses on the development of explainability methods to better interpret these systems and make them more trustworthy. In this talk, I will first give a general overview of the methods commonly used in Explainable AI, before discussing the challenges that the community still needs to overcome.
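As a concrete example of the kind of attribution method such an overview typically covers, here is a minimal input-gradient saliency sketch in PyTorch. It is a generic illustration, not a method from the speaker's work; the pretrained ResNet-18 and the image path are placeholder assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic input-gradient saliency map: one of the simplest attribution methods
# in Explainable AI for vision. Purely illustrative; the pretrained model and
# the image path below are placeholders, not tied to the speaker's work.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)  # placeholder path
image.requires_grad_(True)

logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()            # gradient of the top-class score w.r.t. the pixels

saliency = image.grad.abs().max(dim=1).values.squeeze(0)   # (H, W) pixel-importance map
print("saliency shape:", tuple(saliency.shape))
```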

E-mail: julien@ellisalicante.org

Adrian Arnaiz

Algorithmic Fairness

Machine learning models are becoming major tools for addressing complex social problems and are also increasingly used to make or support decisions about individuals in many consequential areas of their lives, from justice to healthcare. It is therefore necessary to consider the ethical implications of such decisions, including concepts such as privacy, transparency, accountability, reliability, trustworthiness, autonomy, and fairness. Specifically, we will explain the current landscape of algorithmic fairness in AI, i.e., ensuring that algorithms make unbiased decisions without discriminating. We will cover the reasons why these algorithms make biased decisions as well as ways to address the problem. In addition, we will comment on how algorithmic fairness also arises in the valuation of our data and in social networks. The main goal is to provide an overview of what algorithmic fairness is, as well as the main technical and social challenges that the community has to address.
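To make "unbiased decisions without discriminating" concrete, the following small sketch computes two standard group-fairness metrics on toy data; the labels, predictions, and group memberships are invented for illustration and are not taken from the talk.

```python
import numpy as np

# Two standard group-fairness metrics on toy data. Everything below (labels,
# predictions, protected attribute) is randomly generated for illustration.

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)    # ground-truth outcomes
group = rng.integers(0, 2, size=1000)     # protected attribute (group 0 / group 1)
y_pred = rng.integers(0, 2, size=1000)    # model decisions

def demographic_parity_difference(y_pred, group):
    # Difference in positive-decision rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print("demographic parity difference:", round(demographic_parity_difference(y_pred, group), 3))
print("equal opportunity difference: ", round(equal_opportunity_difference(y_true, y_pred, group), 3))
```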

E-mail: adrian@ellisalicante.org

Gergely Nemeth

Privacy-Preserving Federated Learning

Federated Learning (FL) was proposed as a privacy-preserving technique. Keeping the training data on the user's device and sharing only the model updates with the server sounds promising for privacy protection, and it aligns well with data protection regulations. Since then, many works have discussed FL's vulnerabilities, showing that in some cases it preserves privacy worse than centralized machine learning. Here we summarize these problems and try to answer two questions: when should a client participate in FL, and what can the designer do, in terms of privacy, to motivate clients to accept the collaboration?
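For context on where the shared updates come from, here is a minimal sketch of a federated-averaging loop: the simulated clients keep their raw data local and send only model updates to the server. The linear-regression task, number of clients, and hyperparameters are toy assumptions; the point of the talk is precisely that even these shared updates can still leak private information.

```python
import numpy as np

# Minimal federated-averaging loop: each simulated client trains locally on its
# own data and only sends a model update to the server, which averages them.
# The linear-regression task and hyperparameters are toy assumptions.

rng = np.random.default_rng(1)
TRUE_W = np.array([2.0, -1.0])

def make_client(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=n)
    return X, y                                   # stays on the client, never sent

clients = [make_client() for _ in range(5)]
global_w = np.zeros(2)

for _ in range(20):                               # communication rounds
    updates = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):                       # local SGD steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        updates.append(w - global_w)              # only the update is shared
    global_w += np.mean(updates, axis=0)          # server-side aggregation

print("learned weights:", np.round(global_w, 3))
```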

E-mail: gergely@ellisalicante.org

Kajetan Schweighofer

Modeling Uncertainty in ML Models

Quantifying uncertainty is important for actionable predictions in real-world applications. This is of utmost importance in high-stakes applications, such as medical diagnosis or drug discovery, where human lives or extensive investments are at risk. Furthermore, foundation models and specialized models obtained externally are becoming more and more widespread, including in such high-stakes applications. It is crucial to assess the robustness and reliability of those unknown models before applying them. However, quantifying predictive uncertainty requires certain design choices that may not be satisfied by externally obtained models. In this talk, we will give an introduction to current methods for quantifying predictive uncertainty, with a focus on how to apply them to deep learning models. Furthermore, we will talk about Adversarial Models, a new method we recently introduced to quantify the predictive uncertainty of a given, pre-selected model.
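As one standard baseline among the methods such a talk surveys, the sketch below estimates predictive uncertainty with a small deep ensemble, using disagreement across independently trained networks as the uncertainty signal. The toy regression task, architecture, and ensemble size are arbitrary illustration choices; this is not the Adversarial Models method mentioned in the abstract.

```python
import torch
import torch.nn as nn

# Predictive uncertainty from a small deep ensemble: train several networks
# independently and use their disagreement (standard deviation of predictions)
# as the uncertainty estimate. Toy 1-D regression task chosen for illustration.

torch.manual_seed(0)
x_train = torch.linspace(-3, 3, 200).unsqueeze(1)
y_train = torch.sin(x_train) + 0.1 * torch.randn_like(x_train)

def train_member():
    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x_train), y_train)
        loss.backward()
        opt.step()
    return net

ensemble = [train_member() for _ in range(5)]

x_test = torch.linspace(-6, 6, 7).unsqueeze(1)    # includes out-of-distribution points
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])   # (members, points, 1)
mean, std = preds.mean(dim=0).squeeze(), preds.std(dim=0).squeeze()

for xi, m, s in zip(x_test.squeeze(), mean, std):
    print(f"x = {xi:+.1f}  prediction = {m:+.3f}  uncertainty (std) = {s:.3f}")
```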

E-mail: kajetan.schweighofer@jku.at

Kaylin Bolt

Data and AI to Inform Policy Making During the COVID-19 Pandemic

The COVID-19 pandemic created a pressing need for policymakers to turn to non-traditional data and analysis techniques where traditional ones were previously inaccessible and/or not meaningful. The urgency of a global pandemic accelerated the use of data sources that had not been widely used for public health purposes, including unorthodox data on (1) health, (2) mobility and geolocation, (3) economics, and (4) sentiments or attitudes. While this has introduced new opportunities for data-informed public health policies, it has also brought forth critical ethical considerations related to consent, privacy, equity, and bias, among others. There are important lessons to be learned in order to assess what these data and technologies offer in future contexts and how best to apply them. This presentation will summarize an ongoing case-study project exploring how non-traditional data sources became part of government crisis response efforts during the COVID-19 pandemic in the regional communities of Piedmont, Italy, and the Comunidad Valenciana, Spain. Findings will be shared as available.

E-mail: kaylin@ellisalicante.org

Chiara Natali

Open, Multiple, Adjunct - A research agenda on Human-AI Collaboration Protocols

Human-AI Collaboration Protocols (HAI-CP) are an integrated set of rules and policies that stipulate how competent practitioners use AI tools, enabling effective hybrid decision-making agencies. The requirement of adjunction concerns HAI-CPs in which the AI component is relegated to the edges of the process, under continuous human oversight. In this talk, researchers and developers of AI systems are invited to focus on the process-oriented and relational aspects of the joint action of humans and machines working together. This entails, among other things, evaluating human-plus-machine systems as a whole, in order to design HAI-CPs that not only improve AI usability but also guarantee user satisfaction and human and social sustainability, and mitigate the risks of automation bias, technology over-reliance, and user deskilling.

E-mail: chiara.natali@unimib.it
