We make decisions in increasingly complex, highly uncertain, and dynamic environments that evolve over time in intricate ways. Most of the time, choices are distributed in space and time, and we must search for potential alternatives sequentially, exploring the environment to determine the best option at a particular time. Sometimes, information overload, time constraints, and other types of dynamic complexity challenge our ability to process information and make accurate decisions. The focus of the DDMLab research is to develop a theoretical understanding of the process by which humans make decisions in dynamic environments, and to provide practical demonstrations of how this theoretical knowledge can be used to improve human dynamic decision making and general performance in a number of practical domains.
Our research approach involves laboratory experiments and cognitive computational models. Laboratory experiments rely on dynamic games in which humans make choices over time and space, individually and in teams, and from which we extrapolate robust phenomena and behavioral insights. Computational, actionable cognitive models concretize the decision-making process and the cognitive mechanisms involved into a computational algorithm. Ultimately, it is the combination of these two methods that has allowed us to derive theoretical conclusions about dynamic decision making and to spawn novel applications that offer potential solutions to major societal problems, including cybersecurity, phishing, climate change, and human-machine interactions.
The figure below represents our technical approach. Data are collected from two sources: a human interacting and making decisions in a task, and a computational model interacting and making sequential decisions in the same task. Predictions from computational models and observed data from humans are compared at many different levels (e.g., over-time learning and dynamic effects, overall averages of optimal behavior, overall risky behavior, variance in behavior, etc.). From the comparison of human and model choices, we derive conclusions regarding the human decision-making process and corroborate or refute the predictions of our theoretical principles.
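One concrete way to compare model predictions with human data at these different levels is a simple fit metric over learning curves. The sketch below is purely illustrative: the choice proportions, the block structure, and the `msd` helper are hypothetical stand-ins, not data or code from the DDMLab.

```python
# Hypothetical illustration: comparing model predictions with human data.
# The arrays below are made-up choice proportions per learning block; in
# practice these would come from experiment logs and model simulation runs.

def msd(model, human):
    """Mean squared deviation between model and human choice proportions."""
    assert len(model) == len(human)
    return sum((m - h) ** 2 for m, h in zip(model, human)) / len(model)

# Proportion of "maximizing" choices in each of five blocks of trials.
human_curve = [0.52, 0.61, 0.68, 0.74, 0.77]   # observed (illustrative)
model_curve = [0.50, 0.63, 0.70, 0.72, 0.78]   # predicted (illustrative)

# Over-time fit (shape of the learning curve) vs. overall average level.
curve_fit = msd(model_curve, human_curve)
overall_human = sum(human_curve) / len(human_curve)
overall_model = sum(model_curve) / len(model_curve)
level_fit = abs(overall_model - overall_human)
```

A model can match the overall average while missing the learning dynamics, which is why comparisons are made at multiple levels rather than on a single summary statistic.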
Cognitive models, computational representations of human behavior, are developed according to Instance-Based Learning Theory (IBLT) (Gonzalez, Lerch, & Lebiere, 2003). Generally, IBLT proposes that decisions are made by generalizing the outcomes of past experiences (i.e., instances) according to the similarity of their attributes to the current decision situation. An instance is a memory unit that results from the evaluation of potential options. Instances consist of three elements: state (a set of attributes that give context to the decision); decision (the action taken); and utility (the expected or experienced value of the action taken in a particular state). The IBLT process involves a sequential evaluation of potential alternatives to determine the expected utility of each option being evaluated. The model then stops the evaluation of alternatives and selects the option with the maximum expected utility. Decisions are then updated according to environmental feedback.
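The instance structure and decision cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not the theory's full mechanism: it assumes, for simplicity, that the expected utility of an option is the running mean of its stored outcomes (and a default utility for unexplored options), whereas IBLT proper blends outcomes weighted by activation-based retrieval. The agent class, payoffs, and parameter values are hypothetical.

```python
# Minimal sketch of IBLT's instance structure and decision cycle
# (simplified: expected utility as a running mean, no activation-based blending).
from dataclasses import dataclass

@dataclass
class Instance:
    state: tuple      # attributes giving context to the decision
    decision: str     # the action taken
    utility: float    # expected or experienced value of that action

class IBLAgent:
    def __init__(self, default_utility=1.0):
        self.memory = []                 # stored instances
        self.default = default_utility   # assumed utility of unexplored options

    def expected_utility(self, state, option):
        matches = [i.utility for i in self.memory
                   if i.state == state and i.decision == option]
        return sum(matches) / len(matches) if matches else self.default

    def choose(self, state, options):
        # Sequentially evaluate alternatives, then select the maximum.
        return max(options, key=lambda o: self.expected_utility(state, o))

    def feedback(self, state, decision, outcome):
        # Store the experienced outcome as a new instance.
        self.memory.append(Instance(state, decision, outcome))

# Illustrative run: the "risky" option here always pays more, so the agent
# learns to prefer it after sampling both alternatives.
agent = IBLAgent(default_utility=5.0)
state = ("context",)
for _ in range(10):
    choice = agent.choose(state, ["safe", "risky"])
    outcome = 3.0 if choice == "safe" else 4.0   # made-up payoffs
    agent.feedback(state, choice, outcome)
```

Setting the default utility above both payoffs drives initial exploration of every option, a common device in instance-based learning models.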
Importantly, IBLT relies on the Activation Equation from the ACT-R architecture (Anderson & Lebiere, 1998). This equation represents basic memory effects such as recency, frequency, similarity, and noise.
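The base-level component of that equation can be sketched as follows. The core term is ACT-R's base-level learning, A = ln(Σ_j (t − t_j)^(−d)) + noise, where t_j are the past times an instance was used and d is a decay parameter; the recency and frequency effects fall out of the sum directly. The noise term here is a simplified uniform stand-in for ACT-R's logistic noise, and the parameter values are illustrative, not the lab's fitted values.

```python
# Sketch of ACT-R base-level activation: A = ln(sum_j (now - t_j)^(-d)) + noise.
# Recent uses contribute larger terms (recency); more uses enlarge the sum
# (frequency). d and s are free parameters; values below are illustrative.
import math, random

def activation(use_times, now, d=0.5, s=0.0, rng=random):
    """Base-level activation of an instance used at the given past times."""
    base = math.log(sum((now - t) ** (-d) for t in use_times))
    noise = s * rng.uniform(-1, 1)   # simplified stand-in for logistic noise
    return base + noise

# Frequency effect: three uses beat one use (noise switched off, s=0).
freq_effect = activation([1, 3, 5], now=6) > activation([3], now=6)
# Recency effect: a more recent single use has higher activation.
rec_effect = activation([5], now=6) > activation([1], now=6)
```

In an IBL model these activations determine which instances are retrieved and how strongly each stored outcome weighs in the blended expected utility of an option.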
We have developed a platform for modelers interested in developing IBL models. PyIBL is an implementation of some of IBLT's processes in Python. If you are interested in getting started with IBL modeling, visit https://www.cmu.edu/dietrich/sds/ddmlab/downloads.html, where you can download PyIBL, its manual, and many examples of IBL models.