Organized by
Horacio Arlo-Costa
hcosta@andrew.cmu.edu
Department of Philosophy
Center for Formal Epistemology
Carnegie Mellon University
Cleotilde Gonzalez
coty@cmu.edu
Department of Social & Decision Sciences
Dynamic Decision Making Laboratory
Carnegie Mellon University

This workshop is sponsored by the Center for Formal Epistemology, based in the Department of Philosophy at Carnegie Mellon University. The department has a long tradition of interdisciplinary work on bounded rationality, heuristics, and choice. One aim of this workshop is to celebrate the memory and the work of Herb Simon, who was an active member of the department and the CMU community. Several members of the department remain interested in the program of bounded rationality that Simon originally proposed, and this workshop intends to continue work in that direction.

The main topics are:
(1) The role of heuristics in choice and inference, especially recent work on the Priority Heuristic.
(2) Decisions from experience and description. Experimental and modeling results.
(3) Foundational issues about bounded rationality.

Invited Discussants-at-Large:

  • David Danks (Carnegie Mellon University)
  • Kevin Kelly (Carnegie Mellon University)
  • Richard Samuels (Ohio State University)

    This event is free and open to the academic community. No registration is required.

  • Time / Program Title / Presenter
    8:30-9:00 AM Gathering, breakfast, and introduction to the workshop

    Horacio Arlo-Costa
    (Carnegie Mellon University)

    9:00-9:45 AM
    Decisions from Experience: The Scope and Policy of Search

    In many decision-making situations, we cannot consult explicit statistics about the risks associated with our possible actions. In lieu of such data, we can arrive at an understanding of risky options by sampling from them. Based on the sampled information, we then can render a "decision from experience." One invaluable advantage of the decision-from-experience paradigm is that it lays bare what otherwise is often hidden: people's information search policy, both in terms of scope and process. Studies of decisions from experience have observed that people tend to rely on relatively small samples from payoff distributions (frugal search). Furthermore, these studies have revealed two prototypical search strategies: repeatedly switching back and forth between two options versus evaluating each option independently and transitioning only once (search policy). This talk offers an explanation for people's frugal search and reveals how search policies affect the ultimate decision.
    Ralph Hertwig
    (University of Basel)
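
    The mechanics of the sampling paradigm described above can be illustrated with a minimal sketch (the function names, payoff encoding, and sample size here are illustrative assumptions, not part of the talk): an agent draws a small number of free samples from each payoff distribution and then chooses by sample mean. With frugal search, rare outcomes are often never encountered at all.

```python
import random

def decision_from_experience(option_a, option_b, samples=10, rng=random):
    """Sketch of the sampling paradigm: draw free samples from two
    payoff distributions, then choose the option with the higher
    sample mean. Options are lists of (outcome, probability) pairs."""
    def draw(option):
        r, acc = rng.random(), 0.0
        for outcome, p in option:
            acc += p
            if r < acc:
                return outcome
        return option[-1][0]  # guard against rounding in probabilities
    mean_a = sum(draw(option_a) for _ in range(samples)) / samples
    mean_b = sum(draw(option_b) for _ in range(samples)) / samples
    return "A" if mean_a >= mean_b else "B"

# With small samples, the rare large outcome of the risky option is
# frequently never observed, pushing choice toward the safe option:
risky = [(32, 0.1), (0, 0.9)]   # expected value 3.2
safe = [(3, 1.0)]               # expected value 3.0
print(decision_from_experience(risky, safe, samples=5))
```

    This is only a caricature of the choice rule; the point it makes is about search scope, not about the particular decision criterion.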
    9:45-10:30 AM
    A Simple Comparative Choice Heuristic for Play in 2x2 Games

    In this paper I present a simple model of strategy choice involving comparisons of payoffs across strategic options. The model implies that play may vary systematically with theoretically inconsequential changes in payoff values; predicts which equilibrium will obtain in games with multiple equilibria; predicts circumstances in which non-equilibrium outcomes may predominate in certain games; predicts circumstances in which specific pure-strategy outcomes will predominate in games with no pure-strategy equilibria; predicts violations of iterative dominance; and implies that play in games will be subject to framing effects. These predictions are tested and confirmed for five types of 2x2 games (battle-of-the-sexes games, matching games, iterative dominance games, stag hunts, and games of pure conflict).
    Jonathan Leland
    (National Science Foundation)
    10:30-10:45 AM MORNING BREAK
    10:45-11:30 AM
    Fast and Frugal Heuristics in Perspective: Take the Best and the Priority Heuristic

    The first part of the talk focuses on the heuristic known as Take the Best (TTB). In spite of the centrality of this heuristic in recent debates in psychology, little is known about its mathematical properties. We try to fill this gap by examining the mathematical properties of the choice function it implements. We start by extending TTB beyond binary comparisons and thereupon characterizing it with respect to functional constraints. We focus on two such extensions, one in terms of maximization and another in terms of a bounded method based on successive selection from lists, as studied by Rubinstein and Salant (2006). We offer a complete characterization of this second extension. Gigerenzer sometimes suggests that there is a conflict between norms of rationality and his heuristics: he argues that the cognitive success of TTB and other heuristics is in a way linked to the violation of basic norms of rationality. Nevertheless, the picture that emerges from our study of extensions of TTB is more nuanced and in a way conflicts with this radical view about rationality.

    While TTB is concerned with fast and frugal inferences, the Priority Heuristic as proposed by Brandstätter, Gigerenzer, and Hertwig (2006) is concerned with fast and frugal decision-making. We show that it is possible to extend the Priority Heuristic both beyond binary choice and to cases of uncertainty. We conclude with considerations about the range of applicability of the heuristics and a discussion of some objections presented in the literature (extended to the case of decisions under conditions of uncertainty).

    Work done in collaboration with Horacio Arló-Costa (CMU).

    Paul Pedersen
    (Carnegie Mellon University)
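
    The lexicographic core of TTB in the binary-comparison case (the starting point for the extensions discussed in the talk) can be sketched as follows; the cue encoding and function name here are illustrative assumptions:

```python
def take_the_best(cues_a, cues_b):
    """Take the Best for a binary comparison.

    cues_a, cues_b: cue values (1/0, or None if unknown) for options
    A and B, ordered by descending cue validity. The first cue that
    discriminates between the options decides the choice; if no cue
    discriminates, the heuristic must guess (signalled here by None).
    """
    for a, b in zip(cues_a, cues_b):
        if a is not None and b is not None and a != b:
            return "A" if a > b else "B"
    return None  # no cue discriminates: guess

# A wins on the first discriminating cue, even though B scores
# higher on every later cue -- the hallmark of lexicographic choice:
print(take_the_best([1, 0, 0], [0, 1, 1]))  # → A
```

    The non-compensatory character visible here (later cues can never overturn an earlier one) is precisely what makes the functional characterization of TTB's choice behavior a non-trivial question.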
    11:30-12:15 PM
    Muddling Through and Making Do: Bounded Rationality and Its Impact on How We Decide

    My presentation will reflect on what we have learned about bounded rationality since Herb Simon coined the term in 1957. I will review the multiplicity of ways in which people have been shown to make judgments and choices, and the challenges as well as opportunities this provides for predicting what people will decide in a given situation and for helping people make choices they are happier with in the long run. With attention and processing capacity in short supply, we use local context, current state, and recent experience to interpret information and assign importance to different subsets of goals. This gives rise to human performance that in some areas is unsurpassed (pattern recognition) and in other areas is frequently regretted after the fact (impatience or procrastination). The machinery and constraints of bounded rationality give rise to human judgment and choice that is both adaptive and at times inconsistent and suboptimal. I will also argue that deeper reflection on the nature of bounded rationality can improve not only the real-world decisions of ordinary people, but also the choices made by decision researchers about how we allocate our scarce research attention and voice.
    Elke Weber
    (Columbia University)
    12:15-1:30 PM LUNCH
    1:30-2:15 PM
    What is Quantum Cognition, and How Can It Be Used to Model Human Judgment and Decision Behavior?

    Judgment and decision making researchers face some of the same types of puzzling problems that forced physicists to abandon classical theory. Judgments are not simply recalled and recorded; instead, they are constructed online, and these judgments can be incompatible, so that the first judgment may disturb or interfere with a second. Thus only partial information about the whole cognitive system can be obtained at any point in time. Combining partial information about a system into a coherent understanding of the entire system is the hallmark of quantum theory. Quantum theory provides a fundamentally different approach to logic, reasoning, and probabilistic inference. In this paper, I will discuss (a) how quantum logic helps explain why people fail to follow the distributive axiom of Boolean logic; (b) how quantum probability helps us understand why human judgments disobey the Kolmogorov law of total probability; and (c) how quantum theory helps predict when decision makers fail to obey the rational axioms of decision theory.
    Jerome Busemeyer
    (Indiana University at Bloomington)
    2:15-3:00 PM
    Instance-Based Learning: Integrating Decisions from Experience in Sampling and Repeated Choice Paradigms

    In decisions from experience, there are two experimental paradigms: decisions from sampling and repeated choice. In the sampling paradigm, people are asked to sample from two alternatives as many times as they want, observing the outcome with no real consequences each time, and then to select one of the two alternatives for real (i.e., a choice that causes them to earn or lose money). In the repeated choice paradigm, each selection of one of the two alternatives affects people’s earnings, and they receive immediate feedback on obtained outcomes. These two experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each of them. We argue that the two paradigms share common cognitive processes well represented and predicted by Instance-Based Learning Theory (IBLT). This research demonstrates that the same cognitive model based on IBLT captures people’s risk preferences in both paradigms better than individual models that have been created to account for each paradigm separately. Furthermore, we demonstrate that the model accurately predicts the sequences of sampling and repeated choice observed in human data.

    Work done in collaboration with Varun Dutt (CMU).

    Coty Gonzalez
    (Carnegie Mellon University)
    3:00-3:15 PM AFTERNOON BREAK
    3:15-4:00 PM
    Folk Choice Theory: Consequences of Gambling in a Structured Environment

    In life, risk is reward. Almost always, the big rewards we seek to gain (and the big losses we seek to avoid) are relatively unlikely to occur. This relationship between risk and return is obvious to the financial community and to lay people alike. Nevertheless, theories of risky choice have largely ignored the impact of this relationship on choice. During this talk, I will develop an alternative descriptive theory of risky decision-making, folk choice theory, which takes as a central premise that agents have a lay understanding of this risk/return relationship and use it to make more effective decisions in all types of risky situations. Folk choice theory offers a process-level explanation of a range of phenomena. For instance, in decisions made under uncertainty, folk choice theory presumes decision makers use this risk/return relationship to infer the probability of an outcome. This process-level hypothesis offers an explanation for several phenomena, including the so-called Ellsberg paradox and phenomena relating to a presumed non-linear probability weighting function. Folk choice theory also makes new testable predictions involving how agents may learn about the likelihood of events via the bets they are offered. Finally, I will discuss some of the methodological implications of folk choice theory, including decision science’s overreliance on systematic designs that treat outcomes and probabilities as independent variables that can be manipulated independently of each other.
    Tim Pleskac
    (Michigan State University)
    4:00-4:45 PM
    Ameliorative Psychology and the Limits of Traditional Epistemology

    My main goals in this presentation are to show that the standard argument against naturalized epistemology has it almost exactly backwards, and that Statistical Prediction Rules represent many classes of formal methods that could supplement or replace Standard Analytic Epistemology (SAE). SAE names a contingently clustered class of methods and theses that have dominated English-speaking epistemology for about the past half-century. The major contemporary theories of SAE include versions of foundationalism (Chisholm 1981, Pollock 1974), coherentism (Bonjour 1985, Lehrer 1974), reliabilism (Dretske 1981, Goldman 1986) and contextualism (DeRose 1995, Lewis 1996). While proponents of SAE don’t agree about how to define naturalized epistemology, most agree that a thoroughgoing naturalism in epistemology can’t work.

    I will argue for the following five theses:
    1. The dominant theories of Standard Analytic Epistemology (foundationalism, coherentism, reliabilism, contextualism) have at their core a descriptive theory.
    2. This descriptive theory aims to capture the considered epistemic judgments of a small group of idiosyncratic people.
    3. The standard charge leveled against naturalistic epistemology can also be leveled against the dominant theories of Standard Analytic Epistemology: They attempt to extract prescriptions from descriptions.
    4. Some of the best psychological science of the past half-century is deeply normative and makes specific recommendations about how to improve our reasoning about matters of great practical significance.
    5. An approach to epistemology that takes seriously these psychological findings is better suited to overcoming the is-ought gap than are the theories of SAE.

    I will then move to the positive part of the talk. Mike Bishop’s and my work contends that our Ameliorative Psychology (partly prompted by Simon’s work on “bounded cognition”) is superior to SAE because it provides a motivated way of overcoming the is-ought divide. The normative recommendations and evaluative theses of Ameliorative Psychology can receive confirmation by the best science of the day. And some of these recommendations have been impressively confirmed, in the form of documented results and a proven method for securing them. Standard Analytic Epistemology, on the other hand, has a long tradition and the loyalty of its enthusiasts.

    J.D. Trout
    (Loyola University Chicago)
    4:45-5:30 PM
    Decisions from Experience in Conditions of Uncertainty

    Hertwig, Barron, Weber, and Erev (2004) initiated a stream of studies where aspects of prospects are not conveyed verbally to subjects (description) but must instead be inferred from repeated observations (experience). This has led to well-known violations of prospect theory. Hadar and Fox (2006) have argued that decisions from experience have been misclassified as decisions under risk, and have proposed to treat them as decisions under uncertainty. Accordingly, they propose to explain the data in experience by appealing to the 'two-stage model' of Fox and Tversky (1989) (while the usual decision weights of Prospect Theory I explain description). This model of ambiguity implements a form of Choquet expected utility in terms of non-additive event-decision weights (also known as Prospect Theory II). In this talk we report experimental results where both description and experience should be classified as decisions under uncertainty. Although it is very difficult (if at all possible) to find an experiential counterpart of a vague Ellsberg urn (for the two-color problem), we appeal to a chance set-up based on double sampling that determines an option subjects perceive as lying between 'clear' and 'vague' Ellsberg choices. This chance set-up is implementable in experience. Current results indicate that while subjects are ambiguity averse for gains in description (as Prospect Theory II predicts), the effect reverses under experience, where subjects are ambiguity seeking. Prospect Theory II seems unable to explain this effect. We conclude with some hypotheses about the cause of this new asymmetry between decisions from experience and description, and by considering possible mathematical models of the asymmetry that appeal to well-known techniques in the contemporary literature on imprecise probabilities.

    Work done in collaboration with Coty Gonzalez and Varun Dutt (CMU), and Jeff Helzner (Columbia).

    Horacio Arlo-Costa
    (Carnegie Mellon University)