Saturday, December 5, 2015 - Carnegie Mellon, Baker Hall, Dean’s Conference Room, 154R
A variety of phenomena have motivated researchers to distinguish between different types of linguistic content. One classical distinction is that made by Austin (1962) and Searle (1969) between the propositional content of utterances and their speech act force. Another classical distinction is that between assertoric and presupposed content (Frege 1893, Strawson 1950, Stalnaker 1974, inter alia). In recent years, a new distinction between at-issue and not at-issue content (Potts 2005, Simons et al. 2010) has been introduced, to some extent offered as a replacement for the asserted/presupposed distinction. One empirical domain where the at-issue/not at-issue distinction has been utilized by some researchers is in the study of evidentials, a category of linguistic forms which provide information about the speaker’s evidential relation to the (remaining) content of her utterance.
This one day workshop will bring together researchers with intersecting work on the nature of these distinctions, on the empirical evidence for them, and on how to model them.
The Fifteenth conference on Theoretical Aspects of Rationality and Knowledge was held at Carnegie Mellon University from June 4-6, 2015.
Workshop on Epistemology, Logic, and Games
December 3, 2014, at Baker Hall 150, Carnegie Mellon University
Joseph Y. Halpern: Game Theory With Translucent Players
Abstract: A traditional assumption in game theory is that players are opaque to one another - if a player changes strategies, this change does not affect the strategy choices of the other players. In many situations this is an unrealistic assumption. We develop a framework for reasoning about games where the players may be translucent to one another; in particular, a player may believe that if she were to change strategies, then the other players would also change strategies. I show that by assuming translucent players, we can recover many of the regularities observed in human behavior in well-studied games such as Prisoner's Dilemma, Traveler's Dilemma, Bertrand Competition, and the Public Goods game. I then consider solution concepts appropriate for translucent players, and characterize the analogues of rationalizability and Nash equilibrium for such players. The former is defined in terms of an analogue of common belief of rationality: Common Counterfactual Belief of Rationality (CCBR) holds if (1) everyone is rational, (2) everyone counterfactually believes that everyone else is rational (i.e., any player i believes that everyone else would still be rational even if i were to switch strategies), (3) everyone counterfactually believes that everyone else counterfactually believes that everyone else is rational, and so on. CCBR characterizes the set of strategies surviving iterated removal of minimax-dominated strategies, where a strategy s for player i is minimax dominated by s' if the worst-case payoff for i using s' is better than the best possible payoff using s.
The first part of the talk represents joint work with Valerio Capraro; the second part represents joint work with Rafael Pass.
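The definition of minimax domination quoted in the abstract can be made concrete with a minimal sketch. The toy payoff table below is hypothetical (not from the talk): each strategy of the row player i maps to the payoffs i receives against each possible opponent strategy. A strategy s is minimax dominated by s' when the worst-case payoff under s' beats the best possible payoff under s.

```python
def minimax_dominates(payoffs, s_prime, s):
    """Return True if s_prime minimax-dominates s for the row player.

    payoffs: dict mapping the row player's strategy to a list of her
    payoffs, one entry per opponent strategy.
    """
    worst_with_s_prime = min(payoffs[s_prime])  # guaranteed payoff using s'
    best_with_s = max(payoffs[s])               # best-case payoff using s
    return worst_with_s_prime > best_with_s

# Hypothetical payoffs for the row player in a 2x2 game.
payoffs = {
    "high": [0, 1],  # best case: 1
    "low":  [2, 3],  # worst case: 2
}

print(minimax_dominates(payoffs, "low", "high"))  # True, since 2 > 1
print(minimax_dominates(payoffs, "high", "low"))  # False
```

Iterated removal of such dominated strategies, applied to the full game, yields the solution set that CCBR characterizes.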
Abstract: Type spaces were introduced by John Harsanyi as a formal mechanism for modeling games of incomplete information. In particular, in a Bayesian game types encode payoff-relevant information, a typical example being how each participant values the items in an auction. On the other hand, type spaces have also been co-opted for the epistemic analysis of games of complete information: in this context, types serve as modeling tools that provide a succinct representation of the players' hierarchical beliefs.
One might wonder whether these two applications can work in combination: in a Bayesian game, can the players' (hierarchical) beliefs themselves count as payoff-relevant characteristics? The answer to this question is no for a wide class of beliefs (specifically, all beliefs about strategies); however, we show that by generalizing the classical setting to distinguish between two notions of strategy - what we call "intended" versus "actual" strategies - this limitation can be circumvented. The resulting class of models is flexible enough to capture psychological games (in which preferences can depend on feelings like guilt or surprise), and provides a natural setting in which to endogenize the reference point that figures centrally in prospect theory. Moreover, under the plausible assumption that, in equilibrium, intended and actual strategies line up, we show that equilibria do not exist in general, and establish conditions for existence in terms of the richness of the associated type space.
Abstract: We propose an integrated theoretical framework that captures the diverse motives driving the preference to obtain or avoid information. Beyond the conventional desire for information as an input to decision making, people are driven by curiosity, which is a desire for knowledge for its own sake, even in the absence of material benefits, and people are additionally motivated to seek out information about issues they like thinking about and avoid information about issues they do not like thinking about (an ostrich effect). The standard economic framework is enriched with the insights that knowledge has valence, that ceteris paribus people want to fill in information gaps, and that, beyond contributing to knowledge, information affects the focus of attention. We then apply our model to the domain of decision making under uncertainty. An uncertain prospect exposes an individual to an information gap. Gambling makes the missing information more important, attracting more attention to the information gap. To the extent that the uncertainty (or other circumstances) makes the information gap unpleasant to think about, an individual tends to be averse to risk and ambiguity. Yet when an information gap happens to be pleasant, an individual may seek gambles providing exposure to it. The model provides explanations for source preference regarding uncertainty, the comparative ignorance effect under conditions of ambiguity, aversion to compound risk, and more.
Kevin T. Kelly: A Learning Semantics for Inductive Knowledge
Abstract: Possible world semantics is supposed to be non-committal about the nature of knowledge - the accessible worlds are possible for all one knows. But as soon as one adds operators for incoming information (e.g., public announcement), the accessible worlds are possible for all one has been informed, which implies inductive skepticism - the view that one can know nothing that extends one's current information. Possible world semantics also implies, notoriously, that knowledge is consistent and closed under logical consequence. I will present a semantics for inductive knowledge, in which time and learning in response to new information are represented explicitly in terms of Turing machines that process sequences of inputs. Based on the semantics, I will explain (i) how S4 is right and S5 is wrong, (ii) how inductive knowledge is close-able (but not closed) under deductive consequence, (iii) how one can know that p and not know that q, even though p and q are logically equivalent, (iv) how it is possible to convey inductive knowledge (rather than mere, true belief), and (v) how a group can come to common knowledge of rationality (or irrationality) just by watching one another's play in ever-longer centipede games.
Workshop on Simplicity and Causal Discovery
June 6-8, 2014, at Carnegie Mellon University
Rationale: Correlation does not imply causation---earthquakes are correlated with structural cracks, but filling the cracks does not prevent earthquakes. However, patterns of non-experimental, empirical dependence can point toward causation, and modern computing power can be harnessed to search such patterns for networks of causal relations. The idea goes back to Spearman, but the past three decades have seen a proliferation of promising alternative approaches, based on Tetrad equations, the i-map order, independent component analysis, and information theory. One common theme that runs through all of these approaches is a heavy reliance on Ockham's razor, the characteristic scientific bias toward simpler, more unified, or more explanatory theories. That raises foundational questions. How is Ockham's razor applied in causal inference? What is the underlying notion of simplicity? What justifies the assumption that the causal truth is simple? The aim of this workshop is to bring together top researchers in the area for a candid, foundational discussion of these and related questions.
The workshop is sponsored by a generous grant from the John Templeton Foundation.
Third CSLI Workshop on Logic, Rationality and Interaction
May 31-June 1, 2014, Cordura Hall, CSLI, Stanford University
The CFE is co-sponsoring the Third CSLI Workshop on Logic, Rationality and Interaction.