Carnegie Mellon University

Lectures & Colloquia


Fall 2016


Tuesday, September 20 - SDS/Philosophy Talk
Steve Alpern, The University of Warwick
Talk Title: Importance of Voting Order in Sequential Jury Voting
4:30-6:00pm, BH 150

Abstract: When heterogeneous jurors vote sequentially for a majority verdict, the reliability of the verdict depends on the voting order. For example, when there are three jurors, the reliability (probability of a correct verdict) is highest when the juror of median ability votes first. This is true regardless of whether the jurors vote honestly (each one votes for the binary outcome that is most likely, given his private signal and the previous voting) or strategically (to obtain the highest reliability for the fixed voting order). Weaker results are discussed for larger juries.


Wednesday, September 21 - Pure and Applied Logic Talk
Bohua Zhan, Massachusetts Institute of Technology
Talk Title: Automation in Interactive Theorem Provers
4:00-5:00pm, GHC 4405

Abstract: auto2 is a new heuristic theorem prover written for the proof assistant Isabelle. It uses a robust search mechanism that bears some similarity to those used in SMT solvers. On the other hand, it retains several advantages of the tactics framework in Isabelle, such as allowing custom procedures and working with higher-order logic. In this talk, I will discuss the ideas behind auto2 and show some examples of its use in various parts of mathematics and computer science. I will close by discussing a more recent application to automation in separation logic.

Bio: I am a postdoc in the department of mathematics at MIT, currently working on automation techniques in interactive theorem provers. Previously I worked in low-dimensional topology, receiving my PhD in mathematics from Princeton, under the supervision of Zoltan Szabo.


Monday, September 26 - Philosophy Colloquium
Rachael Briggs, Stanford University
Talk Title: The Dominance Principle in Epistemic Decision Theory
3:30-5:20pm, DH 1212

Abstract: According to the Dominance Principle, one should not choose a dominated act--one that yields a worse outcome than some other act no matter what the state of the world. Similarly, many epistemic decision theorists hold an Epistemic Dominance Principle, which says that one should not adopt a dominated belief state--one that is less accurate than some other belief state no matter what the state of the world. The Epistemic Dominance Principle is useful for vindicating probabilism.

Recently, authors like Michael Caie and Richard Pettigrew have raised doubts about the epistemic version of the Dominance Principle. They argue that where a dominating belief state is unavailable or unchoiceworthy (in the right way), it cannot give us sufficient reason to reject the belief state it dominates.

I argue that the correct response to these challenges is to break the Dominance Principle into two parts: one that connects dominance to value comparisons, and another that connects value comparisons to choices. According to this response, domination by an unavailable belief state is not a sufficient reason to reject a belief state, but domination by an unchoiceworthy belief state sometimes is.
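The use of epistemic dominance to vindicate probabilism can be made concrete with a small numerical check (my illustration, not part of the talk): under the Brier score, a credence assignment that violates probabilism is strictly less accurate than some probabilistic assignment no matter which world is actual.

```python
# Illustration of accuracy dominance (an assumed toy setup, not code from
# the talk). Credences over the partition {H, not-H} are scored by the
# Brier score: squared distance from the truth-values in a world.

def brier(credences, world):
    """Inaccuracy of a credence pair (c_H, c_notH) in a world.
    world is 'H' or 'notH'; the truth-value of H is 1 or 0 accordingly."""
    c_h, c_not_h = credences
    truth_h = 1.0 if world == 'H' else 0.0
    return (c_h - truth_h) ** 2 + (c_not_h - (1.0 - truth_h)) ** 2

incoherent = (0.6, 0.6)   # violates probabilism: credences sum to 1.2
coherent = (0.5, 0.5)     # a probabilistic alternative

# The coherent credences are more accurate in BOTH possible worlds,
# so the incoherent credences are dominated.
for world in ('H', 'notH'):
    assert brier(coherent, world) < brier(incoherent, world)
    print(world, brier(incoherent, world), brier(coherent, world))
```

Here (0.5, 0.5) scores 0.5 in each world while (0.6, 0.6) scores 0.52, so the Epistemic Dominance Principle tells against the incoherent credences in exactly the way the abstract describes.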


Monday, October 10 - Philosophy Colloquium
Joe Halpern, Cornell University
Talk Title: Actual Causality: A Survey
3:30-5:20pm, DH 1212

Abstract: What does it mean that an event C "actually caused" event E?
The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. (What exactly was the actual cause of the car accident or the medical problem?) The philosophy literature has been struggling with the problem of defining causality since the days of Hume, in the 1700s.

Many of the definitions have been couched in terms of counterfactuals.
(C is a cause of E if, had C not happened, then E would not have happened.) In 2001, Judea Pearl and I introduced a new definition of actual cause, using Pearl's notion of structural equations to model counterfactuals. The definition has been revised twice since then, extended to deal with notions like "responsibility" and "blame", and applied in databases and program verification. I survey the last 15 years of work here, including joint work with Judea Pearl, Hana Chockler, and Chris Hitchcock. The talk will be completely self-contained.
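The counterfactual reading quoted above can be sketched in code (a simplified illustration of the simple but-for test only, not the full Halpern-Pearl definition): represent a scenario as structural equations over exogenous inputs, then compare the actual outcome with the outcome under an intervention that removes the candidate cause.

```python
# A minimal structural-equations sketch (my illustration, not the
# speaker's code). An intervention fixes a variable's value and the
# downstream variables are re-evaluated from the equations.

def evaluate(equations, exogenous, interventions=None):
    """Solve an acyclic structural model by repeated substitution.
    Intervened-on variables keep their forced values; their equations
    are ignored, which is what makes this a counterfactual query."""
    interventions = interventions or {}
    values = dict(exogenous)
    values.update(interventions)
    changed = True
    while changed:
        changed = False
        for var, fn in equations.items():
            if var in values:
                continue
            try:
                values[var] = fn(values)
                changed = True
            except KeyError:
                pass  # some parent not yet determined; retry next pass
    return values

# Conjunctive toy scenario: a fire occurs only if lightning strikes
# AND a match is dropped.
equations = {'fire': lambda v: v['lightning'] and v['match']}
context = {'lightning': 1, 'match': 1}

actual = evaluate(equations, context)
counterfactual = evaluate(equations, context, interventions={'match': 0})

# The match is a but-for cause of the fire: had it not been dropped,
# the fire would not have occurred.
print(actual['fire'], counterfactual['fire'])  # 1 0
```

The harder cases that motivate the structural-equations definition (e.g. preemption, where the naive but-for test fails) require the fuller machinery the talk surveys; this sketch only captures the simple counterfactual criterion.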


Monday, November 7 - Philosophy Colloquium
Hannah Rohde, Linguistics and English Language, University of Edinburgh
Talk Title: Ambiguity in natural language:  anticipation, coherence, and cost
3:30-5:20pm, DH 1212

Abstract: A growing body of work takes the perspective that language comprehension is driven in part by an ability to make predictions: Listeners who anticipate what message a speaker may be trying to convey and what words may be used to convey that message are better able to handle the ambiguity that pervades natural language. Anticipation implies a model of the speaker --- an ability on the part of the listener to estimate the probability of possible upcoming messages (and the associated probabilities of particular sounds, words, and structures that a speaker could use to realize those messages).

Psycholinguistic studies on expectation-driven processing confirm that listeners generate expectations at phonetic, lexical, and syntactic levels.  Recent studies of my own point to listener expectations at the discourse level as well.  In this talk I will discuss several interwoven threads from my research: (1) a novel methodology for measuring discourse expectations, (2) the impact of discourse expectations on a number of other linguistic phenomena, and (3) a Bayesian approach to modeling what listeners do when they are guessing what a speaker is going to say next.

The bulk of the talk addresses comprehenders' expectations about the coherence relation that is likely to hold between one clause and the next---both the challenge of measuring anticipation of an abstract linguistic dependency and the repercussions such anticipation can have.  I illustrate how coherence relations are brought to bear on a range of phenomena, all of which can introduce ambiguity in natural language:  connectives, syntactic attachment, presupposition projection, and pronoun interpretation.  The resolution of ambiguity in these cases is shown to reflect comprehenders' understanding of the speaker's likely message in context.

The last part of the talk addresses an apparent puzzle that arises if one focuses on the predictability of speaker messages to the exclusion of the surface forms that speakers choose amongst for conveying their intended message.  The puzzle stems from the observation that predictable messages are not always realized with reduced forms, as one might expect under an information-theoretic model of language use.  Rather, the production of reduced forms---here, pronouns---is shown to reflect properties of syntactic structure and information structure (prominence as indexed by subjecthood/topichood, not probability of next mention) as well as the cost of unambiguous formulations.  What emerges is a picture of comprehension as a task of reverse engineering.  Listeners anticipate what a speaker will say and update those expectations when a new surface form is encountered that might be symptomatic of an alternative message, a state of affairs that is well captured by Bayesian updating.


Monday, November 21 - Philosophy Colloquium
Stephanie Dick, Harvard Society of Fellows, Harvard University
Talk Title: Reproving Principia: Reasoning and Computing in the Postwar United States
3:30-5:20pm, DH 1212

Abstract: "Computers ought to produce in the long run some fundamental change in the nature of all mathematical activity.” These words, penned in 1958, capture the motivation behind an early field of computing research called Automated Theorem-Proving. Practitioners of this field sought to program computers to prove mathematical theorems or to assist human users in doing so. Everyone working in the field agreed that computers had the potential to make novel contributions to the production of mathematical knowledge. They disagreed about almost everything else. Automated theorem-proving practitioners subscribed to complicated and conflicting visions of what ought to count and not count as a mathematical proof. There was also disagreement about the character of human mathematical faculties - like intuition, understanding, and reasoning - and how much the computer could be made to possess them, if at all. Different practitioners also subscribed to quite different imaginations of the computer itself, its limitations and possibilities. Some imagined computers as mere plodding “slaves” who would take over tedious and mechanical elements of mathematical research. Others imagined them more generously as “mentors” or “collaborators” that could offer novel insight and direction to human mathematicians. Still others believed that computers would eventually become autonomous agents of mathematical research. Automated Theorem-Proving practitioners took their visions of mathematicians, minds, computers, and proof, and built them right into their theorem-proving programs. Their efforts did indeed precipitate transformations in the character of mathematical activity, but in varied and often surprising ways.
With a focus on communities based in the United States in the second half of the twentieth century, this talk will introduce different visions of the computer as a mathematical agent, software that was crafted to animate those imaginings, and the novel practices and materialities of mathematical knowledge-making that emerged in tandem.


Monday, November 28 - Philosophy Colloquium
Harvey Lederman, University of Pittsburgh
Talk Title: Verbalism (with Jeremy Goodman)
3:30-5:20pm, DH 1212

Abstract: This paper starts from the observation that attitude reports are context-sensitive in a dimension related to Frege's puzzle, and suggests that the attitude verbs themselves are responsible for the relevant dimension of context-sensitivity. This "verbalist" view, which validates a strong form of Leibniz's law, has been suggested in a number of places before, but it has not been spelled out systematically. We show for the first time that verbalism is consistent with a strong logic of belief. This raises the question of whether, more generally, verbalism is consistent with strong laws of propositional attitude psychology. We present a series of limitative results about which laws of propositional attitude psychology are consistent with verbalism. These limitative results seem to be bad news for verbalism. But in fact, we argue, the bad news affects many more views than just verbalism: a range of approaches to Frege's puzzle are forced to abandon attractive principles of propositional attitude psychology. We conclude that, in spite of its surprising consequences, verbalism is a serious contender among approaches to Frege's puzzle.


Spring 2017


Monday, February 13 - Causation/Machine Learning Colloquium
Danielle Bassett, University of Pennsylvania
Talk Title: The Network Architecture of Human Thought
3:30-5:20pm, DH 1212

Abstract: Human thought is predicated on a complex architecture of interconnections that enable information transmission between distinct areas of the brain. Yet gaining a fundamental understanding of this architecture has remained challenging, largely due to insufficiencies in traditional imaging techniques and analytical tools. In concerted efforts to address these challenges, neuroscientists have begun to combine recent breakthroughs in non-invasive brain imaging techniques with the conceptual notions and mathematical tools of network science – leading to the emerging field of network neuroscience. This talk will highlight early successes in this field leading to fundamental understanding of healthy human thought, its development over childhood, and its alteration in psychiatric disease and neurological disorders. The talk will close by commenting on current frontiers and future potential in health care, business, and education sectors.


Thursday, February 23 - Department of Philosophy and the ULS
Eugenia Cheng, Scientist in Residence at the School of the Art Institute of Chicago
Honorary Fellow of the University of Sheffield and Honorary Visiting Fellow of City University, London
Talk Title: How to Bake Pi: Mathematics Made Tasty
4:30-6:00pm, Porter Hall 100, Gregg Hall Auditorium

Abstract: Mathematics can be tasty! It’s a way of thinking, and not just about numbers. Through unexpectedly connected examples from music, juggling, and baking, I will show that math can be made fun and intriguing for all, through hands-on activities, examples that everyone can relate to, and funny stories. I'll present surprisingly high-level mathematics, including some advanced abstract algebra usually only seen by math majors and graduate students. There will be a distinct emphasis on edible examples.


Monday, February 27 - Program for Deliberative Democracy and the Center for Ethics and Policy
Alexander Heffner, Host of Open Mind on PBS
Talk Title: The Future of Civil Discourse
4:30-6:00pm, Porter Hall 100, Gregg Hall Auditorium

Abstract: This lecture will navigate three areas of interest across civic life: (1) the Millennial citizen, (2) the space of old and new media, and (3) the character of our political discourse.  From the formation of broadcasting to the emergence of social media, I will consider a blueprint for a "civic press."  We’ll grapple with questions for the new (45th) U.S. president and think about fresh ways young people can frame public policy, while reflecting on the 2016 campaign and how to improve the political process.

Alexander Heffner is the host of The Open Mind on PBS. He has covered American politics, civic life and Millennials since the 2008 presidential campaign. His work has been profiled in Variety, Medium, the Los Angeles Times, The Washington Post, The Christian Science Monitor, and The Philadelphia Inquirer, and on NBC News, C-SPAN, NY1, HuffPost Live and the BBC, among other media outlets. His essays, reviews and op-eds have appeared in Reuters, RealClearPolitics, The New York Times, The Wall Street Journal, The Boston Globe, and The Root, among other publications.


Monday, March 20 - Ethics Colloquium
Kok-Chor Tan, University of Pennsylvania
Talk Title: International Territorial Right: An Institutional Account
3:30-5:20pm, DH 1212

Abstract: A state’s territorial right has two dimensions. There is the local dimension, which pertains to a state’s jurisdictional authority. This is the right of the state to subject individuals within its territory to its laws. The other is the international dimension, which has to do with the right of a state to a specific geographical space (within which it gets to exercise its jurisdictional authority) that other states have to acknowledge and respect. I will call this the international territorial right of states. Recent theories of territoriality mostly hold that a state’s international right flows from its jurisdictional authority. That is, these theories take it that when a state is justified in exercising authority over individuals within a given territory, other states come under an obligation to respect its exclusive claim to that territory. But I suggest that this privileging of the local dimension over the international gets the reasoning backwards. To the contrary, a state must first have an acknowledged international right to a territory before it can have an exclusive dominion in which to exert its jurisdictional right. More substantively, I argue that this international right is not a pre-institutional right, but a right that is based in international convention or institutions. That the international order is institutional in this very fundamental way has implications for global justice. Among other things, it will instigate a more cosmopolitan understanding of egalitarian justice and immigration.


Monday, March 27 - Pure and Applied Logic Colloquium
Pieter Hofstra, University of Ottawa
Visiting Associate Professor, Carnegie Mellon University Philosophy Department
Talk Title: Games and Categories
3:30pm-5:20pm, DH 1212

Abstract: The interaction between game theory on the one hand and category theory and logic on the other has historically been rather one-directional. In particular, categorical logicians have considered various categories of games with the purpose of obtaining models of certain fragments of logic. In this context, the games under consideration are of a restricted, combinatorial nature, in order to correspond to appropriate statements in the logic under investigation. For example, they are always 2-player games, and various concepts central to classical game theory such as mixed strategies are typically not considered. Moreover, the morphisms are typically defined to be strategies in an "exponential" game, and don’t arise as structure-preserving mappings of any kind.

In this talk I will present a categorical outlook on classical game theory which aims to be much more inclusive, and not at all driven by considerations of logic. The purpose is to identify a categorical setting where games and various operations on games can be naturally identified with well-known categorical constructions, in order to extract the structural ingredients necessary for the main developments to go through. One of the technical ingredients is a hybrid definition of games (encompassing sequential and simultaneous games) in terms of suitable families of games.

The talk assumes very little knowledge of category theory or game theory, and will mostly focus on an elementary presentation of the basic categorical setting, as well as a sketch of the form taken in this setting by some key concepts from the classical theory of games.


Monday, April 10 - Philosophy Colloquium
Michael Rescorla, University of California, Los Angeles
Talk Title: On the Proper Formulation of Conditionalization
3:30-5:20pm, DH 1212

Abstract: Conditionalization is a norm that governs the rational reallocation of credence. I distinguish between factive and non-factive formulations of Conditionalization. Factive formulations assume that the conditioning proposition is true. Non-factive formulations allow that the conditioning proposition may be false. I argue that non-factive formulations provide a better foundation for philosophical and scientific applications of Bayesian decision theory. I furthermore argue that previous formulations of Conditionalization, factive and non-factive alike, have almost universally ignored, downplayed, or mishandled a crucial causal aspect of Conditionalization. To formulate Conditionalization adequately, one must explicitly address the causal structure of the transition from old credences to new credences. I offer a formulation of Conditionalization that takes these considerations into account, and I compare my preferred formulation with some prominent formulations found in the literature.
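The norm under discussion is easy to state numerically (a generic illustration of Conditionalization, not drawn from the talk): upon learning evidence E, one's new credence in each hypothesis should equal one's old conditional credence given E. The distinction the talk draws is then about whether E must actually be true, and about how the transition from old to new credences is caused.

```python
# A minimal illustration of Conditionalization (my toy example).
# New credences are the old credences conditioned on the evidence E,
# computed via Bayes' theorem.

def conditionalize(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(E | h)}.
    Returns the posterior {hypothesis: P(h | E)}."""
    p_e = sum(prior[h] * likelihood[h] for h in prior)  # P(E)
    return {h: prior[h] * likelihood[h] / p_e for h in prior}

# Toy case: a rare condition and a fairly reliable positive test.
prior = {'disease': 0.01, 'no_disease': 0.99}
likelihood = {'disease': 0.95, 'no_disease': 0.05}  # P(positive | h)

posterior = conditionalize(prior, likelihood)
print(round(posterior['disease'], 3))  # 0.161
```

Note that the computation itself is silent on whether the conditioning proposition (the positive test result) is true, and on what caused the credal transition; those are exactly the dimensions along which the factive/non-factive and causal refinements in the talk differ.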


Monday, April 24 - Pure and Applied Logic Colloquium
Thierry Coquand, University of Gothenburg
CANCELED