Carnegie Mellon University

Lectures & Colloquia

Spring 2018

Thursday, January 25 – Philosophy Colloquium
Julia Staffel, Washington University in St. Louis
Talk Title: How Do Beliefs Simplify Reasoning?
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: It is commonly assumed that people need outright beliefs in addition to credences to simplify their reasoning. Outright beliefs simplify reasoning by allowing agents to ignore small error probabilities. What is outright believed can change between contexts. It has been claimed that thinkers manage shifts in their outright beliefs and credences across contexts by an updating procedure resembling conditionalization, which I call pseudo-conditionalization (PC). But conditionalization is notoriously complicated. The claim that agents manage their beliefs via PC is thus in tension with the view that the function of beliefs is to simplify our reasoning. I propose to resolve this puzzle by rejecting the view that agents manage their beliefs and credences by employing PC. Based on this solution, I furthermore argue for a descriptive and a normative claim. The descriptive claim is that, for outright belief to have its simplifying function, the strategies agents employ for managing beliefs and credences across contexts must inevitably lead to some incoherence in an agent’s attitudes in some contexts. By revealing this tradeoff between simplicity and coherence in reasoning, we gain a better understanding of why limited human reasoners fail to have ideally coherent doxastic states. Moreover, I argue that the view of outright belief as a simplifying heuristic is incompatible with the view that there are ideal norms of coherence or consistency governing outright beliefs that are too complicated for human thinkers to comply with.
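For context, standard Bayesian conditionalization (the rule that pseudo-conditionalization is said to resemble) updates a credence function c on learning evidence E as follows:

```latex
% Standard Bayesian conditionalization: upon learning evidence E
% (with c(E) > 0), the new credence in any proposition A is the
% old conditional credence of A given E.
c_{\text{new}}(A) \;=\; c(A \mid E) \;=\; \frac{c(A \wedge E)}{c(E)}
```

Tracking this update across every context and every proposition is part of what makes conditionalization-style belief management computationally demanding.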

Thursday, February 8 – Center for Ethics and Policy Colloquium
Robert Sparrow, Monash University
Talk Title: How robots have politics
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: In an influential paper, published in 1980, Langdon Winner asked “Do artifacts have politics?” and concluded “yes!” In this presentation, which draws on my research on the ethics of autonomous vehicles, military robotics, sex robots, and aged care robots, I will explore how robots have politics. I will argue that the embodied and interactive nature of robots means that they have more politics than other sorts of artifacts. Robots have more — and more complex — “affordances” than other technologies. Robots will embody and reflect the intentions of their designers in ways that are very unlikely to be transparent to those who use or encounter them. The choices made by engineers will often have consequences for the options available to the users of robots and will in turn shape relationships between users and those around them. The power this grants designers is itself politically significant. Because, increasingly, robots will occupy the same environments as human beings, and play important social and economic roles in those environments, human-robot relations will become crucial sites of political contestation. The social policy choices necessary to realise the benefits of robots in many domains will inevitably also be political choices, with implications for relationships between stakeholders. Humanoid robots, and their behaviour, will have representational content, with implications for the ways in which people understand and treat each other. More generally, to the extent to which we anticipate that the introduction of widespread automation will produce a Fourth Industrial Revolution, it is vital that we ask who is making this revolution, as well as who will flourish — and who will suffer — if it occurs.

Thursday, February 15 – Pure and Applied Logic Colloquium
André Joyal, The Université du Québec à Montréal (UQAM)
Talk Title: Three mutations of topos theory
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: The notion of topos was invented by Grothendieck in the early 1960s. A topos was regarded as a "generalised space". The notion has since mutated many times. The first mutation is the notion of elementary topos, introduced by Bill Lawvere and Myles Tierney in the late 1960s, which connects topos theory with logic. A second mutation is the notion of higher topos, introduced by Charles Rezk in the late 1990s, which connects topos theory with homotopy theory and higher category theory. The third mutation is the notion of higher elementary topos, presently emerging under the impulse of homotopy type theory. There is a profound unity between these developments.

Thursday, March 22 – Philosophy Colloquium
Willemien Kets, University of Oxford
Talk Title: Bounded Reasoning and Game-Theoretic Paradoxes
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: Interactive epistemology is the study of how agents reason about other agents, including how other agents reason about others. The standard model assumes that agents are unbounded in their reasoning powers: they can reason about others' beliefs, about others' beliefs about others' beliefs, about others' beliefs about others' beliefs about others' beliefs, and so on, ad infinitum. I will show how introducing the arguably more realistic assumption that agents are bounded in their reasoning can help resolve a well-known paradox in game theory.

Thursday, April 5 – Center for Ethics and Policy Colloquium
Paul Scharre, Center for New American Security
Talk Title: Autonomous Weapons: Ethics and Policy
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: What happens when a Predator drone has as much autonomy as a self-driving car? Should machines be given the power to make life-and-death decisions in war? Would doing so cross a fundamental moral line? Militaries around the globe are racing to build increasingly autonomous systems, but a growing chorus of voices is raising alarm about the consequences of delegating lethal force decisions to machines.

Paul Scharre, Senior Fellow at the Center for a New American Security, is the author of the forthcoming book Army of None: Autonomous Weapons and the Future of War. He is a former Pentagon official who led the team that drafted the official Defense Department policy guidance on autonomous weapons, DoD Directive 3000.09. He is also a former Army Ranger who served multiple tours in Iraq and Afghanistan.

Thursday, April 12 - Pure and Applied Logic Colloquium
Greg Restall, The University of Melbourne
Talk Title: Isomorphisms in a Category of Propositions and Proofs
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: In this talk, I show how a category of propositions and classical proofs can give rise to three different hyperintensional notions of sameness of content. One of these notions is very fine-grained, going so far as to distinguish p and p∧p, while identifying other distinct pairs of formulas, such as p∧q and q∧p; p and ¬¬p; or ¬(p∧q) and ¬p∨¬q. Another relation is more coarsely grained, and gives the same account of identity of content as equivalence in Angell’s logic of analytic containment. A third notion of sameness of content is defined, which is intermediate between Angell’s and Parry’s logics of analytic containment. Along the way we show how purely classical proof theory gives resources to define hyperintensional distinctions thought to be the domain of properly non-classical logics.

Thursday, April 26 – Philosophy Colloquium
Scott Weinstein, University of Pennsylvania
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Fall 2017

Thursday, September 14 – Philosophy Colloquium
Judith Degen, Stanford University
Talk Title: Informativeness in language production and comprehension
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: In producing and comprehending language, speakers and listeners engage in rich inference processes that involve reasoning about the (often noisy) linguistic signal's literal meaning, world knowledge, and the context of utterance. A fully-fledged theory of linguistic meaning requires formally describing and analyzing these inference processes, which has proven challenging because of the complexity of establishing which information is used and how it is integrated. Rational theories of language use starting with Grice have attempted to bring some order to this difficult analytical situation by positing conversational principles that listeners expect cooperative speakers to abide by. In particular, speakers are expected to include just enough -- but not too much -- information in their utterances for listeners to correctly infer their intended meaning from the literal meaning of their utterance, world knowledge, and the context of utterance.

But such theories have come under attack from psycholinguistics: speakers have been shown to produce both over- and underinformative utterances. In this talk, I will demonstrate that the assumption of (boundedly) rational linguistic agents can not only be upheld but even be explicitly modeled computationally to great explanatory effect. Using a combination of behavioral lab-based and web-based experiments, corpus analyses, and computational modeling, I will show that speakers are best modeled as trading off the contextual informativeness and production cost of their utterances. Listeners in turn are best modeled as integrating their beliefs about such speakers with their prior beliefs about likely meanings via Bayesian inference. The modeled data come from two prima facie very different phenomena -- the production of overinformative referring expressions and the interpretation of underinformative scalar expressions.
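The speaker–listener trade-off described above can be sketched computationally. The following is a minimal illustration in the style of Rational Speech Acts models, where a pragmatic listener inverts, via Bayes' rule, a speaker who trades off informativeness against production cost; the lexicon, priors, costs, and the rationality parameter alpha are invented for illustration and are not taken from the talk.

```python
import math

# Literal truth of each utterance in each candidate world/meaning.
# Note that "some" is literally true even when "all" holds.
lexicon = {
    "some": {"some_not_all": 1, "all": 1},
    "all":  {"some_not_all": 0, "all": 1},
}
prior = {"some_not_all": 0.5, "all": 0.5}  # listener's prior over meanings
cost = {"some": 0.0, "all": 0.0}           # production costs (equal here)

def normalize(scores):
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

def literal_listener(utt):
    # Interprets an utterance using only its literal meaning and the prior.
    return normalize({w: lexicon[utt][w] * prior[w] for w in prior})

def speaker(world, alpha=4.0):
    # Soft-max choice among true utterances, trading off how well the
    # literal listener would recover the intended world against cost.
    true_utts = [u for u in lexicon if lexicon[u][world]]
    return normalize({
        u: math.exp(alpha * (math.log(literal_listener(u)[world]) - cost[u]))
        for u in true_utts
    })

def pragmatic_listener(utt):
    # Bayesian inversion of the speaker model, weighted by the prior.
    return normalize({w: speaker(w).get(utt, 0.0) * prior[w] for w in prior})
```

On this toy lexicon, the literal listener interprets "some" at the prior, while the pragmatic listener concentrates probability on the "some but not all" meaning, reproducing the scalar-implicature pattern of underinformative utterances mentioned above.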

I will conclude that, rather than being wastefully overinformative, speakers systematically add information to the extent that doing so reduces their uncertainty about whether the listener will correctly infer their intended meaning. Similarly, rather than being uselessly underinformative, speakers systematically provide less information because listeners can be relied on to make good use of the available contextual information. Together, these findings implicate a linguistic system geared towards communicative efficiency.

Thursday, October 26 - Center for Formal Epistemology Talk
Slobodan Perović, University of Belgrade, Serbia
Talk Title: How Theories of Induction Can Streamline Measurements of Scientific Performance
12:00 – 1:30pm, Baker Hall 150 *Note different location

Abstract: An inductive approach to the scientific reasoning process can streamline operational assessments of scientific performance, e.g. citation metric analysis, by determining whether the scientific domain at stake is inductively suitable for such assessment. This approach aims to identify a methodologically coherent scientific pursuit, which ensures that the citation metrics track internal inductive dynamics and efficiency of the reasoning process in the analyzed domain. We demonstrate inductive streamlining (drawing on Formal/Machine Learning Theory) in the cases of high energy physics experimentation and phylogenetics research. A general test defining basic internal inductive and external practical conditions can ensure epistemically transparent operational, citation-based, analysis of scientific networks.

Friday, October 27 – Pure and Applied Logic Colloquium
Dana Scott, Carnegie Mellon University (emeritus)
Talk Title: What is Explicit Mathematics?
4:30 – 6:00pm, Baker Hall A53 – Steinberg Auditorium
Reception follows at 6pm in Baker Hall 135.

Abstract: Beginning in about 1975, the late Solomon Feferman started writing about ways of making “real” mathematics more explicit. He was much influenced at the time by writers such as Georg Kreisel, Errett Bishop, John Myhill, Per Martin-Löf, William Tait, to name a few. He then continued up to the end of his life with students and colleagues to expand on his vision — especially in developing detailed proof-theoretic comparisons between various axiomatic systems.

The speaker would like to raise the question anew as to whether there are other useful ways of being explicit and, thus, finding what can be learned about the content of mathematical arguments.

Thursday, November 16 – Philosophy Colloquium
Richard Zach, University of Calgary
Talk Title: The Origins of Modern First-order Logic
4:30 – 5:45pm, Baker Hall A53 – Steinberg Auditorium

Abstract: The model and proof theory of classical first-order logic are a staple of introductory logic courses: we have nice proof systems showing that the validities of FOL are computably enumerable; well-understood notions of models, validity, and consequence; completeness, undecidability, and other meta-logical results; and even decision procedures for the propositional and monadic fragments. The story of how these were developed in the 1920s, 30s, and even 40s is also a staple of introductory courses, but usually consists simply of a list of results and who obtained them when. What happened behind the scenes is much less well known. The talk will fill in some of that back story and show how philosophical, methodological, and practical considerations shaped the development of the conceptual framework and the direction of research in these formative decades.