# Lectures & Colloquia

## Fall 2018

**Thursday, October 11** – *Center for Ethics and Policy Colloquium*

Bertram Malle, Brown University

**Talk Title:** When Do and Should People Trust Robots?

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** The empirical literature on human trust in artificial agents such as robots is perplexing. People seem to overtrust such agents in some circumstances and undertrust them in other circumstances. Moreover, what trust is and how it is measured shows a great deal of variability. To help advance our knowledge in this domain I offer two proposals. First, I argue that trust is multi-dimensional and that humans can have familiar kinds of trust in a robot (i.e., in its reliability and capacity) but that the more interesting kinds of trust are of a moral kind (i.e., sincerity and ethical integrity). I show that these distinct dimensions of trust can be reliably measured and thus offer a fresh start in understanding when people will trust robots and other artificial agents. Second, if some dimensions of trust involve moral capacities, then we need to ask if and how robots can have moral capacities. To this end, I offer theoretical arguments and empirical evidence to propose that moral competence consists primarily of a massive web of norms, decisions in light of these norms, judgments when such norms are violated, and a vocabulary to communicate about these norm violations. I argue that future robots can in principle exhibit these capacities, and if they do so reliably, they will deserve human trust.

**Thursday, October 18** – *Philosophy Colloquium*

Sonja Smets, Institute for Logic, Language and Computation, University of Amsterdam

**Talk Title:** Logical Dynamics in Social Networks

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** The flow of information is what drives our information society of interconnected agents capable of reasoning, communicating and learning. In this context, we are interested in the logical study of how information flows in social networks, focusing on the spread of behaviors and ideas and the adoption of social norms. To model these diffusion processes as well as the long-term informational evolution of social networks, we make use of the tools of Dynamic Epistemic Logic (DEL). The logic DEL, originally designed to model the epistemic and doxastic states of agents, their interaction and change, has recently been enriched with a social dimension and can be applied to the modeling of various social phenomena, including social influence and herd behavior. In our setting, we first consider agents who adopt a new fashion or behavior depending on whether they know that a “strong enough group” of their neighbors has already adopted it. We provide different types of models as well as a simple qualitative modal language to reason about the concept of a “strong enough” trigger of influence. When we extend our logic with fixed-point operators, important results from network theory about the characterization of cascades follow immediately as straightforward consequences of the basic axioms. Unfolding the influence dynamics in an epistemic social network allows us to characterize the epistemic conditions under which the dynamic process can speed up or slow down. This presentation is based on joint work with A. Baltag at the University of Amsterdam.
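To make the adoption dynamics concrete, here is a minimal sketch of a threshold model in which an agent adopts once a "strong enough" fraction of its neighbors has adopted. This is only an illustration of the underlying diffusion idea, not the DEL formalization used in the talk; the network and threshold below are hypothetical.

```python
# Hypothetical illustration of threshold-based adoption in a social network.
# An agent adopts once at least a fraction `theta` of its neighbors have adopted.

def run_cascade(neighbors, adopters, theta=0.5):
    """Iterate adoption until a fixed point is reached."""
    adopted = set(adopters)
    changed = True
    while changed:
        changed = False
        for agent, nbrs in neighbors.items():
            if agent not in adopted and nbrs:
                if sum(n in adopted for n in nbrs) / len(nbrs) >= theta:
                    adopted.add(agent)
                    changed = True
    return adopted

# A small line network: a -- b -- c -- d, with agent "a" as the initial adopter.
network = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(sorted(run_cascade(network, {"a"}, theta=0.5)))  # ['a', 'b', 'c', 'd']
```

Fixed-point operators in the logic play the role of the `while` loop here: the set of eventual adopters is the least fixed point of the one-step influence update.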

**Thursday, October 25** – *Pure and Applied Logic Colloquium*

Liron Cohen, Cornell University

**Talk Title:** Enhancing the Proofs-as-Programs Paradigm with Modern Notions of Computation and Reasoning Techniques

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** The proofs-as-programs paradigm, which establishes a correspondence between formal proofs and computer programs, has made a tremendous impact on the world of computing, enabling various high-value applications in different areas of computer science. However, while both proof theory and programming languages have evolved significantly in recent years, the cross-fertilization of the independent new developments in each of these fields has yet to be explored in the context of the paradigm. This naturally gives rise to the following questions: how can modern notions of computation influence and contribute to formal foundations, and how can modern reasoning techniques improve the way we design and reason about programs? In this talk, we focus on the first question and demonstrate how, by using programming principles that go beyond the standard lambda calculus, namely state and non-determinism, it is possible to provide new insights into foundational mathematical concepts, namely free choice sequences and the Axiom of Choice.

**Thursday, November 1** – *Philosophy Colloquium*

Larry Moss, Indiana University at Bloomington

**Talk Title:** Natural Logic

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** Much of modern logic originates in work on the foundations of mathematics. My talk reports on work in logic that has a different goal: the study of inference in language. This study leads to what I will call "natural logic", the enterprise of studying logical inference in languages that look more like natural language than standard logical systems. I will sketch the history of this field, and I will also try to make as many connections as possible to work by the CMU community, broadly considered. For example, we have computer programs which can carry out small but significant entailment tasks on language "in the wild", and this work calls on syntax (categorial grammar, but extended), semantics (typed lambda calculus, again extended), logic, and algorithms. We also have new tools for teaching basic logic that come from this area.

The talk should appeal to mathematical logicians interested in completeness and complexity results, including ones for logical systems that are not first-order; philosophers of logic curious about syllogistic reasoning and its many modern extensions, and also about taking inference seriously in the foundations of semantics; and computer scientists working in natural language processing and especially in textual entailment.
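As a toy illustration of the sort of inference natural logic studies, entailment in the "All X are Y" fragment of syllogistic logic can be decided by closing the premises under transitivity (the classical Barbara rule) together with reflexivity. This sketch is not the speaker's system; the vocabulary and premises are hypothetical, and the fragment covered is deliberately tiny.

```python
# Toy syllogistic reasoner for the "All X are Y" fragment.
# Each premise is a pair (X, Y) read as "All X are Y".
# Inference rules: reflexivity (All X are X) and Barbara:
#   All X are Y, All Y are Z  =>  All X are Z.

def entails(premises, conclusion):
    """Decide whether `conclusion` follows from `premises` in the all-fragment."""
    facts = set(premises)
    changed = True
    while changed:  # compute the transitive closure of the premises
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    x, y = conclusion
    return x == y or conclusion in facts

premises = [("dog", "mammal"), ("mammal", "animal")]
print(entails(premises, ("dog", "animal")))   # True
print(entails(premises, ("animal", "dog")))   # False
```

The fact that such a simple closure procedure is sound and complete for this fragment is a classic example of the kind of completeness-and-complexity result for non-first-order systems that the abstract mentions.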

**Thursday, November 15** – *Philosophy Colloquium*

Iris van Rooij, Radboud University

**Talk Title:** Can heuristics make hard work light? Ecological rationality and intractability

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** Classical accounts of rationality, based on logic and probability theory, have been criticized for assuming demonic computational powers far beyond the capacity of mortals and machines. According to these accounts, rational minds must have the capacity for solving intractable (NP-hard) problems, for which no tractable algorithms exist. On an alternative account, the mind’s adaptive toolbox consists of fast and frugal heuristics, and rationality is to be understood as the fit between these heuristics and the environment, called ‘ecological rationality’. It has been tacitly assumed that ecological rationality is tractable. However, as I will demonstrate in this talk, ecological rationality presents minds (or nature) with the same kind of intractable problems as classical accounts of rationality. This wrinkle may be ironed out, but doing so seems to require an extension of the heuristics research program to understand the tractability of adapting toolboxes of heuristics.

## Spring 2019

**Thursday, February 28** – *Philosophy Colloquium*

Hannah Rubin, University of Notre Dame

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** While there are many important similarities between evolution in biology and in economics, we should be careful when importing ideas from one evolutionary context to the other. This talk will discuss a case where lack of carefulness has been especially problematic. In particular, I will argue that bringing in ideas from economics (i.e. treating organisms as agents) in explaining the concept of relatedness (as how much an organism ‘values’ its social partner) has led to two major problems within inclusive fitness theory. First, thinking of relatedness as how much an organism cares about its social partner perpetuates reliance on an unreliable heuristic method of estimating inclusive fitness, often called the “simple weighted sum”. Second, thinking of relatedness in this way has led to erroneous claims that inclusive fitness fills an essential role in evolutionary theory, in allowing us to view organisms as ‘trying’ to maximize their fitness.

**Thursday, March 7** – *Philosophy Alumni Colloquium*

Chris Meek, Microsoft

**Talk Title:** Interactive Machine Learning

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** Artificial Intelligence researchers aim to infuse systems with intelligent behavior. A popular and successful approach to accomplishing this goal is interactive machine learning, a process in which a person uses machine learning to compile knowledge into useful artifacts. In this talk, we describe alternative perspectives on interactive machine learning, including teaching and programming perspectives. These alternative perspectives highlight important research questions which have not received adequate attention. In this context, we briefly describe two new results on interactive machine learning. In the first, we demonstrate that a person can exponentially reduce the effort required to build an intelligent system by providing knowledge beyond labels, and, in the second, we leverage information derived from the interactive process to improve the predictive quality of the resulting artifacts.

**Thursday, March 28** – *Philosophy Colloquium*

Wayne Wu, Center for the Neural Basis of Cognition, Carnegie Mellon University

**Talk Title:** Attention and Awareness: Empirical and Philosophical Perspectives

4:30-6:00 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** In this talk, I begin with a review of my past work on attention and agency, which merges philosophical and empirical perspectives into a unified theory. I then discuss one new line of research that focuses on the use of introspection in cognitive science to provide premises in an argument for “unconscious seeing” in phenomena such as blindsight and visual agnosia. First, I deploy the unified theory of attention and agency to provide a psychological explanation of introspective behavior. I then use these results to show that the introspective data used in neuropsychology to argue for unconscious vision are not reliable. The case for unconscious vision, though widely accepted, is surprisingly weak. I close with some suggestions about how to reorient work on consciousness as a biological phenomenon so as to engage both philosophical and empirical concerns.

**Thursday, April 4** – *Philosophy Alumni Colloquium*

Savitar Sundaresan, Imperial College London

**Talk Title:** Information Theory in Finance

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** This talk aims to provide an overview of how information collection and usage are modeled in finance. I will start with a brief history of the field, beginning with the relatively concise treatments in papers by Joseph Stiglitz, continuing through the adoption of tools from information theory in the early 2000s by Chris Sims and Michael Woodford, and the development of the fields of rational inattention and neuroeconomics. I will then discuss some of the topics I currently work on, such as dynamic information collection and the informational content of prices in financial markets.

**Thursday, April 18** – *Philosophy Colloquium*

Marie Amalric, Department of Psychology, Carnegie Mellon University

**Talk Title:** Brain mechanisms involved in the learning and processing of high-level mathematical concepts

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** The human brain is unique within the animal kingdom in its understanding of abstract mathematical concepts. We are able to conceive of irrational numbers, idealized geometrical shapes, abstract topological properties, etc., without ever perceiving them. How, then, do such concepts develop in the human mind? My research program has begun to answer this question by investigating the mechanisms and the neural basis underlying the learning and manipulation of mathematical concepts. While previous work mainly focused on arithmetic processing, my work focuses on more advanced mathematical knowledge, which better reflects the diversity of mathematical activities (analysis, algebra, topology, geometry…) than simple arithmetic does.

My work seeks to evaluate the relation of advanced mathematical thinking to language on the one hand, and to visuospatial processes on the other. After presenting an overview of my research program, I will present in more detail the results of four fMRI studies and one behavioral study involving professional mathematicians (including the exceptional case of three blind mathematicians), Munduruku adults (from the Amazon), and 5-year-old French children. These studies have shown that (1) advanced mathematical reflection on concepts mastered for many years does not recruit the brain circuits for language; (2) mathematical activity systematically involves number- and space-related brain regions, regardless of mathematical domain, problem difficulty, and participants' visual experience; and (3) non-verbal acquisition of geometrical rules relies on a language of thought that is independent of natural spoken language.

**Thursday, April 25** – *Philosophy Colloquium*

Miriam Schoenfield, Massachusetts Institute of Technology

**Talk Title:** Accuracy and Verisimilitude: The Good, the Bad and the Ugly

4:30-5:45 pm, Baker Hall A53, Steinberg Auditorium

**Abstract:** It seems that we care about at least two features of our credence function: accuracy (high credence in truths, low credence in falsehoods) and verisimilitude (investing higher credence in worlds that are more similar to the actual world). Accuracy-first epistemology requires that we care about only one feature of our credence function: accuracy. So if you want to be a verisimilitude-valuing accuracy-firster, you must be able to think of the value of verisimilitude as somehow built into the value of accuracy. Can this be done? In a recent paper, Graham Oddie has argued that it cannot, at least if we want the accuracy measure to be proper. I argue that it can.
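For readers unfamiliar with propriety, here is a minimal numerical sketch of what makes an accuracy measure "proper". This is standard background, not material from the talk: under the Brier (squared-error) score, an agent with credence p in a proposition expects reporting p itself to minimize inaccuracy. The grid search below is purely illustrative.

```python
# Illustration: the Brier score is a proper accuracy measure.
# An agent with credence p expects her own credence to be the
# least-inaccurate report she could make.

def brier(report, truth):
    """Squared-error inaccuracy of a credence `report`, given truth value 0 or 1."""
    return (report - truth) ** 2

def expected_inaccuracy(p, report):
    """Expected Brier inaccuracy of `report` by the lights of credence p."""
    return p * brier(report, 1) + (1 - p) * brier(report, 0)

p = 0.7
# Search reports on a grid of 0.00, 0.01, ..., 1.00 for the one that
# minimizes expected inaccuracy under credence p.
best = min((r / 100 for r in range(101)), key=lambda r: expected_inaccuracy(p, r))
print(best)  # 0.7 -- reporting your actual credence is expectation-minimizing
```

Oddie's worry, on this picture, is whether a measure can simultaneously reward closeness to the actual world (verisimilitude) and retain this self-recommending property.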