Carnegie Mellon University
2013-2014

2013-2014 Lectures & Colloquia

Fall


September 5, 2013, Philosophy Colloquium

Hans Halvorson (Princeton University) and James Weatherall (UC Irvine)
What is a Scientific Theory? or, How Might You Best Formalize a Scientific Theory, Were You Inclined to Do So?

Abstract: We make a proposal, inspired by recent work in categorical logic, for what kind of mathematical object might be used to represent a scientific theory, for the purposes of addressing certain kinds of conceptual and foundational questions. We then show how the proposal might be used in practice by exploring a handful of examples from physics.


September 6, 2013, Philosophy of Physics Workshop

Hans Halvorson (Princeton University) and James Weatherall (UC Irvine)

Abstract: This event will be based on a discussion of the Thursday talk, with presentation of additional material.


September 19, 2013, Philosophy Colloquium

Berit Brogaard, University of Missouri - St. Louis
Color and Cognitive Penetrability

Abstract: Though the modularity of mind hypothesis was called into question long ago, a version of it lingers for the case of color. Empirical evidence, however, suggests that even color processing is not immune to top-down influences from cognitive systems. Here I argue that there are several ways that one can understand this cognitive penetrability hypothesis. On two interpretations, it makes a claim about the cognitive processing of determinate colors and thus turns out to be uncontentious. On a third interpretation, it makes a claim about low-level contrast and indeterminate color processing. There is no reason to think that this low-level processing of color is subject to top-down influences from cognitive systems, but the properties computed at this level also play no significant role in color judgments and discrimination tasks.


October 3, 2013, Philosophy Colloquium

Peter Godfrey-Smith, CUNY Graduate Center
Memory as Communication

Abstract: Generalization of the Lewis model of communication by Skyrms and others has shown its great breadth of application. I'll look at the idea of treating memory as communication over time, communication between stages or time-slices of a self. This can inform debates about the 'reconstructive' nature of memory, the function of episodic memory, and the relations between memory in genetic, epigenetic, and neural systems.
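
To fix ideas, here is a minimal sketch of the underlying Lewis signaling setup, recast as communication between time-slices of one agent; the 3-state game, the function names, and the payoff scheme are illustrative assumptions of mine, not Godfrey-Smith's own model (Python):

    import random

    STATES = SIGNALS = ACTS = [0, 1, 2]   # a simple 3x3x3 Lewis signaling game

    def encode(state):                    # "sender": the past self writes a trace
        return state                      # one of the perfect signaling systems

    def decode(signal):                   # "receiver": the future self reads it
        return signal

    def expected_payoff(trials=10_000):
        """Common interest: payoff 1 when the recovered act matches the state."""
        hits = 0
        for _ in range(trials):
            state = random.choice(STATES)
            hits += decode(encode(state)) == state
        return hits / trials

    print(expected_payoff())              # ~1.0 for a perfect signaling system

On this reading, a "reconstructive" memory would correspond to a decode map that mixes the stored trace with other information, trading fidelity for other benefits.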


October 10, 2013, Philosophy Colloquium

Dana Scott, University Professor Emeritus, Carnegie Mellon University
Visiting Scholar in Mathematics, UC Berkeley
Stochastic Lambda Calculi

Abstract: Models for lambda calculus can be given as topological spaces of simple kinds. All such spaces allow for the adjunction of random variables in a standard way. The lambda calculus can be viewed as a basic programming language for recursive functions, and a random variable can be regarded as a probabilistic oracle. In that way, the lambda calculus (with a few extra combinators) becomes a programming language for probabilistic computation. The author wishes to ask for help in finding interesting examples to be set out in this formulation.
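
As a toy illustration of the recipe in the abstract, the following Python sketch adjoins a random "oracle" combinator (here called flip, a name of my choosing) to ordinary lambda terms; it is meant only to convey the flavor of the enriched calculus, not Scott's topological models:

    import random

    # flip(p) acts like a Church boolean that selects its first argument
    # with probability p: a probabilistic oracle adjoined as a combinator.
    flip = lambda p: (lambda t: lambda f: t if random.random() < p else f)

    # Call-by-value fixed-point combinator, so recursion stays in the calculus.
    Y = lambda F: (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

    # A geometric sampler written as a lambda term over the enriched calculus:
    # return 0 with probability 1/2, otherwise 1 + recurse.
    geometric = Y(lambda rec: lambda _:
                  flip(0.5)(lambda: 0)(lambda: 1 + rec(None))())

    print([geometric(None) for _ in range(10)])   # e.g. [0, 2, 0, 1, 0, ...]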


October 11, 2013, Special Lecture in Honor of Dana Scott's 81st Birthday

Title: "What is Logical Truth?"
Speaker: C. McCarty
Reception: 4:00-4:25 GHC 6115, Talk: 4:30-6:00 GHC 6115

Abstract: We use metamathematical tools to compare and evaluate two familiar, but strikingly different definitions of the notion "logical truth," one from Hilbert, which quantifies over sentences and their substitutions, and one from Russell, which quantifies over truth-values. On the basis of an exploration of logical truths both classical and intuitionistic, we come to favor the Russellian definition over that of Hilbert.
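
One natural way to render the contrast formally, for the propositional case (my schematic formalization, not necessarily McCarty's own statement): writing $A(P_1,\ldots,P_n)$ for a sentence built from schematic letters,

    $\mathrm{LT}_H(A) \;:\iff\; \mathrm{True}\bigl(A[B_1/P_1,\ldots,B_n/P_n]\bigr) \text{ for all sentences } B_1,\ldots,B_n$

    $\mathrm{LT}_R(A) \;:\iff\; \llbracket A \rrbracket_v = \top \text{ for every assignment } v : \{P_1,\ldots,P_n\} \to \{\top,\bot\}$

The substitutional definition is sensitive to the expressive resources of the language from which the $B_i$ are drawn, which is one place the two notions can come apart.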

November 7, 2013, Pure and Applied Logic Colloquium

Ulrik Buchholtz, Stanford University
The unfolding of systems of inductive definitions

Abstract: The unfolding of a schematic system S, as defined by Feferman, seeks to answer the question: Given S, which operations and predicates, and which principles concerning them, ought to be accepted if one has accepted S? I will relate the results of my investigations into the proof theory of the unfolding of systems of positive inductive definitions, in particular ID1, the schematic system of one arithmetical inductive definition. The proof-theoretic ordinal arises as the collapse of the first strongly critical ordinal after the first uncountable ordinal. The techniques used are the proof theory of admissible set theory and asymmetric interpretation.
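
In the standard notation of ordinal collapsing functions, the ordinal just described would be written (my rendering of the verbal description above, not a quotation from the work itself)

    $\psi(\Gamma_{\Omega+1})$

where $\Omega$ is the first uncountable ordinal and $\Gamma_{\Omega+1}$ is the first strongly critical ordinal above it, i.e. the least $\alpha > \Omega$ such that $\varphi_\alpha(0) = \alpha$ in the Veblen hierarchy.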

I will motivate the formal theories for unfoldings, situate the results in a broader ordinal-proof-theoretic context, and then sketch the proofs of the main results.


November 14, 2013, Philosophy Colloquium

Marc Fleurbaey, Princeton University
Risk and potential people

Abstract: Suppose that you are a morally motivated stranger, who, under conditions of risk, must make choices that will influence the well-being, and, on occasion, the identities of the people who will exist - though never the number of people who exist. How ought you to choose? This paper addresses this question.


November 21, 2013, Philosophy Colloquium

Juliet Floyd, Boston University
The Sheffer Box

Abstract: In May 2012, with the help of Bernard Linsky, I discovered a missing box of original documents and research materials pertaining to the work of H.M. Sheffer (1882-1964), the logician known for his strokes. The box had been assembled and saved by Burton Dreben, with whom I worked on Sheffer in 1988. This talk will report on my assembly and analysis of these primary source materials.

Sheffer is of interest for our understanding of the reception of logic in the United States: the work of Frege, Russell, Zermelo, the early Wittgenstein, and others. A student of Royce, William James, Huntington, and Russell, he made some contributions to the algebra of logic tradition, though his own "general theory of notational relativity" never reached mathematical fruition. He influenced many, including C.H. Langford. In addition, however, the philosophical strands of his thinking form an interesting comparison and contrast to those of his contemporaries Wittgenstein and C.I. Lewis, on whom he lectured at Harvard.

Sheffer took down the only extant notes of Bertrand Russell's 1910 lectures at Cambridge University, and these are among the materials I shall discuss. Other materials in the box include correspondence with Russell in which Sheffer describes meeting with Frege, Peano, Hadamard and others in 1911; notes of Sheffer's spring 1922 logic lectures at Harvard, taken down by Marvin Farber; Farber's correspondence with Sheffer concerning his work with Zermelo in 1923, as well as other lecture notes of relevance to our understanding of the history and presentation of logic in the United States in the 1910s and 1920s.


December 5, 2013, Pure and Applied Logic Colloquium

Michael Ernst, UC Irvine
A Paradox in Naive Category Theory: Answering Feferman's Question

Abstract: I consider Solomon Feferman's ongoing search for a foundation for what he calls 'unlimited category theory'. I derive a paradox in naive category theory using only assumptions that fall under Feferman's foundational requirements. This has important consequences for the possibility of categories containing all objects of a given type.

Spring


January 20, 2014, Philosophy Colloquium

Jonathan Birch, Christ's College, Cambridge
Gene Mobility and the Concept of Relatedness

Abstract: Our best current theories of the evolution of biological altruism were built with multicellular animals in mind. What happens when we apply these theories to the evolution of altruism in the microbial world? In particular, what happens to the concepts of inclusive fitness and genetic relatedness, given the propensity of microbes to trade genes 'horizontally' with one another? I use simple models (based on the Price equation) to address these questions. I argue that, for the purposes of inclusive fitness theory, the right concept of relatedness in microbial populations is diachronic, and measures the genetic similarity between the producers of fitness benefits at the time they produce them and the recipients of those benefits at the end of their life-cycle. With this revised concept of relatedness in hand, we can see how altruism can evolve even when there is no assortment among bearers of the genes for altruism at the time they interact.
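
For reference, the Price equation on which these models are based decomposes the one-generation change in the population average $\bar{z}$ of a trait as

    $\bar{w}\,\Delta\bar{z} \;=\; \mathrm{Cov}(w_i, z_i) \;+\; \mathbb{E}(w_i\,\Delta z_i)$

where $w_i$ is the fitness of individual $i$ and $\bar{w}$ is mean fitness; the covariance term is where selection, and hence relatedness understood as a regression coefficient, enters. (This is the textbook form; Birch's specific diachronic relatedness measure is not reproduced here.)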


January 23, 2014, Pure and Applied Logic Colloquium

Adam Bjorndahl, Cornell University
Language-based games

Abstract: We introduce a generalization of classical game theory wherein each player has a fixed "language of preference": a player can prefer one state of the world to another if and only if they can describe the difference between the two in this language. The expressiveness of the language therefore plays a crucial role in determining the parameters of the game. By choosing appropriately rich languages, this framework can capture classical games as well as various generalizations thereof (e.g. psychological games, reference-dependent preferences, and Bayesian games). On the other hand, coarseness in the language -- cases where there are fewer descriptions than there are actual differences to describe -- offers insight into some long-standing puzzles of human decision-making. The Allais paradox, for instance, can be resolved simply and intuitively using a language with coarse beliefs: that is, by assuming that probabilities are represented not on a continuum, but discretely, using finitely many "levels" of likelihood (e.g. "no chance", "slight chance", "unlikely", "likely", etc.).
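
The following sketch illustrates the coarse-beliefs idea on the standard Allais lotteries; the particular likelihood buckets are an illustrative assumption of mine, not the paper's construction (Python):

    # Standard Allais lotteries as (probability, prize in $M) pairs.
    L1A = [(1.00, 1)]                              # a sure $1M
    L1B = [(0.10, 5), (0.89, 1), (0.01, 0)]
    L2A = [(0.11, 1), (0.89, 0)]
    L2B = [(0.10, 5), (0.90, 0)]

    def coarsen(p):
        """Map a probability to a coarse likelihood level (hypothetical buckets)."""
        if p == 0:    return "no chance"
        if p <= 0.05: return "slight chance"
        if p <= 0.35: return "unlikely"
        if p < 1:     return "likely"
        return "certain"

    def describe(lottery):
        """The lottery as it can be expressed in the coarse language."""
        return sorted((coarsen(p), prize) for p, prize in lottery)

    # 0.89 and 0.90 both coarsen to "likely", 0.10 and 0.11 to "unlikely":
    # in the coarse language, 2A and 2B differ only in prizes (favoring 2B),
    # while 1A's certainty versus 1B's "slight chance" of $0 remains visible
    # (favoring 1A) -- the usual Allais pattern, now without inconsistency.
    for name, lot in [("1A", L1A), ("1B", L1B), ("2A", L2A), ("2B", L2B)]:
        print(name, describe(lot))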

Many standard solution concepts from classical game theory can be imported into the language-based framework by taking their epistemic characterizations as definitional. In this way, we obtain natural generalizations of Nash equilibrium, correlated equilibrium, and rationalizability. Using a simple example in which one player wishes to surprise her opponent, we show that there are language-based games that admit no Nash equilibrium. By contrast, the existence of rationalizable strategies can be proved under mild conditions.

This is joint work with Joe Halpern and Rafael Pass.


January 27, 2014, Philosophy Colloquium

Anders Schoubye, University of Edinburgh
Type Ambiguous Names

Abstract: The orthodox view of proper names, Millianism, provides a very simple and elegant explanation of the semantic contribution (and semantic properties) of referential uses of names, namely uses in which the name occurs as a bare singular in the argument position of a predicate. However, one problem for Millianism is that it cannot explain the semantic contribution of predicative uses of names (as in e.g. 'there are two Alberts in my class'). In recent years, an alternative view, so-called The-Predicativism, has become increasingly popular. According to The-Predicativists, names are uniformly count nouns. This straightforwardly explains why names can be used predicatively, but is prima facie less congenial to an analysis of referential uses. To address this issue, The-Predicativists argue that referential names are in fact count nouns flanked by a covert definite determiner - and so, a referential name is a (covert) definite description. In this talk, I will argue that despite the appearance of increased theoretical complexity, the view that names are ambiguous between predicative and referential types is in fact superior to the unitary The-Predicativist view. However, I will also argue that to see why this (type) ambiguity view is better, we need to rethink the analysis of referential names - in particular, we need to give up the standard Millian analysis. Consequently, I will first propose an alternative analysis of referential names that (a) retains the virtues of Millianism, but (b) provides an important explanatory connection to the predicative uses. Once this analysis of names is adopted, the explanation for why names are systematically ambiguous between referential and predicative types is both simple and elegant.
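
Schematically, the The-Predicativist analysis described above runs as follows (my rendering of the standard proposal, not a quotation from the talk): the lexical entry is a count noun,

    $\llbracket \textit{Albert} \rrbracket \;=\; \lambda x.\, x \text{ is called Albert}$

and a referential occurrence is parsed as $[\varnothing_{\textit{the}}\ \textit{Albert}]$, denoting the individual $\iota x[x \text{ is called Albert}]$ - a covert definite description.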


January 30, 2014, Philosophy Colloquium

Noel Swanson, Princeton University
Cosmopolitan QFT: Lessons from Modular Theory for Contemporary Particle Physics

Abstract: I discuss three different foundational approaches to quantum field theory (Lagrangian, constructive, and algebraic) and argue that despite some recent claims to the contrary, there is no deep disagreement between them. Ultimately we expect to see convergence between these programs, although at present there are still significant technical and conceptual hurdles to overcome. Given the limitations of our current knowledge and the need for creative new ideas, I argue that philosophers of QFT should adopt a cosmopolitan approach to the subject which aligns itself more closely with scientific practice. In this vein, I give a philosophically-oriented introduction to Tomita-Takesaki modular theory, a new mathematical tool from algebraic QFT with an increasingly wide range of physical applications. I contend that modular theory stands in a unique position to help bridge the gap between various foundational programs. I focus on the tight connection between modular structure and spacetime symmetries captured by the Bisognano-Wichmann theorem, illustrating its utility by showing how the theorem can be used to give insight into the physical basis for parity-charge-time (PCT) symmetry.
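
For orientation, the Bisognano-Wichmann theorem mentioned above can be glossed as follows (a standard statement, with sign conventions that vary across the literature; this is my gloss, not part of the abstract): for the algebra of observables localized in the right wedge $W = \{x : |x^0| < x^1\}$, with the vacuum as cyclic and separating vector, the Tomita-Takesaki modular objects acquire direct geometric meaning:

    $\Delta^{it} = U(\Lambda_W(2\pi t)), \qquad J = \Theta\, U(R_1(\pi))$

That is, the modular flow $\Delta^{it}$ implements the one-parameter group of wedge-preserving Lorentz boosts, and the modular conjugation $J$ is the PCT operator $\Theta$ composed with a rotation by $\pi$ about the wedge axis. This is the sense in which modular structure encodes both spacetime symmetry and PCT.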


February 3, 2014, Philosophy Colloquium

Benjamin George, Yale University
Question-Knowledge and the (Non-)Reducibility of Question Embedding

Abstract: There is a large class of question-embedders (the 'responsive' embedders, in Lahiri's terminology) that double as propositional attitudes: these include 'forget', 'learn', 'agree', 'be certain', and 'know'. That is, knowledge ascriptions (for example) can relate a person to a question (1) or a proposition (2):

1. Anne knows where she can buy a newspaper.

2. Anne knows that she can buy a newspaper at PaperWorld.

A standard approach in question semantics (explicitly endorsed by Higginbotham, but also seen in the practice of Karttunen, Groenendijk & Stokhof, and others) has been to analyze question-knowledge ascriptions like (1) as having truth conditions that can be reduced completely to propositional knowledge and the question-answer relationship, so that the truth or falsity of (1) depends only on the kinds of facts expressed by (2). (This has met with more suspicion in the knowledge literature, with Ryle and, more recently, Schaffer disputing the reductive characterization of question-knowledge, and others (e.g. Stanley) offering defenses against these criticisms.)
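
The reducibility thesis at issue can be stated schematically (my formalization of the Karttunen-style analysis, not a quotation): where $\mathrm{Ans}(Q)$ is the set of true answers to a question $Q$,

    $\mathrm{know}(a, Q) \;\iff\; \forall p \in \mathrm{Ans}(Q)\; \mathrm{know}(a, p)$

so that the truth of a question-knowledge ascription supervenes entirely on propositional knowledge plus the question-answer relation.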

In this talk, I present a new case against reducibility, arguing that the truth of (1) is not entirely determined by facts about what answers Anne knows, but that it also depends on what potential answers Anne thinks she knows. This argument is neutral with respect to many details of the question-answer relationship, and corresponds to similar arguments for attitudes like forgetting and agreement. Although I argue that we must abandon reducibility, it is still desirable to capture the connection between the propositional and question-oriented uses of each responsive embedder: we would not want to say that the 'know' of (1) and the 'know' of (2) are unrelated attitudes. I address this with a semantics of question-embedding that limits the range of possible responsive embedders, linking question-knowledge and propositional knowledge without reducing the former to the latter.


February 10, 2014, Philosophy Colloquium

Julia Bursten, University of Pittsburgh
Surface Tensions: Challenges to Philosophy of Science from Nanoscience

Abstract: A traditional view of the structure of scientific theories, on which philosophers of science have based their accounts of explanation, modeling, and inter-theory relations, holds that scientific theories are composed of universal natural laws coupled with initial and boundary conditions. In this picture, universal laws play the most significant role in scientific reasoning. Initial and boundary conditions are rarely differentiated and their role in reasoning is largely overlooked. In this talk, I use the problem of modeling surfaces in nanoscience to show why this dismissal is deeply problematic both for philosophers of science and for scientists themselves.

In macroscopic-scale modeling, surfaces are treated as boundaries in the mathematical sense - that is, as infinitesimally thin borders of a system that confine its interior. As such, surface structure and behavior are usually modeled in an idealized manner that ignores most of the physics and chemistry occurring there. At the nanoscale, however, the structure and behavior of these surfaces significantly constrain the structure and behavior of the interior in more complex ways. Three important conclusions emerge:

1. The very concept 'surface' changes as a function of scale, and other central concepts in nanoscience also behave in this scale-dependent manner.
2. The traditional view of theory described above does not adequately capture the nature of nanomaterials modeling, which requires attention to multiple models constructed at different characteristic scales. These component models do not comport well with the single set of universal laws that the standard view posits. Instead, boundary behaviors become crucial and models are designed to capture these behaviors.
3. The projects of nanomaterials modeling and synthesis dictate that divisions between boundaries and interiors must be continually adjusted. Overlooking this problem has led to failures of experimental design and interpretation of data.


February 13, 2014, Philosophy Colloquium

Vikash Mansinghka, MIT
Probabilistic computing for Bayesian inference

Abstract: Probabilistic modeling and Bayesian inference provide a unifying theoretical framework for uncertain reasoning. They have become central tools for engineering machine intelligence, modeling human cognition, and analyzing structured and unstructured data. However, they often seem far less simple, complete and expressive in practice than they are in theory. Domains such as robotics and statistics involve diverse modeling idioms, speed/accuracy requirements, dataset sizes, and approximation techniques. Also, inference in classic latent variable models can be computationally challenging, while state-of-the-art models for perception and data-driven discovery are difficult to even specify formally.

In this talk, I will describe probabilistic computing systems that address several of these challenges and that fit together into a mathematically coherent software and hardware stack for Bayesian inference. I will focus on Venture, a new, Turing-complete probabilistic programming platform descended from the Church probabilistic programming language. In Venture, models are represented by executable code, with random choices corresponding to latent variables. Venture builds on a generalization of graphical models to provide scalable, reprogrammable, general-purpose inference mechanisms, including hybrids of Markov chain Monte Carlo, sequential Monte Carlo, and variational techniques. I will describe applications in computer vision and high-dimensional statistics with a 100x savings in lines of code. I will also describe BayesDB, a system that lets users query some of the probable implications of tabular data as directly as SQL lets them query the data itself.
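
As a generic illustration of the core idea - models as executable code whose random choices are the latent variables, queried by approximate inference - here is a self-contained Python sketch using likelihood weighting; it is not Venture's actual surface syntax or inference engine:

    import random

    def model():
        """Executable model code: the random choice is the latent variable."""
        return random.random()                 # prior: bias ~ Uniform(0, 1)

    def likelihood(bias, data):
        """Probability of the observed coin flips given the latent bias."""
        p = 1.0
        for heads in data:
            p *= bias if heads else (1 - bias)
        return p

    def posterior_mean(data, samples=100_000):
        """Likelihood weighting: reweight prior samples by the data's probability."""
        num = den = 0.0
        for _ in range(samples):
            bias = model()
            w = likelihood(bias, data)
            num += w * bias
            den += w
        return num / den

    print(posterior_mean([1, 1, 1, 0]))        # ~0.67: the Beta(4, 2) posterior mean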


February 20, 2014, Philosophy Colloquium

Jonathan Wolff, University College London
Paying People to Act in Their Own Interests: Incentives versus Rationalisation

Abstract: A number of schemes have been attempted, both in public health and more generally within social programmes, to pay individuals to behave in ways that are presumed to be good for them or to have other beneficial effects. Such schemes are normally regarded as providing a financial incentive for individuals in order to outweigh contrary motivation. Such schemes have been attacked on the basis that they can 'crowd out' intrinsic motivation, as well as on the grounds that they are in some sense 'corrupt'. In response, they have been defended on the grounds that they can 'crowd in' improved motivation. I will argue that these debates have tended to overlook the difficulties individuals can have when attempting to behave against peer group norms. In some cases financial payments can allow individuals to defend their actions on the grounds that 'I am only doing it for the money' in circumstances when it would be difficult to defend their action on their real motivations. Examples of paying children to read books, and paying women to give up smoking in pregnancy will be discussed.


March 20, 2014, Philosophy Colloquium

Jeremy Heis, UC Irvine
Why Did Geometers Stop Using Diagrams?

Abstract: The consensus for the last century or so has been that diagrammatic proofs are not genuine proofs. Recent philosophical work, however, has shown that (at least in some circumstances) diagrams can be perfectly rigorous. The implication of this work is that, if diagrammatic reasoning in a particular field is illegitimate, it must be so for local reasons, not because of some in-principle illegitimacy of diagrammatic reasoning in general. In this talk, I try to identify some of the reasons why geometers in particular began to reject diagrammatic proofs. I argue that the reasons often cited nowadays -- that diagrams illicitly license inference from a particular case to all cases, or can't handle analytic notions like continuity -- played little role in this development. I highlight one very significant (but rarely discussed) flaw in diagrammatic reasoning: diagrammatic methods don't allow for fully general proofs of theorems. I explain this objection (which goes back to Descartes), and how, around 1820, Poncelet and his school developed new diagrammatic methods to meet it. As I explain, these new methods required a kind of diagrammatic reasoning that is fundamentally different from the now well-known diagrammatic method from Euclid's Elements. And, as I show (using the case of synthetic treatments of the duals of curves of degree higher than 2), it eventually became clear that this method does not work. Truly general results in "modern" geometry could not be proven diagrammatically.


April 10, 2014, Philosophy Colloquium

Michael Dunn, IU Bloomington
Contradictory Information: Better than Nothing

Abstract: This is a kind of dual follow-up to my paper "Contradictory Information: Too Much of a Good Thing" (Journal of Philosophical Logic, Volume 39, Number 4, 425-452). In that earlier paper I embedded the “Belnap-Dunn 4-valued Logic” (Truth, Falsity, Neither, Both) into a context of probability by generalizing A. Jøsang’s “Subjective Logic” (“Artificial Reasoning with Subjective Logic,” Proceedings of the Second Australian Workshop on Commonsense Reasoning, Perth, 1997) to allow for degrees of belief, disbelief, and uncertainty. I extended this so as to split that third value into two kinds of uncertainty: that in which the reasoner has too little information (ignorance) and that in which the reasoner has too much information (conflict). Jøsang’s “Opinion Triangle” was thus expanded to an “Opinion Tetrahedron” with the 4 values as its vertices. I motivated the usefulness of this extension by the truism that the World Wide Web is full of contradictions. (I am not now, nor have I ever been, a dialetheist. I believe, as does Belnap, that there can be contradictions in our information -- theories, belief systems, databases, the World Wide Web, whatever -- but not in the “real world.”)
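
One way to make the tetrahedron concrete (an illustrative Python encoding of mine, not Dunn's or Jøsang's exact formalism): an opinion is a point in the simplex spanned by the four values, with separate coordinates for the two kinds of uncertainty.

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        belief: float      # evidence for
        disbelief: float   # evidence against
        ignorance: float   # uncertainty from too little information
        conflict: float    # uncertainty from too much (contradictory) information

        def __post_init__(self):
            total = self.belief + self.disbelief + self.ignorance + self.conflict
            assert abs(total - 1.0) < 1e-9, "coordinates must lie on the simplex"

    # The four Belnap-Dunn values sit at the tetrahedron's vertices:
    TRUE, FALSE = Opinion(1, 0, 0, 0), Opinion(0, 1, 0, 0)
    NEITHER, BOTH = Opinion(0, 0, 1, 0), Opinion(0, 0, 0, 1)

    # Contradictory but partly credible search results: located toward BOTH,
    # not at FALSE -- which is the point of the paragraph that follows.
    web_opinion = Opinion(belief=0.4, disbelief=0.4, ignorance=0.0, conflict=0.2)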

I review my earlier results but emphasize that contradictions are not always an entirely bad thing. I think we have all found in our googling that it is often better to find contradictory information on a search topic than to find no information at all. I explore some of the various reasons why this may be, which include finding that there is at least active interest in the topic, appraising the credentials of the informants, counting their relative number, assessing their arguments, trying to reproduce their experimental results, discovering their authoritative sources, etc. Any or all of these might apply to a particular case, and in sum they allow us to assign the contradictory information a coordinate in the Opinion Tetrahedron that is different from merely False. These reasons are pragmatic in character and have not so much to do with the content of the information as with its sources.