Carnegie Mellon University

Upcoming Workshop


HoTT 2019

August 12-17, 2019

The first international conference on Homotopy Type Theory (HoTT 2019) will be held at CMU the week of August 12-17, 2019.

Just beforehand, August 7-10, there will be a HoTT Summer School aimed at graduate students in Mathematics, Logic, and Computer Science.
For more information, watch the website: https://hott.github.io/HoTT-2019/


Past Workshops and Conferences

Philosophical Issues in Research Ethics
Center for Ethics and Policy

November 2-3, 2018

The CEP’s second biennial workshop on ethics and policy will bring scholars together from across North America to discuss philosophical issues in research ethics. The goal of this workshop is to promote more philosophically rigorous work in the field of research ethics and to bring more philosophers into the research ethics community.


Logic, Information, and Topology Workshop
Center for Formal Epistemology

Saturday, October 20, 9 am–6 pm
Baker Hall 136A - Adamson Wing

Dynamic epistemic logic concerns the information conveyed by the beliefs of other agents. Belief revision theory studies rational belief change in light of new information. Formal learning theory concerns systems that converge to the truth as their information increases. Topology is emerging as a particularly apt formal perspective on the underlying concept of propositional information. The talks in this workshop address the preceding themes from a range of overlapping perspectives.

Workshop Organizers:
Kevin Kelly
Adam Bjorndahl

Invited Speakers:

KNOWABLE DEPENDENCY: A TOPOLOGICAL ACCOUNT

If to be is to be the value of a variable, then to know is to know a functional dependence between variables. (Moreover, the conclusion may arguably still be true even if Quine's premise is wrong...) This points towards a fundamental connection between Hintikka's Epistemic Logic and Väänänen's so-called Dependence Logic (itself anticipated by the Independence-Friendly Logic of Hintikka and Sandu). The connection was made precise in the Epistemic Dependence Logic introduced in my 2016 AiML paper. Its dynamics captures the widespread view of knowledge acquisition as a process of learning correlations (with the goal of eventually tracking causal relationships in the actual world). However, when talking about empirical variables in the natural sciences, the exact value might not be knowable; instead, only inexact approximations can be known. This leads to a topological conception of empirical variables, as maps from the state space into a topological space. Here, the exact value of the variable is represented by the output of the map, while the open neighborhoods of this value represent the knowable approximations of the exact answer. I argue that, in such an empirical context, knowability of a dependency amounts to the continuity of the given functional correlation. To know (in natural science) is to know a continuous dependence between empirical variables.
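
In symbols (our schematic gloss on the abstract's topological picture, not the speaker's own notation): empirical variables are maps from the state space into topological value spaces, and the claim is that a functional dependence between them is knowable just in case the function is continuous.

    $X : S \to (V_X, \tau_X), \qquad Y : S \to (V_Y, \tau_Y), \qquad Y = f \circ X$
    $f \text{ knowable} \iff f \text{ continuous:}\ \forall\,\text{open } W \ni f(v)\ \exists\,\text{open } U \ni v \text{ with } f[U] \subseteq W$

That is, every knowable approximation W of the exact answer for Y can be obtained from some good enough knowable approximation U of the value of X.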

Alexandru Baltag's Website

BELIEFS, PROPOSITIONS and DEFINITE DESCRIPTIONS

In this paper, we introduce a doxastic logic with expressions that are intended to represent definite descriptions for propositions. Using these definite descriptions, we can formalize sentences such as:

  • Ann believes that the strangest proposition that Bob believes is that neutrinos travel at twice the speed of light.
  • Ann believes that the strangest proposition that Bob believes is false.

The second sentence has both de re and de dicto readings, which are
distinguished in our logic. We motivate our logical system with a novel analysis of the Brandenburger-Keisler paradox. Our analysis of this paradox uncovers an interesting connection between it and the Kaplan-Montague Knower paradox.
(This is joint work with Wes Holliday)

Eric Pacuit's Website

THE EPISTEMOLOGY OF NONDETERMINISM

Propositional dynamic logic (PDL) is a framework for reasoning about nondeterministic program executions (or, more generally, nondeterministic actions). In this setting, nondeterminism is taken as a primitive: a program is nondeterministic iff it has multiple possible outcomes. But what is the sense of "possibility" at play here? This talk explores an epistemic interpretation: working in an enriched logical setting, we represent nondeterminism as a relationship between a program and an agent deriving from the agent’s (in)ability to adequately measure the dynamics of the program execution. More precisely, using topology to capture the observational powers of an agent, we define the nondeterministic outcomes of a given program execution to be those outcomes that the agent is unable to rule out in advance. In this framework, continuity turns out to coincide exactly with determinism: that is, determinism is continuity in the observation topology. This allows us to embed PDL into (dynamic) topological (subset space) logic, laying the groundwork for a deeper investigation into the epistemology (and topology) of nondeterminism.
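
One way to render the talk's slogan schematically (our paraphrase, assuming the executed program is in fact a function f on states and the agent's observational powers are modeled by a topology on the state space):

    $f \text{ deterministic at } s \iff f \text{ continuous at } s \iff \forall\,\text{open } V \ni f(s)\ \exists\,\text{open } U \ni s:\ f[U] \subseteq V$

Read: some observation U, available before the program runs, already confines the outcome to any desired neighborhood V of f(s). Where continuity fails, outcomes other than f(s) cannot be ruled out in advance, and the execution counts as nondeterministic.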

Adam Bjorndahl's Website

THE TOPOLOGY OF STATISTICAL INQUIRY

Taking inspiration from Kelly's The Logic of Reliable Inquiry (1996), Baltag et al. (2015) and Genin and Kelly (2015) provide a general topological framework for the study of empirical learning problems. Baltag et al. (2015) prove a key result showing that it is almost always possible to learn by stable and progressive methods, in which the truth, once in the grasp of a learning method, is never relinquished. That work is grounded in a non-statistical account of information, on which information states decisively refute incompatible possibilities. However, most scientific data is statistical, and in these settings logical refutation rarely occurs. Critics, including Sober (2015), doubt that the gap between propositional and statistical information can be bridged. In Genin (2018), I answer the skeptics by identifying the unique topology on probability measures whose closed sets are exactly the statistically refutable propositions. I also show that a statistical analogue of progressive learning can be achieved in this more general setting. That result erects a topological bridge on which insights from learning theory can be ported directly into machine learning, statistics, and the data-driven sciences.

Kasey Genin's Website

SPATIAL MODELS OF HIGHER-ORDER S4

Topological spaces provide a model for propositional S4 modal logic (McKinsey and Tarski 1944) in which the modal operators can be thought of as expressing verifiability and refutability (cf. Schulte and Juhl 1996, Kelly 1996, ...). It is natural to ask: is there a "spatial" notion of model which stands in the same relation to (modal S4) predicate logic as topology does to propositional logic?

Garner (2010) introduced ionads to provide a notion of "higher topological space". The sheaf semantics of Awodey and Kishida (2008) yields a special example of an ionad. A generalization of Garner's ionads is suggested here as a response to our question, in which the "points" will themselves often be mathematical structures (e.g. groups, rings,...), considered together with their isomorphisms. Any such generalized ionad is a model of (classical) higher-order S4 (by application of Zwanziger 2017). Furthermore, to any generalized ionad, we can associate a Grothendieck topos (analogous to the poset of opens of a topological space) that is generated canonically from a theory in the verifiable (geometric) fragment of first-order logic. Thus, generalized ionads may be of interest for applications to verifiability and refutability.

Colin Zwanziger's Website


A One-Day PRASI Writing Workshop

October 7, 2018 - 9 am–5 pm

Scaife Hall 219

Workshop Organizer: Martha Harty, Carnegie Mellon University

PRASI Workshop Presenters:

  • Mary Adams Trujillo
  • Hasshan Batts
  • Beth Roy

PRASI is sponsoring an all-day workshop supporting all kinds of people to accomplish all kinds of writing.

The Practitioners Research and Scholarship Institute (PRASI) is a multicultural group of conflict transformation practitioners and writers dedicated to producing literature that reflects the full diversity of our society. PRASI began within the conflict resolution community, with the goal of creating a written basis for the profession grounded in the lived experience of practitioners of many cultural backgrounds. Its writing workshops are open to people who want to write across a broad spectrum of genres: creative, literary, academic, and professional.

To register or for information, email Beth Roy.


Workshop on Foundations of Causal Discovery
Center for Formal Epistemology
Workshop Schedule

September 22-23, 2018
Baker Hall A53 Steinberg Auditorium
Workshop Organizers: Kevin Kelly and Kun Zhang

Rationale: It is well known that correlation does not directly imply causation. However, patterns of correlation or, more generally, statistical independence, can imply causation, even if the data are non-experimental. Over the past four decades, that insight has given rise to a rich exploration of causal discovery techniques. But there is an important epistemological catch. While it is true that one can sometimes deduce causation from the true independence relations among variables, in practice one must infer those relations from finite samples, and the chance of doing so in error is subject to no a priori bound. Furthermore, such inferences are sometimes based on assumptions that may be questioned, such as that causal pathways never cancel exactly, neutralizing their effects. This workshop brings together experts from philosophy, statistics, and machine learning to shed fresh light on the special epistemological issues occasioned by causal discovery from non-experimental data.
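
A minimal sketch of that idea (our illustration in Python; the simulated model and helper names are ours, not the workshop's): in the collider structure X → Z ← Y, the variables X and Y are unconditionally independent but become dependent conditional on Z, and that pattern of independencies orients both edges without any experiment. The epistemological catch above still applies: on finite samples the printed quantities are estimates of the true independence facts, not deductions from them.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 50_000
    x = rng.normal(size=n)                 # exogenous cause
    y = rng.normal(size=n)                 # exogenous cause, independent of x
    z = x + y + 0.5 * rng.normal(size=n)   # common effect (collider)

    def partial_corr(a, b, c):
        # correlation of a and b after linearly regressing out c
        ra = a - np.polyval(np.polyfit(c, a, 1), c)
        rb = b - np.polyval(np.polyfit(c, b, 1), c)
        return stats.pearsonr(ra, rb)

    print(stats.pearsonr(x, y))    # r near 0: marginally independent
    print(partial_corr(x, y, z))   # r clearly negative: dependent given z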

This workshop is open to the public. For more information, contact the Carnegie Mellon Philosophy Department.

Invited Speakers:

Causal discovery from real-world data: relaxing the faithfulness assumption

Abstract: The so-called causal Markov and causal faithfulness assumptions are well-established pillars behind causal discovery from observational data. The first is closely related to the memorylessness property of dynamical systems, and allows us to predict observable conditional independencies in the data from the underlying causal model. The second is the causal equivalent of Ockham’s razor, and enables us to reason backwards from data to the causal model of interest.
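
In the standard notation (a compact summary of the two assumptions, using d-separation in the causal graph G and conditional independence in the distribution P):

    $X \perp_G Y \mid Z \ \Longrightarrow\ X \perp\!\!\!\perp_P Y \mid Z \qquad \text{(causal Markov)}$
    $X \perp\!\!\!\perp_P Y \mid Z \ \Longrightarrow\ X \perp_G Y \mid Z \qquad \text{(causal faithfulness)}$

The first licenses predicting independencies from a hypothesized model; the second licenses reasoning backwards from observed independencies to the model.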

A key motivation behind the workshop is the realisation that, though theoretically reasonable, in practice with limited data from real-world systems we often encounter violations of faithfulness. Some of these, like weak long-distance interactions, are handled surprisingly well by benchmark constraint-based algorithms such as FCI. Other violations may imply inconsistencies between observed (conditional) independence statements in the data, and cannot currently be handled effectively by most constraint-based algorithms. A fundamental question is whether our output retains any validity when not all our assumptions are satisfied, or whether it is still possible to reliably rescue parts of the model.

In this talk we introduce a novel approach based on a relaxed form of the faithfulness assumption that is able to handle many of the detectable faithfulness violations while ensuring that the output causal model remains valid. Effectively, we obtain a principled form of error-correction on observed (in)dependencies that can significantly improve both the accuracy and the reliability of the output causal models in practice. True, it cannot handle all possible violations, but the relaxed faithfulness assumption may be a promising step towards a more realistic, and so more effective, underpinning of the challenging task of causal discovery from real-world systems.

Tom Claassen's website

Uncovering constitutive relevance relations in mechanisms

Abstract: In this paper I argue that constitutive relevance relations in mechanisms behave like a special kind of causal relation in at least one important respect: Under suitable circumstances constitutive relevance relations produce the Markov factorization. Based on this observation one may wonder whether standard methods for causal discovery could be fruitfully applied to uncover constitutive relevance relations. This paper is intended as a first step into this new area of philosophical research. I investigate to what extent the PC algorithm, originally developed for causal search, can be used for constitutive relevance discovery. I also discuss possible objections and certain limitations of a constitutive relevance discovery procedure based on PC.

Alexander Gebharter's website

Progressive Methods for Causal Search

Constraint-based methods for causal search typically start out by conjecturing sparse graphs with few causal relations and are driven to introduce new causal relationships only when the relevant conditional independencies are statistically refuted. Algorithms such as PC proceed by nesting sequences of conditional independence tests. Although several such methods are known to converge to the true Markov equivalence class in the limit of infinite data, there are infinitely many other methods that would have the same limiting performance, but make drastically different decisions on finite samples. Some of these alternative methods may even reverse the usual preference for sparse graphs for arbitrarily many sample sizes. What, then, justifies the standard methods? Spirtes et al. [2000] note that it cannot be the usual comforts of hypothesis testing since the error probabilities of the nested tests cannot be given their usual interpretation.

I propose a new way of justifying nested-test methods for causal search that provides both a stronger justification than mere point-wise consistency and a new interpretation for the error probabilities of the constituent tests. Say that a point-wise consistent method for investigating a statistical question is progressive if, no matter which answer is correct, the chance that the method outputs the correct answer is strictly increasing with sample size. Progressiveness ensures that collecting a larger sample is a good idea. Although it is often infeasible to construct strictly progressive methods, progressiveness ought to be a regulative ideal. Say that a point-wise consistent method is α-progressive if, for any two sample sizes n1 < n2, the chance of outputting the correct answer does not decrease by more than α. I show that, for all α>0, there exist α-progressive methods for solving the causal search problem. Furthermore, every progressive method must systematically prefer sparser causal models. The methods I construct carefully manage the error probabilities of the nested tests to ensure progressive behavior. That provides a new, non-standard, interpretation for the error probabilities of nested tests.
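
In symbols (a compact restatement of the definitions above, nothing new): writing p_n for the chance that the method outputs the correct answer at sample size n,

    $\text{progressive:}\qquad n_1 < n_2 \implies p_{n_1} < p_{n_2}$
    $\alpha\text{-progressive:}\qquad n_1 < n_2 \implies p_{n_2} \ge p_{n_1} - \alpha$

Progressiveness guarantees that collecting a larger sample is always a good idea; α-progressiveness bounds how badly that guarantee can fail.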

Kasey Genin's website

How to Tackle an Extremely Hard Learning Problem: Learning Causal Structures from Non-Experimental Data without the Faithfulness Assumption or the Like

Most methods for learning causal structures from non-experimental data rely on some assumption of simplicity, the most famous of which is known as the Faithfulness condition. Without assuming such conditions to begin with, Jiji Zhang (Lingnan University) and I develop a learning theory for inferring the structure of a causal Bayesian network, and we use the theory to provide a novel justification of a certain assumption of simplicity that is closely related to Faithfulness. Here is the idea. With only the Markov and IID assumptions, causal learning from non-experimental data is notoriously hard: statistical consistency is unachievable. But we show that (1) it can still achieve a quite desirable "combined" mode of stochastic convergence to the truth: almost sure convergence to the true causal hypothesis with respect to almost all causal Bayesian networks, together with a certain kind of locally uniform convergence. We also show that (2) every learning algorithm achieving at least that combined mode of convergence has this property: it converges stochastically to the truth with respect to a causal Bayesian network N only if N satisfies a certain variant of Faithfulness, known as Pearl's Minimality condition, as if the learning algorithm were designed by assuming that condition. This new theorem, (1) + (2), explains why it is not merely optional but mandatory to assume the Minimality condition, or to proceed as if we accepted it, when experimental data are not available. I will explain the content of this new theorem, give a pictorial sketch of the proof, and defend the philosophical foundation of the underlying approach. In particular, I will argue that it is true to the spirit in which Gold and Putnam created learning theory in the 1960s. And I will argue that the proposed approach can be embraced by many in epistemology, including reliabilists and, perhaps somewhat surprisingly, even evidentialists and internalists.

Hanti Lin's website

Graphical Models for Missing Data: Recoverability, Testability and Recent Surprises!

The bulk of the literature on missing data employs procedures that are data-centric as opposed to process-centric and relies on a set of strong assumptions that are primarily untestable (e.g., Missing At Random; Rubin 1976). As a result, this area of research is wanting in tools to encode assumptions about the underlying data-generating process, methods to test these assumptions, and procedures to decide whether queries of interest are estimable and, if so, to compute their estimands.

We address these deficiencies by using a graphical representation called the "Missingness Graph," which portrays the causal mechanisms responsible for missingness. Using this representation, we define the notion of recoverability, i.e., deciding whether there exists a consistent estimator for a given query. We identify graphical conditions (necessary and sufficient) for recovering joint and conditional distributions and present algorithms for detecting these conditions in the missingness graph. Our results apply to missing data problems in all three categories -- MCAR, MAR and NMAR -- the last of which is relatively unexplored. We further address the question of testability, i.e., whether an assumed model can be subjected to statistical tests, considering the missingness in the data.
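
To illustrate recoverability (a standard example in this framework; the particular case is our choice, not necessarily the talk's): write R_X for the missingness indicator of X and X* for its observed proxy, with X* = X whenever R_X = 0 (i.e., X is observed). If Y is fully observed and the missingness graph entails $X \perp\!\!\!\perp R_X \mid Y$, then the joint distribution is recoverable:

    $P(x, y) \;=\; P(y)\,P(x \mid y) \;=\; P(y)\,P(x \mid y, R_X = 0) \;=\; P(y)\,P(x^{*} \mid y, R_X = 0)$

and every factor on the right is consistently estimable from the observed portion of the data.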

Furthermore, viewing the missing data problem from a causal perspective has ushered in several surprises. These include recoverability when variables are causes of their own missingness, testability of the MAR assumption, alternatives to iterative procedures such as the EM algorithm, and the indispensability of causal assumptions for large sets of missing data problems.

Karthika Mohan's website

Lessons for causal discovery from Markov models

Causal discovery focuses on learning conditional independence relations in multivariate data. Results on independence testing seem to imply that this is very hard and perhaps unfeasible without strict assumptions. In this talk, I will go over analogies between the causal discovery problem and problems of learning the order and structure of different kinds of Markov models of time series and dynamical systems, which is also all about finding conditional independence relations in data. Here, however, there are a lot of positive results about model-selection consistency, and I will try to explore what insights Markov model discovery might have for causal discovery.

Cosma Shalizi's website

From Causal Inference to Gene Regulation

A recent breakthrough in genomics makes it possible to perform perturbation experiments at a very large scale. The availability of such data motivates the development of a causal inference framework that is based on observational and interventional data. We first characterize the causal relationships that are identifiable from interventional data. In particular, we show that imperfect interventions, which only modify (i.e., without necessarily eliminating) the dependencies between targeted variables and their causes, provide the same causal information as perfect interventions, despite being less invasive. Second, we present the first provably consistent algorithm for learning a causal network from a mix of observational and interventional data. This requires us to develop new results in geometric combinatorics. In particular, we introduce DAG associahedra, a family of polytopes that extend the prominent graph associahedra to the directed setting. We end by discussing applications of this causal inference framework to the estimation of gene regulatory networks.

Caroline Uhler's website

Flagpoles, Anyone? Independence, Invariance and the Direction of Causation

This talk will explore some recent ideas concerning the directional features of explanation (or causation). I begin by describing an ideal of explanation, loosely motivated by some remarks in Wigner's Symmetries and Reflections. It turns out (or so I claim) that broadly similar ideas are used in recent work from the machine learning literature to infer causal direction (Janzing et al. 2012).

One such strategy for inferring causal direction makes use of what I call assumptions about variable independence. Wigner's version of this idea is that when we find dependence or correlation among initial conditions, we try to trace this back to further initial conditions or causes that are independent; we assume, as a kind of default, that it will be possible to find such independent causes. Applied to inferring the direction of causation, a related thought is that if a relationship involves three (or more) measured variables X, Y, and Z, or two variables X and Y and an unmeasured noise term U, and two of these are independent but no other pairs are independent, then it is often reasonable to infer that the correct causal or explanatory direction runs from the independent pair to the third variable.
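
A toy rendering of this strategy (our own Python sketch, not an example from the talk; non-Gaussian noise is assumed, as in the Janzing et al. line of work, so that the asymmetry is statistically visible): in the model Y = 2X + U with X and U independent, X and U form the independent pair, so the inferred direction runs from them to Y. Regression exposes the asymmetry: fitting in the causal direction leaves residuals independent of the regressor, while the reverse fit does not.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.uniform(-1, 1, size=n)   # non-Gaussian cause
    u = rng.uniform(-1, 1, size=n)   # independent non-Gaussian noise
    y = 2.0 * x + u                  # true direction: X -> Y

    def residual(a, b):
        # residual of a after a linear fit of a on b
        return a - np.polyval(np.polyfit(b, a, 1), b)

    fwd = residual(y, x)   # approximately u: independent of x
    bwd = residual(x, y)   # uncorrelated with y, but not independent of it
    # crude dependence check: correlate the squared quantities
    print(stats.pearsonr(x**2, fwd**2))   # r near 0
    print(stats.pearsonr(y**2, bwd**2))   # r noticeably nonzero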

A second strategy, which makes use of what I call value/relationship independence, exploits the idea (also found in Wigner) that we expect the causal or explanatory relationship between two variables (X → Y) to be "independent" of the value(s) of the variables that figure in the X, or cause, position of the relationship X → Y. Here the relevant notion of independence (which of course is not statistical independence) is linked to a notion of value-invariance described in Woodward (2003): if the X → Y relationship is causal, we expect that relationship to be invariant under changes in the value of X. According to this strategy for inferring causal direction, if X → Y is independent of (or invariant under) changes in X, and the relationship in the opposite direction (Y → X) is not invariant under changes in Y, we should infer, ceteris paribus, that the causal direction runs from X to Y rather than from Y to X.

As said earlier, I take both strategies from the machine learning literature—they are not original with me. But I believe that both strategies have a broader justification in terms of interventionist ideas I have discussed elsewhere as well as methodological ideas like those found in Wigner. I believe it may be useful to put these ideas within this more general context. I will also argue that these ideas lead to a satisfying solution to Hempel’s famous flagpole problem which asks why we think that the length of a flagpole can be used to explain its shadow but not conversely.

References

  • Janzing, D., Mooij, J., Zhang, K., Lemeire, J., Zscheischler, J., Daniušis, P., Steudel, B., and Schölkopf, B. (2012). "Information-geometric Approach to Inferring Causal Directions." Artificial Intelligence 182-183: 1-31.
  • Wigner, E. (1967). Symmetries and Reflections.
  • Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation.

James F. Woodward's website

SAT-based causal discovery of semi-Markovian models under weaker assumptions

Abstract: For constraint-based discovery of Markovian causal models (a.k.a. causal discovery with causal sufficiency), it has been shown (1) that the assumption of Faithfulness can be weakened in various ways without, in a sense, loss of its epistemological purchase, and (2) that weakening Faithfulness may help to speed up methods based on Boolean satisfiability (SAT) solvers. In this talk, I discuss (1) and (2) regarding causal discovery of semi-Markovian models (a.k.a. causal discovery without causal sufficiency). Time permitting, I will also examine the epistemological significance of the fact that, unlike Faithfulness, weaker assumptions do not necessarily survive marginalization.

Jiji Zhang's website

Causal modeling, statistical independence, and data heterogeneity

Causal discovery aims to reveal the underlying causal model from observational data. Recently, various types of independence, including conditional independence between observed variables and independence between causes and noise, have been exploited for this purpose. In this talk I will show how causal discovery and latent variable learning (or concept learning) can greatly benefit from heterogeneity or nonstationarity of the data: data heterogeneity improves the identifiability of the causal model, and even allows us to identify the true causal model in the presence of a large number of hidden variables that are causally related. Finally, I will discuss the implications of these results for machine learning with deep structure.

Kun Zhang's website


NASSLLI 2018

June 25-29, 2018

We are excited to announce that in June 2018, the Department of Philosophy, with support from across the campus, will host the upcoming North American Summer School in Logic, Language and Information. NASSLLI is a biennial event inaugurated in 2001, which brings together faculty and graduate students from around the world, for a week of interdisciplinary courses on cutting edge topics at the intersection of philosophy, linguistics, computer science and cognitive science. The Summer School aims to promote discussion and interaction between students and faculty in these fields. High level introductory courses allow students in one field to find their way into related work in another field, while other courses focus on areas of active research. With its focus on formalization and on cross-disciplinary interactions, NASSLLI is a natural fit for us here at CMU. We are delighted to be hosting. The summer school will take place June 25-29, 2018, with preparatory events June 23-24.


Information, Causal Models and Model Diagnostics

April 14-15, 2018

Co-sponsored by the Info-Metrics Institute and Dietrich College of Humanities & Social Sciences

The fundamental concepts of information theory are being used for modeling and inference of problems across most disciplines, such as biology, ecology, economics, finance, physics, political science, and statistics (for examples, see the Fall 2014 conference celebrating the fifth anniversary of the Info-Metrics Institute).

The objective of the Spring 2018 workshop is to study the interconnection between information, information processing, modeling (or model misspecification and diagnostics), and causal inference. In particular, it focuses on modeling and causal inference from an information-theoretic perspective.

Background: Generally speaking, causal inference deals with inferring that A causes B by looking at information concerning the occurrences of both, while probabilistic causation constrains causation in terms of probabilities and conditional probabilities given interventions. In this workshop we are interested in both. We are interested in studying the modeling framework, including the necessary observed and unobserved information, that allows causal inference. In particular, we are interested in studying modeling and causality within the framework of info-metrics: the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. Unlike more traditional inference, causal analysis goes a step further: its aim is to infer not only beliefs or probabilities under static conditions, but also the dynamics of beliefs under changing conditions, such as the changes induced by treatments or external interventions.

This workshop will (i) provide a forum for the dissemination of new research in this area and (ii) stimulate discussion among researchers from different disciplines. The topics of interest include both the more philosophical and logical concepts of causal inference and modeling, and the more applied theory of inferring causality from observed information. We welcome all topics within the intersection of info-metrics, modeling, and causal inference, but we especially encourage new studies on information or information-theoretic inference in conjunction with causality and model specification (and misspecification). These topics may include, but are not limited to:

  • Causal Inference and Information
  • Probabilistic Causation and Information
  • Nonmonotonic Reasoning, Default Logic and Information-Theoretic Methods
  • Randomized Experiments and Causal Inference
  • Nonrandomized Experiments and Causal Inference
  • Modeling, Model Misspecification and Information
  • Causal Inference in Network Analysis
  • Causal Inference, Instrumental Variables and Information-Theoretic Methods
  • Granger Causality and Transfer Entropy
  • Counterfactuals, Causality and Policy Analysis in Macroeconomics


Category Theory Octoberfest

October 28-29, 2017

View slides from Dana Scott’s Talk: What is Explicit Mathematics?

The 2017 Category Theory Octoberfest will be held on the weekend of Saturday, October 28 and Sunday, October 29 at Carnegie Mellon University in Pittsburgh. Following the tradition of past Octoberfests, this is intended to be an informal meeting, covering all areas of category theory and its applications.

Talks by PhD students and young researchers are particularly encouraged!

Details and travel information can be found here:

https://www.andrew.cmu.edu/user/awodey/CToctoberfest/Octoberfest.html

Registration:

There is no registration fee. Registration is optional, but participants are requested to contact the organizers in advance, especially if they would like to give a talk. To register and/or submit a talk, please email the organizers with your name, whether you will give a talk, and, if so, its title.

Organizers:

Steve Awodey
Jonas Frey


Modality and Method Workshop

June 9 and 10, 2017 - Center for Formal Epistemology
Margaret Morrison 103

This workshop showcases cutting-edge applications of modality to an intriguing range of methodological issues, including reference, action, causation, information, and the scientific method. Following the tradition of CFE workshops, it is structured to provide ample time for real interaction with, and between, the speakers.

All are welcome to attend.
For more information, please email the organizers.

Workshop Speakers:

Alexandru Baltag
Oxford University

Title: Knowing Correlations: how to use questions to answer other questions

Abstract: Informationally, a question can be encoded as a variable, taking various values ("answers") in different possible worlds. If, in accordance with the recent trend towards an interrogative epistemology, "To know is to know the answer to a question" (Schaffer), then we are led to paraphrase the Quinean motto: To know is to know the value of a variable. There are two issues with this assertion. First, questions are never investigated in isolation: we answer questions by reducing them to other questions. This means that the proper object of knowledge is the uncovering of correlations between questions. To know is to know a functional dependence between variables.

Second, when talking about empirical questions/variables, the exact value/answer might not be knowable; instead, only "feasible answers" can be known. This suggests a topology on the space of possible values, in which the open neighborhoods of the actual value represent the feasible answers (knowable approximations of the actual value). A question Q epistemically solves question Q' if every feasible answer to Q' can be known given some good enough feasible answer to Q. I argue that knowability in such an empirical context amounts to the continuity of the functional correlation. To know is to know a continuous dependence between variables.

I investigate a logic of epistemic dependency, that can express knowledge of functional dependencies between (the values of) variables, as well as dynamic modalities for learning new such dependencies. This dynamic captures the widespread view of knowledge acquisition as a process of learning correlations (with the goal of eventually tracking causal relationships in the actual world).

There are interesting formal connections with Dependence Logic, Inquisitive Logics, van Benthem's Generalized Semantics for first order logic, Kelly's notion of gradual learnability (as well as the usual learning-theoretic notion of identifiability in the limit), and philosophically with Situation Theory and the conception of "information-as-correlation". 

Adam Bjorndahl
Carnegie Mellon University

Title: Logic and Topology for Knowledge, Knowability, and Belief

Abstract: In recent work, Stalnaker (2006) proposes a logical framework in which belief is realized as a weakened form of knowledge. Building on Stalnaker's core insights, and using frameworks developed in (Bjorndahl 2016) and (Baltag et al. 2016), we employ topological tools to refine and, we argue, improve on this analysis. The structure of topological subset spaces allows for a natural distinction between what is known and (roughly speaking) what is knowable; we argue that the foundational axioms of Stalnaker’s system rely intuitively on both of these notions. More precisely, we argue that the plausibility of the principles Stalnaker proposes relating knowledge and belief relies on a subtle equivocation between an "evidence-in-hand" conception of knowledge and a weaker "evidence-out-there" notion of what could come to be known. Our analysis leads to a trimodal logic of knowledge, knowability, and belief interpreted in topological subset spaces in which belief is definable in terms of knowledge and knowability. We provide a sound and complete axiomatization for this logic as well as its uni-modal belief fragment. We also consider weaker logics that preserve suitable translations of Stalnaker's postulates, yet do not allow for any reduction of belief. We propose novel topological semantics for these irreducible notions of belief, generalizing our previous semantics, and provide sound and complete axiomatizations for the corresponding logics.

This is joint work with Aybüke Özgün.

Michael Caie
University of Pittsburgh

Title: Classical Opacity

Abstract: In Frege's well-known example, Hesperus was known by the Greeks to rise in the evening, and Phosphorus was not known by the Greeks to rise in the evening, even though Hesperus is Phosphorus. A predicate F such that for some a and b, a=b, Fa and not Fb is said to be opaque. Opaque predicates appear to threaten the classical logic of identity. The responses to this puzzle in the literature either deny that there are cases of opacity in this sense, or deny that one can use classical quantificational logic when opacity is in play. In this paper we motivate and explore the view that there are cases of opacity and that classical quantificational logic is valid even when quantifying in to opaque contexts. We develop the logic of identity given these assumptions in the setting of higher-order logic. We identify a key choice-point for such views, and then develop alternative theories of identity depending on how one makes this choice. In closing, we discuss arguments for each of the two theories.

Melissa Fusco
Columbia University

Title: Deontic Modality and Classical Logic

Abstract: My favored joint solution to the Puzzle of Free Choice Permission (Kamp 1973) and Ross's Paradox (Ross 1941) involves (i) giving up the duality of natural language deontic modals, and (ii) moving to a two-dimensional propositional logic which has a classical Boolean character only as a special case.  In this talk, I'd like to highlight two features of this radical view: first, the extent to which Boolean disjunction is imperiled by other natural language phenomena not involving disjunction, and second, the strength of the general position that natural language semantics must treat deontic, epistemic, and circumstantial modals alike.

Dmitri Gallow
University of Pittsburgh

Title: Learning and Value Change

Abstract: Accuracy-first accounts of rational learning attempt to vindicate the intuitive idea that, while rationally-formed belief need not be true, it is nevertheless likely to be true. To this end, they attempt to show that the Bayesian’s rational learning norms are a consequence of the rational pursuit of accuracy. Existing accounts fall short of this goal, for they presuppose evidential norms which are not and cannot be vindicated in terms of the single-minded pursuit of accuracy. They additionally fail to vindicate the Bayesian norm of Jeffrey conditionalization; the responses to uncertain evidence which they do vindicate are not epistemically defensible. I propose an alternative account according to which learning rationalizes changes in the way you value accuracy. I show that this account vindicates the Bayesian’s norm of conditionalization in terms of the single-minded pursuit of accuracy, so long as accuracy is rationally valued.

Franz Huber
University of Toronto

Title: The Modality underlying Causality

Abstract: I will discuss the relationship between extended causal models, which represent two modalities (causal counterfactuals and normality), and counterfactual models, which represent one modality (counterfactuals). 

It is shown that, under a certain condition, extended causal models that are acyclic can be embedded into counterfactual models. The relevant condition is reminiscent of Lewis's (1979) "system of weights or priorities" that governs the similarity relation of causal counterfactuals. In concluding, I will sketch modal idealism, a view according to which the causal relationship is a mind-dependent construct.

Kevin T. Kelly and Konstantin Genin
Carnegie Mellon University

Title: What is Statistical Deduction?

Abstract: The philosophy of induction begins by drawing a line between deductive and inductive inference. That distinction is clear when empirical information can be modeled as a non-trivial proposition that restricts the range of theoretical possibilities—inference is deductive when every possibility of error is excluded by the premise. Recently, topological methods have been used with success to characterize the boundary between induction and deduction for propositional information of that kind. The basic idea is that the possible propositional information states constitute a topological space in which the deductively verifiable propositions are open sets. Then refutable propositions are closed sets, decidable propositions are closed-and-open, and more general topological concepts characterize the hypotheses that are decidable, verifiable, or refutable in the limit. A new justification of inductive inference emerges thereby—an inductive method is justified insofar as it achieves the best possible sense of success, given the topological complexity of the inference problem faced. That revealing, topological approach to empirical information does not apply directly to statistical inference, because statistical information typically rules out no theoretical possibilities whatever—the sample might just be very unlucky. For that reason, the received view in the philosophy of science has been that all statistical inference is inductive. However, some statistical inferences are evidently very similar to deductive inferences—e.g., rejecting a sharp null hypothesis or generating a confidence interval—whereas others are more similar to inductive inferences—e.g., accepting a sharp null hypothesis or selecting a statistical model. The basis for the analogy is that statistically deductive inferences are "nearly deductive," in the sense that they are performed with a guaranteed low chance of error. The key to connecting the topological-propositional perspective on information with statistics is, therefore, to identify the unique topology for which the propositions that are verifiable with low chance of error are exactly the open propositions. In this talk, we show how to do just that. The result opens the door to a free flow of logical/topological insights into statistical methodology.
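
The abstract's dictionary, stated compactly (a restatement of the above, with the statistical clause phrased on one natural reading of "low chance of error"):

    $\text{verifiable} \leftrightarrow \text{open}, \qquad \text{refutable} \leftrightarrow \text{closed}, \qquad \text{decidable} \leftrightarrow \text{closed-and-open}$

The statistical analogue sought is then the unique topology on statistical models in which a proposition H is open just in case, for every bound α > 0, H is verifiable by a method whose chance of error stays below α.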

Tamar Lando
Columbia University
Title: Topology and Measure in Logics for Point-Free Space

Workshop on Exploitation and Coercion

Nov 4-5, 2016 - Center for Ethics & Policy

The Center for Ethics & Policy at Carnegie Mellon University invites paper abstracts for an inaugural Workshop on Ethics and Policy to be hosted November 4-5, 2016 at the CMU campus in Pittsburgh, PA. We are pleased to welcome Richard Arneson as our keynote speaker. In celebration of the 20th Anniversary of the publication of Alan Wertheimer's seminal work Exploitation, the theme for our inaugural workshop is "Exploitation and Coercion".

Download CFP


Attitudes and Questions Workshop

June 10 and 11, 2016 - Center for Formal Epistemology

Question embedding in natural language allows a subject to be related to a question by either a (traditionally) propositional attitude like knowledge and forgetting, or an (apparently) inherently question-oriented predicate like asking or wondering. Attitudes held of questions are an important locus of research into the semantics of both interrogative clauses and clause-embedding verbs, closely connected with the notion of the answerhood conditions of a question, and with the operations of composition involved in combining these types of predicates with semantically heterogeneous arguments. The attitudes that relate us to questions are also of considerable epistemic interest, touching on the nature of the knowledge relation and on the way that questions structure our inquiries. This workshop aims to bring together a diverse group of experts on the semantic and epistemic issues raised by these phenomena, to promote exchange of theoretical perspectives and approaches, and to help move forward current work on questions and attitudes.

Workshop Schedule

Workshop Speakers:

Yimei Xiang

Harvard University

Sensitivity to false answers in indirect questions

Abstract:
Interpretations of indirect questions exhibit sensitivity to false answers (FAs). For instance, for "John knows who came" to be true, John must have no false belief as to who came. This paper focuses on the following two facts, which challenge the currently dominant view that FA-sensitivity is derived by exhaustifications (Klinedinst & Rothschild 2011): first, FA-sensitivity is involved in interpreting indirect mention-some questions (e.g., "John knows where we can buy an Italian newspaper") (George 2011, 2013); second, FA-sensitivity is concerned with all types of false answers, not just those that can be complete.

Carlotta Pavese

Duke University

Reducibility, George's Challenge, and Intermediate Readings: In Search of an Alternative Explanation

Abstract:
In my talk I consider a phenomenon that has been used in arguments against the reducibility of knowledge-wh to knowledge-that (George 2013). I defend a new account of the phenomenon that is compatible with reducibility and I argue that it is explanatorily more satisfying than alternative reducibility-unfriendly analyses.

Danny Fox

Massachusetts Institute of Technology

Mention Some, Reconstruction, and Free Choice

Abstract:
The goal of this talk is to present an account of the distribution of "mention some" readings of questions (MS) and to discuss some of the challenges that this account faces. The account will be based on the observation that MS arises only when an existential quantifier intervenes between a wh-phrase and its trace (cf. George 2011). This observation will be used to argue that reconstruction is necessary for MS and that the notion of exhaustification that reveals itself in the presence of existential quantifiers (resulting in so-called Free Choice effects) is a crucial component as well.

Alexandre Cremers

École Normale Supérieure,
Laboratoire de Sciences Cognitives et Psycholinguistique (LSCP)

Plurality effects and exhaustive readings of embedded questions


Abstract:
Questions share many properties with plural nouns. Most famously, Berman (1991) showed that embedded questions can be modified by adverbs of quantity such as 'mostly' or 'in part' (quantificational variability effect). They also give rise to cumulative readings (Lahiri, 2002), and homogeneity effects (observed but not implemented). It has also been shown recently that questions embedded under verbs like 'know' are ambiguous between different exhaustive readings (weak, strong, intermediate). This ambiguity is usually seen as an orthogonal issue, and most recent literature on the various levels of exhaustivity completely ignores plurality effects. I will show how an updated version of Lahiri's (2002) proposal can be combined with ideas from Klinedinst & Rothschild (2011) to yield a theory of strong and intermediate readings on par with recent theories of plurality effects of definite plurals (e.g., homogeneity, cumulative readings) and at the same time compatible with recent experimental results.

Benjamin Spector

Institut Jean Nicod & Ecole Normale Supérieure

Predicting the presuppositions triggered by responsive predicates

Abstract:
Most responsive predicates (predicates which can take both a declarative and an interrogative as an argument, e.g. 'know') are presupposition triggers when they embed declaratives (e.g., 'x knows that p' presupposes p). This raises the question of how the presuppositions triggered by responsive verbs when they take a declarative complement are inherited when such verbs take an interrogative complement (assuming that the interrogative-taking use is derived from the declarative-taking use). In Spector & Egré (2015), we made a proposal which seems empirically quite well motivated, but which is stipulative, in that it is not derived from an independently motivated theory of presupposition projection. In this talk, I will show, focusing mostly on polar questions, that it is at the very least very hard to come up with a theory which simultaneously satisfies the following two desiderata:
a) providing a general and uniform semantics for embedded questions under responsive predicates in which the meaning of P+interrogative is deducible from that of P+declarative.
b) deriving the presuppositions of the 'P+interrogative' construction on the basis of current explanatory approaches to presupposition projection.
I will discuss the implications of this observation for theories of embedded questions.

Konstantin Genin

Carnegie Mellon University

Simplicity and Scientific Questions

Abstract:

Ockham’s razor instructs the scientist to favor the simplest theory compatible with current information. There is a broad consensus that simplicity is a principal consideration guiding inductive inference in science. But that familiar observation raises several subtle questions. When is one theory simpler than another? And why should one prefer simpler theories if there is no guarantee that simpler theories are — in some objective sense — more likely to be true? We present a model of empirical inquiry in which simplicity relates answers to an empirical question, and is grounded in the underlying information topology, the topological space generated by the set of possible information states inquiry might encounter. We show that preferring simple theories is a necessary condition for optimally direct convergence to the truth, where directness consists in avoiding unnecessary cycles of opinion on the way to the truth. Our approach relates to linguistics in two ways. First, it illustrates how questions under discussion can shape simplicity and, hence, the course of theoretical science. Second, it explains how, and in what sense, empirical simplicity can serve as a theoretical guide in empirical linguistics.

B. R. George

Carnegie Mellon University

The False Belief Effect for know wh and its Conceptual Neighbors

Abstract:
Spector (2005, 2006) and George (2011, 2013) suggest that the truth of know-wh ascriptions may depend on which false beliefs the subject of 'know' holds, independent of their propositional knowledge. In this talk, I try to introduce the problem and to provide a (mostly informal) overview of its apparent connections with exhaustification, presupposition, and the semantics of question-embedding predicates other than 'know'. I try to identify some relevant issues and perspectives, and to highlight a few potential challenges and promising generalizations.

Jonathan Phillips

Harvard University



"Differentiating Contents" CFE/Linguistics Workshop

Saturday, December 5, 2015 - Carnegie Mellon, Baker Hall, Dean’s Conference Room, 154R

A variety of phenomena have motivated researchers to distinguish between different types of linguistic content. One classical distinction is that made by Austin (1962) and Searle (1969) between the propositional content of utterances and their speech act force. Another classical distinction is that between assertoric and presupposed content (Frege 1893, Strawson 1950, Stalnaker 1974, inter alia). In recent years, a new distinction between at-issue and not at-issue content (Potts 2005, Simons et al. 2010) has been introduced, to some extent offered as a replacement for the asserted/presupposed distinction. One empirical domain where the at-issue/not at-issue distinction has been utilized by some researchers is in the study of evidentials, a category of linguistic forms which provide information about the speaker’s evidential relation to the (remaining) content of her utterance.

This one day workshop will bring together researchers with intersecting work on the nature of these distinctions, on the empirical evidence for them, and on how to model them.

Workshop Schedule


Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2015)
Co-sponsored by the Center for Formal Epistemology

June 4-6, 2015 - Carnegie Mellon


Pitt/CMU Graduate Student Conference

March 20-21, 2015 - Carnegie Mellon
Locations: Mellon Institute, Room 348 (March 20) and Margaret Morrison, Room A14 (March 21)


Workshop on Simplicity and Causal Discovery
Co-sponsored by the Center for Formal Epistemology

June 6-8, 2014 - Carnegie Mellon


Modal Logic Workshop: Consistency and Structure
Co-sponsored by the Center for Formal Epistemology

Saturday, April 12, 2014 - Carnegie Mellon


Trimester: Semantics of Proofs and Certified Mathematics at the Institut Henri Poincaré

April 7 - July 11, 2014 - Paris, France


Workshop: Philosophy of Physics

September 7, 2013
With Hans Halvorson (Princeton University) and James Weatherall (UC Irvine)


Conference: Type Theory, Homotopy Theory, and Univalent Foundations

September 23-27, 2013 - Barcelona, Spain


Workshop: Case Studies of Causal Discovery with Model Search

October 25-27, 2013 - Carnegie Mellon