Carnegie Mellon University
2014-2015

2014-2015 Lectures & Colloquia

Fall


Thursday, September 25, 2014, Philosophy Colloquium

Patrick Forber, Tufts University
A spiteful wrench in the works: rethinking the evolution of (human) cooperation
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

The evolution of cooperation presents a puzzle: why pay a cost to help another when I do better by looking after my own interests? This puzzle has a number of solutions—group selection, conditional strategies, punishment, and so on—and philosophers of science have played some role in the development and refinement of these solutions, in part because of the complexity of the controversy surrounding group selection, but also because of the importance of cooperation to understanding the evolution of human social behavior, and even human morality. However, many explanations of cooperation overlook the role spite can play in the evolution of social behavior. In this talk I will present some results on the evolution of spite developed with my collaborator, Rory Smead, and discuss the philosophical consequences of these results. In particular, I will focus on how we measure the fitness of social behavior, and the connection, if any, to human evolution and morality.
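
A toy numerical illustration of the relative-fitness point (invented payoff numbers, not Forber and Smead's model): paying a cost to harm a competitor lowers one's absolute payoff, but in a small population it can still raise one's relative fitness.

    def payoffs(n, base=10.0, c=1.0, d=3.0):
        """One spiteful agent among n agents; all others are neutral."""
        spiteful = base - c * (n - 1)   # the spiteful agent pays c per partner harmed
        neutral = base - d              # each neutral agent is harmed once
        mean = (spiteful + (n - 1) * neutral) / n
        return spiteful, neutral, spiteful / mean   # last value: relative fitness

    for n in (2, 10):
        s, neu, rel = payoffs(n)
        print(f"n={n}: spiteful={s}, neutral={neu}, relative fitness={rel:.2f}")
    # n=2:  spiteful=9.0, neutral=7.0, relative fitness=1.12  (spite pays)
    # n=10: spiteful=1.0, neutral=7.0, relative fitness=0.16  (spite fails)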


Thursday, October 2, 2014, Philosophy Colloquium

Sherri Roush, King's College London
Rational Self-Doubt: The Re-calibrating Bayesian
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

If one is highly confident that #3 in the line-up is the murderer from having witnessed the crime, and then learns of the substantial experimental psychology evidence that human beings are unreliable and overconfident eyewitnesses, is one thereby obligated to reduce one's confidence about #3? How far, and why? This question corresponds to a challenge posed in 1980 to the effect that a Bayesian cannot coherently re-calibrate on the basis of feedback about his prior performance. I represent the feedback as second-order evidence, and generalize first-order Bayesian rationality constraints away from idealizations in a principled way, to give a rule for proportionally revising first-order beliefs on the basis of second-order evidence about one's reliability. It is a conditionalization rule that re-calibrates the subject and sidesteps standard objections to calibration. It shows why taking doubt about one's own judgment seriously need not end in incoherence or runaway skepticism, and what the added value of this kind of evidence is.
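
A schematic illustration of the recalibration idea (a toy, not Roush's actual rule; the calibration data are invented): second-order evidence enters as an empirical calibration curve recording how often one's past judgments, held at a given confidence, turned out true.

    # Invented track record: of past judgments held with confidence p,
    # what fraction were actually true?
    CALIBRATION = {0.5: 0.50, 0.7: 0.62, 0.9: 0.70, 0.99: 0.75}

    def recalibrate(credence, weight=1.0):
        """Shift a stated credence toward its observed hit rate.

        weight = 1.0 defers fully to the track record; smaller weights mix
        the two, reflecting how strongly the second-order evidence is taken
        to bear on this particular judgment.
        """
        return (1 - weight) * credence + weight * CALIBRATION[credence]

    print(recalibrate(0.9))        # 0.7: full deference to the track record
    print(recalibrate(0.9, 0.5))   # ~0.8: split the difference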


Thursday, October 16, 2014, Philosophy Colloquium

Louis Narens, UC Irvine
Context and Probability
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

Standard probability theory has been enormously productive in science. But there are instances where it fails to provide adequate concepts and mathematical methods; decision theory and quantum mechanics supply examples. In these cases, context can interact with the phenomena of interest in ways that standard probability theory does not productively capture: it fails to provide insights and methods for useful modeling, and apparently misses key concepts. This talk will present alternatives to standard probability theory that change the logical structure of the event space from Boolean algebras to other algebras in order to accommodate context, and will apply some of the results involving the new algebras to rationality issues in philosophy and the behavioral sciences.
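
A standard example of the kind of structural change at issue (the quantum case, which need not be the algebras Narens proposes): when events are the closed subspaces of a Hilbert space, the distributive law

p ∧ (q ∨ r) = (p ∧ q) ∨ (p ∧ r)

can fail. For instance, with p, q, r three distinct lines through the origin in the plane, q ∨ r is the whole plane, so the left side is p, while p ∧ q and p ∧ r are both the zero subspace, so the right side is the zero subspace.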


Thursday, November 6, 2014, Philosophy Colloquium

Harvey Lederman, University of Oxford
Uncommon Knowledge
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

A group commonly knows a proposition if all know it, all know that all know it, and so on (and similarly for common belief). An important argument for the possibility of achieving common knowledge and belief is an abductive one: these states provide the best explanation of human behavior, and in particular coordination behavior. This paper argues that the abductive argument for common knowledge and belief fails.
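
In the standard epistemic-logic notation (a textbook gloss, not quoted from the paper), with K_i p for "agent i knows p" and E p defined as the conjunction of K_i p over all i in the group ("everyone knows p"), common knowledge is the infinite conjunction

C p = E p ∧ E E p ∧ E E E p ∧ …

or, equivalently, the greatest fixed point satisfying C p ↔ E(p ∧ C p).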


Friday, November 7, 2014, Philosophy Colloquium

Harvey Lederman, University of Oxford
Prospects for a Naive Theory of Classes (with Hartry Field and Tore Fjetland Øgaard)
12pm-1:30pm, Doherty Hall 4303

We examine the prospects for a naïve theory of classes, in which full “naïve” comprehension and an extensionality rule are maintained by weakening the background logic. Without extensionality, proving naïve comprehension consistent is formally analogous to proving naïve truth consistent, and in recent years much progress has been made on the latter question. But there is no natural analog for extensionality in the case of truth, so the question arises whether these logics for reasoning about truth can also be shown consistent with a form of extensionality. In a series of papers, and in his 2006 book, Ross Brady has presented various theories of naïve classes. We begin by providing a simpler, more accessible version of Brady’s proof of the consistency of these theories. Our new presentation of Brady then makes it easy to see how Brady’s result can be generalized to apply to certain logics which have a modal-like semantics given using four-valued, as opposed to three-valued worlds. But we argue that even these improved Brady-like logics are too weak for reasoning about classes. Worse yet, we conclude with an impossibility result which shows fairly decisively that one cannot hope to do significantly better.
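
For reference, the two principles in play can be stated as follows (a standard formulation, not quoted from the paper). Naïve comprehension: for each formula φ(x) there is a class {x : φ(x)} with

∀x ( x ∈ {x : φ(x)} ↔ φ(x) ),

and the extensionality rule: from ∀x (x ∈ a ↔ x ∈ b), infer a = b. Over classical logic, comprehension alone already yields Russell's paradox via {x : x ∉ x}, which is why a weakened background logic is required.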


Thursday, November 13, 2014, Philosophy Colloquium

Wilfried Sieg, Carnegie Mellon University
What is the concept of computation?
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

The Church-Turing Thesis asserts that particular mathematical notions are adequate to represent informal notions of effective calculability or mechanical decidability. I first sketch contexts that called for such adequate mathematical notions, namely, problems in mathematics (e.g., Hilbert's 10th problem), decision problems in logic (e.g., the Entscheidungsproblem for first-order logic), and the precise characterization of formality (for the general formulation of Gödel's incompleteness theorems).

The classical approach to the effective calculability of number theoretic functions led, through Gödel and Church, to a notion of computability in logical calculi and metamathematical absoluteness theorems. The classical approach to the mechanical decidability of problems concerning syntactic configurations led, through Turing and Post, to a notion of computability in formal calculi (canonical systems) and metamathematical representation theorems.

Particular features of formal calculi motivate the formulation of an abstract concept of a computable dynamical system. This concept articulates finiteness and locality conditions that are satisfied by the standard concrete notions of computation. In addition, a representation theorem can be established: Turing machines can simulate the computations of any concrete system falling under the abstract concept. I sketch a generalization of this approach to obtain computable parallel dynamical systems. Some applications will conclude my discussion.


Thursday, November 20, 2014, Philosophy Colloquium

Philip Ehrlich, Ohio University
Cantorian and non-Cantorian Theories of Finite, Infinite and Infinitesimal Numbers and the Unification Thereof
Reception: 4:10 DH 4301, Talk: 4:45-6:00 BH A53

In addition to Cantor's well-known systems of infinite cardinals and ordinals, there was a variety of other important though less well-known systems of actual infinite numbers that emerged in the decades bracketing the turn of the twentieth century. Some grew out of work on the rates of growth of real functions, and others emerged from the pioneering investigations of non-Archimedean ordered algebraic and geometric systems. Unlike Cantor's number systems, which were designed to answer questions such as

How many members are there in a given infinite set?
or
What position is occupied by a member of an infinite well-ordered set?

these non-Cantorian number systems were respectively designed to answer questions such as

How rapidly do the real-valued functions ln(x) and e^x go to infinity compared to the real-valued function x^2, as well as to each other?
and
How long is a given line segment that is infinitely large or infinitesimally small compared to a unit segment?
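
To fix ideas, the first of these questions has a familiar precise core: since

lim_{x→∞} ln(x)/x^2 = 0 and lim_{x→∞} x^2/e^x = 0,

ln(x) goes to infinity more slowly than x^2, which in turn goes more slowly than e^x; number systems in the tradition of du Bois-Reymond's orders of infinity were built to register exactly such comparisons.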

In recent decades, these number systems have been enjoying a robust resurgence of interest as part of a more general interest in non-Archimedean ordered fields and relational expansions thereof. Unlike Cantor's number systems, which solely embrace finite numbers alongside his well-known infinite numbers, the just-said non-Cantorian number systems, like the better-known hyperreal number systems associated with Abraham Robinson's nonstandard approach to analysis, embody finite and infinite as well as infinitesimal numbers.

In [Ehrlich 2012], we show how the above-mentioned Cantorian and non-Cantorian number systems admit a striking unification in the author's [Ehrlich 2001] algebraico-tree-theoretic approach to J. H. Conway's system of surreal numbers. Building on the above, in this paper we will provide introductions to the aforementioned non-Cantorian theories of the finite, infinite and infinitesimal that emerged in the decades bracketing the turn of the twentieth century, explain the motivation for their introduction, outline the roles these and related theories play in contemporary mathematics, and discuss the relations between these theories and the better-known theories of Cantor and Robinson that emerge from the just-said unification. It is the author's hope that by drawing attention to the spectrum of theories of the infinite and the infinitesimal that have emerged from non-Archimedean mathematics since the latter decades of the 19th century, it will become clear that the standard 20th-century histories and philosophies of the actual infinite and the infinitesimal, which are motivated largely by Cantor's theory of the infinite and by non-standard analysis, are not only limited in scope but are inspired by an account of late 19th- and early 20th-century mathematics that is as mathematically myopic as it is historically flawed.

Philip Ehrlich, Number Systems with Simplicity Hierarchies: A Generalization of Conway’s Theory of Surreal Numbers, The Journal of Symbolic Logic 66 (2001), pp. 1231-1258.

Philip Ehrlich, The Absolute Arithmetic Continuum and the Unification of All Numbers Great and Small, The Bulletin of Symbolic Logic 18 (2012), pp. 1-45.

Spring


Monday, January 12, 2015, Philosophy Colloquium

Ilya Shpitser, University of Southampton
Mediation: From Intuition to Data Analysis
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Modern causal inference links the "top-down" representation of causal intuitions and "bottom-up" data analysis with the aim of choosing policy. Two innovations that proved key for this synthesis were a formalization of Hume's counterfactual account of causation using potential outcomes (due to Jerzy Neyman), and the viewing of cause-effect relationships via directed acyclic graphs (due to Sewall Wright). I will briefly review how a synthesis of these two ideas was instrumental in formally representing the notion of "causal effect" as a parameter in the language of potential outcomes, and discuss a complete identification theory linking these types of causal parameters and observed data, as well as approaches to estimation of the resulting statistical parameters.

I will then describe, in more detail, how my collaborators and I are applying the same approach to mediation, the study of effects along particular causal pathways. I consider mediated effects at their most general: I allow arbitrary models, the presence of hidden variables, multiple outcomes, longitudinal treatments, and effects along arbitrary sets of causal pathways. As was the case with causal effects, there are three distinct but related problems to solve -- a representation problem (what sort of potential outcome does an effect along a set of pathways correspond to), an identification problem (can a causal parameter of interest be expressed as a functional of observed data), and an estimation problem (what are good ways of estimating the resulting statistical parameter). I report a complete solution to the first two problems, and progress on the third. In particular, my collaborators and I show that for some parameters that arise in mediation settings, triply robust estimators exist, which rely on an outcome model, a mediator model, and a treatment model, and which remain consistent if any two of these three models are correct.
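
For orientation (the classic single-mediator special case, not the full generality of the talk): with treatment A, mediator M, and outcome Y, the nested potential outcome Y(a, M(a′)) denotes the outcome under treatment a with the mediator set to the value it would have taken under a′. The natural direct and indirect effects are then

NDE = E[Y(a, M(a′))] − E[Y(a′, M(a′))],
NIE = E[Y(a, M(a))] − E[Y(a, M(a′))],

and path-specific effects generalize this pattern to effects along arbitrary sets of causal pathways.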

Some of the reported results are joint work with Eric Tchetgen Tchetgen, Caleb Miles, Phyllis Kanki, and Seema Meloni.


Thursday, January 15, 2015, Philosophy Colloquium

Robin Zheng, University of Michigan
Moral Responsibility for Implicit Bias: Attributability, Accountability, and Appraisal
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Although overt discrimination on the basis of race, gender, and other such categories has been in decline for decades, a large body of social scientific research has shown that people are subject to “implicit” biases of which they remain unaware, and which influence their judgment and behavior in ways they would not endorse. I argue that the problem of moral responsibility for implicit bias requires attending to the distinction between responsibility as attributability and responsibility as accountability, which license what I call “appraisal-based” and “non-appraising” responses. We are morally responsible for our actions in the attributability sense only when they reflect the practical identities that define us as moral agents, while we are responsible in the accountability sense when it is appropriate for others to enforce certain expectations and demands on those actions. I contend that we may sometimes lack attributability for actions caused by implicit bias, but that even in those cases we are still accountable for them. Hence, we should eschew responses such as blame and punishment in favor of non-appraising responses that assign burdens for dealing with the consequences of the action without any assessment of the person. I provide further moral-theoretical and psychological grounds for the distinction between appraisal-based and non-appraising responses, arguing that the latter not only do greater justice to our moral experience and agency, but will also be more practically effective in bringing about positive change.

Monday, January 19, 2015, Philosophy Colloquium

Paolo Santorio, University of Leeds
Alternative Counterfactuals
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Most contemporary theories of counterfactuals in philosophy and formal semantics are descendants of Stalnaker, Lewis, and Kratzer's comparative closeness accounts. Roughly, on these accounts, a counterfactual "If p, would q" is true just in case the closest p-worlds are q-worlds. Comparative closeness semantics has been extremely successful, managing to explain a vast amount of data with one simple idea. But a closer look at the facts reveals two problems. First, a large class of counterfactuals that standard accounts rule in turn out to be infelicitous. Second, and worse, on some very weak assumptions it can be proved that some of the problematic counterfactuals quantify over worlds that count as nonclosest, on any construal of the closeness relation. These problems are handled by a semantics that exploits scalar alternatives. In intuitive terms, the idea is that a counterfactual "If p, would q" quantifies over ways of making p true, and checks whether all of them verify q. The resulting semantics is hyperintensional (it doesn't vindicate substitution of necessarily equivalent propositions), but in a way that is different from, and much tamer than, other hyperintensional accounts.
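
The clause under attack can be put as follows (a standard statement of comparative closeness semantics): where ≤_w orders worlds by closeness to the evaluation world w,

"If p, would q" is true at w iff every ≤_w-closest p-world is a q-world.

The alternative-based semantics instead quantifies over the scalar alternatives that are ways of making p true, checking that each of them verifies q.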

Thursday, January 22, 2015, Philosophy Colloquium

Gina Schouten, Illinois State University
Is the Gendered Division of Labor a Problem of Distribution?
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Despite women’s increased labor force participation, household divisions of labor remain highly unequal, with women in every industrialized country continuing to perform the vast majority of unpaid housework and childcare. This persistent gendered division of labor is remediable. Properly implemented, “gender egalitarian” political interventions such as work time regulation, subsidized dependent care provisions, and paid family leave initiatives can induce families to share paid work, unpaid work, and leisure time more equally than they currently do. In the long run, these interventions can effectively reform the norms and institutions that currently sustain the gendered division of labor.

Gender egalitarian political interventions face a formidable justificatory hurdle, however. By subsidizing gender egalitarian lifestyles, these interventions appear to violate a basic liberal requirement for legitimacy: that political interventions be publicly defensible within the justificatory community of reasonable citizens. In order for interventions to be defensible in this way, the reasons justifying intervention must be neutral among the conceptions of the good that citizens may reasonably embrace. By this standard, interventions aimed at influencing families’ allocations of work appear illegitimate. They apparently fail to abide by the neutrality constraint on legitimate exercises of political power, because many citizens consciously enact and even celebrate gender inegalitarian domestic arrangements. Thus, the value of gender egalitarianism seems not to be a value that can be recognized as such by all reasonable citizens; it therefore cannot be invoked to justify exercises of political power like gender egalitarian interventions without violating the constraint of neutrality. Some proponents of gender egalitarian interventions have devised elegant arguments for the conclusion that these interventions can be defended without violating the constraint of neutrality, and are thus legitimate after all. My project in this paper is to critique one widely-deployed strategy for defending gender egalitarian political interventions. According to this strategy, the gendered division of labor constitutes or causes unjust distributions of goods, and gender egalitarian interventions can be neutrally justified as necessary means to remedy those injustices. Whether or not such interventions can ultimately be shown to be legitimate, I raise doubts that the strategy I consider meets this burden. But the problems that beset this strategy are illuminating, and point the way toward a more promising approach. In closing, I briefly sketch my own positive view regarding how gender egalitarian interventions can be defended as legitimate exercises of political power that abide fully by the constraint of neutrality.


Monday, January 26, 2015, Philosophy Colloquium

Danielle Wenner, Carnegie Mellon University
The Social Value of Research-Generated Knowledge: Re-Imagining the Responsiveness Requirement for International Clinical Research
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract" Clinical research conducted in lower- and middle-income countries (LMICs) is playing an ever-expanding role in the research and development of new biomedical interventions. Given large disparities in wealth and access to healthcare services between higher and lower income settings, research ethicists have greater concerns about exploitation in clinical research which is conducted in LMICs than that which is conducted in high-income settings. Moreover, there are legitimate concerns about what is called the 10/90 gap: somewhere around 90% of global research resources are devoted to research and development of interventions targeting the healthcare needs and desires of the wealthiest 10% of the global population. Put another way: only around 10% of global health research resources are devoted to addressing the health deficits affecting 90% of the world’s population. Such concerns have led multiple international bodies and domestic advisory groups to propose specific constraints on research conducted in LMIC settings, with one common recommendation being that research conducted in LMICs but externally or jointly sponsored ought to be responsive to host community health needs.

Although responsiveness is an oft-repeated ethical requirement for international research, there exists ongoing disagreement about both the content of responsiveness as well as its usefulness as a guideline governing international clinical research. In this paper, I propose a framework intended to clarify the responsiveness requirement. I begin by motivating the paper with a couple of examples and presenting some of the shortcomings of existing interpretations of responsiveness. I suggest that one helpful way of characterizing the normative content of the requirement is as a demand that the knowledge sought in clinical research be socially valuable to those populations within which, and upon whom, such research is conducted. I then borrow from decision theory a framework for the assessment of the value of information, and go on to outline how this approach can be utilized in the prospective assessment of a clinical trial’s responsiveness to host community needs. I consider what data would be necessary as inputs to fully operationalize this framework, in the process demonstrating how it avoids each of the objections raised to competing conceptions of responsiveness. Next, I discuss some of the hurdles to the full operationalization of the framework, and indicate how it can nevertheless operate as a threshold condition for the ethical permissibility of research conducted in LMICs. Finally, I conclude by briefly indicating some other issues in research ethics upon which this approach may shed light.
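
One standard way to make "the value of information" precise, which may or may not be the exact formulation used in the paper, is the expected value of perfect information. For a decision maker with utility U(a, θ) over actions a and states θ,

EVPI = E_θ[ max_a U(a, θ) ] − max_a E_θ[ U(a, θ) ],

the gap between deciding after learning the state and deciding under uncertainty. On this reading, responsiveness asks whether the knowledge a trial would generate has substantial value of this kind for the host community's health decisions.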

Thursday, January 29, 2015, Philosophy Colloquium

Kun Zhang, Max Planck Institute for Intelligent Systems
Hunting Causal Asymmetry and Using It: Some Methodological Developments and Their Foundations
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: In this talk I report some methodological developments on automated causal discovery and on using causal information to solve certain machine learning problems, all of which make use of different types of "independence." I will begin with a quick review of the traditional conditional-independence-based approach to causal discovery, which, combined with our kernel-based conditional independence test, is able to find causal information in complex systems. I will then introduce a more recent approach based on an independent-noise condition and appropriate structural constraints on the causal model, with a particular focus on the so-called post-nonlinear causal model, which is especially powerful in determining the causal direction between two variables. I suggest that this approach, like the conditional-independence-based one, is also closely related to Daniel Hausman's "independence" account of causal asymmetry. Finally, I will discuss the problem of domain adaptation in machine learning from a causal perspective, and show how a certain type of independence property implied by causal asymmetry inspires novel and general ways to understand and tackle the problem.
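
For concreteness, the post-nonlinear causal model mentioned above takes the form

Y = f2( f1(X) + E ), with E independent of X,

where f1 captures the nonlinear effect of the cause X, E is a noise term, and f2 is an invertible distortion (e.g., a sensor nonlinearity). The asymmetry exploited for causal discovery is that, except in special cases, no decomposition of this form with independent noise exists in the reverse (anti-causal) direction.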

Monday, February 2, 2015, Philosophy Colloquium

Ariella Binik, Oxford University
The Ethics of Risk in Research with Children
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Clinical research that examines the safe and effective treatment of diseases, disorders, and conditions affecting children offers one of the best prospects for improving the medical treatment of children. But the inclusion of children in research raises difficult ethical questions, among them: To how much risk should we expose children who cannot provide informed consent for their research participation? Most ethicists agree that children may be exposed to some research risks purely in the interests of obtaining medical knowledge that aims to benefit future generations. But the degree of risk that should be permitted and the reasons for which it should be permitted are controversial.

Various thresholds have been proposed to constrain research risks that do not offer children the prospect of direct medical benefit. These proposals include limiting research risks to (1) the risks of routine medical examinations (CIOMS 2002; Kopelman 2004), (2) the risks of participation in charitable activities (Wendler 2010), (3) the risks of family life (Ackerman 1980; Nelson and Ross 2005), and (4) the risks-of-daily-life (Freedman, Fuks, and Weijer 1993; McMillan and Hope 2004). I examine which, if any, of these thresholds is defensible. I argue that the risks-of-daily-life threshold is defensible, but not for the reasons currently offered. I raise a problem with the current justification of the risks-of-daily-life threshold, and I propose a new justification. I argue that the risks of daily life are justifiable because they are part of a reasonable trade-off between personal safety and our ability to pursue meaningful lives.

Thursday, February 5, 2015, Philosophy Colloquium

Igor Yanovich, University of Tübingen
New frontiers in modality
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: The usual synchronic semantics is concerned with the meanings of linguistic expressions at a particular time, often the present. Historical semantics studies meaning change and its regularities. I present three stories about modality that combine the synchronic and historical angles, using data from historical and modern Germanic and Slavic languages (including English, Low and High German, Ukrainian, Russian, Bulgarian, Bosnian-Serbian-Croatian, Czech and Polish). Each of the three stories features: (i) an unusual, previously unknown synchronic semantics for the modal; (ii) a new descriptive historical pattern that is puzzling; and (iii) an explanation for the puzzle based on the analysis of pragmatic interactions between speaker agents. The three stories thus illustrate a new paradigm in semantic research, in which the very meanings of modal terms are subject to negotiation and eventual change, and agents' epistemic uncertainty as to their semantics plays a crucial explanatory role.

Thursday, February 12, 2015, Philosophy Colloquium

Nina Gierasimczuk, ILLC, University of Amsterdam
On topological logic for learning
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: The recently strengthened connections between formal learning theory and general topology have numerous implications for modal logic and its epistemic interpretation. In this talk I will present the main ideas behind those links and show how they can lead to an adequate topo-logic of inductive inference.

Thursday, February 19, 2015, Pure and Applied Logic Colloquium

Ryota Akiyoshi, Kyoto University
Brouwer’s Argument of the Bar Induction Revisited
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: In a series of papers, Brouwer developed intuitionism based on his conception of set. A key theorem in intuitionistic mathematics, the fan theorem, was derived from a stronger theorem called the bar induction, an induction principle on well-founded trees.

Let us formulate the bar induction. Let B be the set of “barred” nodes. Roughly speaking, B is the set of nodes in the tree at which the tree is well-founded. Suppose that P is a property (or a predicate).

Bar Induction (BI): Assume that
(A1) ∀α∃x(ᾱ(x) ∈ B),
(A2) ∀n∀y(n ∈ B → n ∗ 〈y〉 ∈ B),
(A3) ∀n(n ∈ B → n ∈ P),
(A4) ∀n(∀y(n ∗ 〈y〉 ∈ P) → n ∈ P).
Then 〈 〉 ∈ P.

Here ᾱ(x) denotes the initial segment of the choice sequence α of length x, and n ∗ 〈y〉 the one-step extension of the node n by y.

Brouwer’s argument in 1927 aimed to give a constructive justification of this theorem. His argument was based on the BHK-reading of the statement of BI. Brouwer’s argument has been controversial and has received different evaluations, because it depends on an assumption saying that any proof of (A1) consists of only a few elementary inference rules. For example, van Atten and Sundholm regard this assumption as a transcendental requirement in the sense of Kant. Other researchers, especially logicians (van Dalen, Kleene, Troelstra), have held that the assumption is not mathematical, and hence that Brouwer’s argument is not mathematically justifiable. In this tradition, BI is not proved but postulated as an axiom.

In this talk, we present an approach to understanding Brouwer’s argument via a tool called the Ω-rule in infinitary proof theory. The Ω-rule was introduced by Buchholz in the 1970s for the ordinal analysis of iterated inductive definitions and subsystems of second-order arithmetic. We compare Buchholz’s embedding of the induction axiom of ID1 with Brouwer’s argument for BI and claim that the embedding via the Ω-rule is very close to Brouwer’s argument. According to this interpretation, Brouwer’s assumption is a quite natural mathematical restriction on proofs, needed for the quantification over proofs to work. This implies that Brouwer’s argument is mathematically well-motivated. As with the Ω-rule, his argument would contain a vicious circle without the assumption.

Thursday, February 26, 2015, Philosophy Colloquium

Tamar Lando, Columbia University
Modal logic, measure semantics, and pointless space
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: Long before Kripke semantics for modal logics became standard, Tarski showed us that the basic propositional modal language can be interpreted in topological spaces. In Tarski's semantics for the modal logic S4, each formula is evaluated to a subset of a fixed topological space. I develop a closely related, probabilistic (or measure-based) semantics for modal logics, in which modal formulas are interpreted in the Lebesgue measure algebra. I'll discuss some completeness results I've obtained for this semantics, and show how I think we can use the formal structures involved to understand what physical space might be like, if space is devoid of 'points.'
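
For reference, Tarski's clauses: where ⟦φ⟧ is the subset of the space assigned to φ, the modal operators are interpreted as

⟦□φ⟧ = int(⟦φ⟧), ⟦◇φ⟧ = cl(⟦φ⟧),

the topological interior and closure; S4 is precisely the logic this validates over all topological spaces. The measure semantics instead evaluates formulas in the algebra of Lebesgue-measurable sets modulo measure-zero sets, so that no individual point survives the quotient.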

Monday, March 2, 2015, Center for Ethics and Policy Lecture

Michelle Meyer, Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program
Two Cheers for (Some) Nonconsensual Corporate Experimentation
Reception: 4:00 DH 4301, Talk: 4:30-6:00 Wean Hall 5415

Abstract: Reaction to the now infamous Facebook-Cornell “mood contagion” experiment was swift and fierce. Criticism by both the public and some prominent ethicists centered on the fact that user-subjects had not consented to participate. But discussion paid scant attention to the experiment’s relationship to Facebook’s underlying practice and its risks. Prior academic studies (most of them small and observational) had suggested two contradictory hypotheses about the mental health risks of Facebook use to its 1.35 billion users: that exposure to friends’ positive posts is psychologically risky (through a social comparison mechanism) and that exposure to negative posts is psychologically risky (through an emotional contagion mechanism). The company alone was in a position to rigorously determine the effects of its product through experimental mechanisms. But the kind of explicit, fully informed consent that we normally demand that researchers (and, less often, clinicians) obtain would have badly biased the results.

Not since the Tuskegee study, the 1972 revelation of which served as the primary catalyst for the current ethical and legal framework for governing human subjects research, has the public expressed so much sustained alarm over human subjects research. Moreover, Facebook’s conundrum shares many features faced by practitioners and administrators working in modern healthcare systems. The (comparative) effects on patients of many medical and healthcare delivery practices are uncertain, imperiling patient welfare and potentially squandering scarce resources. Healthcare systems are in a unique position to rigorously field test the consequences of their services, yet obtaining explicit informed consent for participation in learning activities (whether “research” or QI/QA) is often infeasible. How we frame the Facebook experiment thus has consequences for other important research.

In this talk, I will argue that criticisms of the Facebook experiment — that the company exploited its position of power over users, treated them as mere means to corporate ends, and deprived them of information necessary for them to make a considered judgment about what was in their best interests — should be inverted: those in control of large systems affecting numerous people (like Facebook and healthcare system administrators) may abuse their power, treat patients and users as mere means to their ends, and deprive those parties of information necessary to exercise their autonomy when they fail to collect data on the effects of their products or services, giving rise in some cases to an ethical duty to experiment, sometimes without fully informed (or any) consent.

Tuesday, March 3, Thursday, March 5 & Friday, March 6, 2015

Ernest Nagel Lectures in Philosophy and Science


Thursday, March 26, 2015, Philosophy Colloquium

Carole Lee, University of Washington
Commensuration Bias in Peer Review
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: To arrive at their final evaluation of a manuscript or grant proposal, reviewers must convert a submission's strengths and weaknesses for heterogeneous peer review criteria into a single metric of quality. I identify this process of commensuration as the locus for a new kind of peer review bias. The concept of commensuration bias (i) illuminates how the systematic prioritization of some peer review criteria over others permits and facilitates problematic patterns of publication and funding in science and (ii) foregrounds a range of structural strategies for improving peer review processes and outcomes.
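
A toy illustration of the commensuration step (invented scores): two submissions rated on heterogeneous criteria, where which one comes out "better" depends entirely on how the criteria are weighted into a single metric.

    scores = {
        "A": {"novelty": 9, "rigor": 5, "significance": 8},
        "B": {"novelty": 5, "rigor": 9, "significance": 6},
    }

    def overall(sub, weights):
        """Commensurate heterogeneous criteria into one number."""
        return sum(weights[c] * scores[sub][c] for c in weights)

    novelty_first = {"novelty": 0.5, "rigor": 0.2, "significance": 0.3}
    rigor_first = {"novelty": 0.2, "rigor": 0.5, "significance": 0.3}

    for label, w in [("novelty-weighted", novelty_first), ("rigor-weighted", rigor_first)]:
        ranking = sorted(scores, key=lambda s: overall(s, w), reverse=True)
        print(label, ranking)
    # novelty-weighted ['A', 'B']   (A: 7.9, B: 6.1)
    # rigor-weighted   ['B', 'A']   (A: 6.7, B: 7.3)

Systematically favoring one such weighting across a whole field, rather than in a single review, is the pattern the talk identifies as commensuration bias.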

Thursday, April 2, 2015, Pure and Applied Logic Colloquium

Michael Rathjen, University of Leeds
Strong type theories: their set and proof-theoretic sides
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: There is a tight fit between type theories à la Martin-Löf and constructive set theories such as CZF and its extensions, as well as classical Kripke-Platek set theory and extensions thereof. Moreover, the technology for determining their (exact) proof-theoretic strength was developed in the 1990s. The situation is rather different when it comes to type theories (with universes) having the impredicative type of propositions Prop from the Calculus of Constructions, which features in some powerful proof assistants.

Aczel's sets-as-types interpretation into these type theories gives rise to rather unusual set-theoretic axioms: negative power set and negative separation. But it is not known how to determine the proof-theoretic strengths of intuitionistic set theories with such axioms via familiar classical set theories (though it is not difficult to see that ZFC plus infinitely many inaccessibles provides an upper bound). The first part of the talk will be a survey of known results from this area. The second part will be concerned with the rather special proof-theoretic behavior of such theories.

Thursday, April 9, 2015, Philosophy Colloquium

Jean-Pierre Marquis, University of Montreal
What is abstract about abstract homotopy theory?
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: As far as I know, the expression "abstract homotopy theory" was first used by Daniel Kan in a series of four short papers in the 1950s. In the 1960s, Peter Freyd wrote that homotopy theory was not concrete, with a different meaning in mind. The expression was used again later by other authors, for instance Quillen and Kenneth Brown, again with different meanings. In this talk, I will look at these various usages of the expression and connect them with homotopy type theory.

Wednesday, April 15, 2015, Philosophy Colloquium

Mark Addis, Birmingham City University
Automatically Generating Scientific Theories
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: In the last few decades, improvements in computational methods have enabled the use of computers to facilitate semi-automatic or automatic scientific discovery. A new way of producing theories is through their automatic generation using genetic programming optimisation methods. Theories, such as those in cognitive science, can be represented as genetic programs and modified as the programs evolve. Some implications of this employment of genetic programming for philosophical perspectives on explanation and probabilistic inference, and for the difference between natural and social science, will be considered.
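
As a minimal sketch of the underlying technique (illustrative only, with an invented target "law"; not Addis's system), candidate theories can be represented as expression trees and evolved against data:

    import random

    def rand_tree(depth=3):
        """A random expression tree over x, constants, and +, -, *."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", round(random.uniform(-2, 2), 2)])
        return (random.choice("+-*"), rand_tree(depth - 1), rand_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if not isinstance(tree, tuple):
            return tree                      # a numeric constant
        op, l, r = tree
        a, b = evaluate(l, x), evaluate(r, x)
        return a + b if op == "+" else a - b if op == "-" else a * b

    def error(tree, data):
        return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

    def mutate(tree):
        """Replace a randomly chosen subtree with a fresh random one."""
        if random.random() < 0.3 or not isinstance(tree, tuple):
            return rand_tree(2)
        op, l, r = tree
        return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

    data = [(x, x * x + 1) for x in range(-3, 4)]   # hidden "law": y = x^2 + 1
    pop = [rand_tree() for _ in range(200)]
    for gen in range(50):
        pop.sort(key=lambda t: error(t, data))      # select the fittest theories
        pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
    print(pop[0], error(pop[0], data))   # often recovers something like ('+', ('*', 'x', 'x'), 1.0)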

Thursday, April 23, 2015, Philosophy Colloquium

Ofra Magidor, University of Oxford
Conditional Acceptance
Reception: 4:00 DH 4301, Talk: 4:30-6:00 BH A53

Abstract: I present a new challenge for the semantics of indicative conditionals, and for the related question of when we should accept conditional claims. The challenge takes the following form: first, I present a case involving an utterance of a certain indicative conditional C. I then argue that each of at least three prominent theories of conditionals predicts that, in this case, you should assign C a high credence, but that this prediction is wrong: given the case, it is entirely permissible for you to assign C a low credence. Finally, I discuss what conclusions to draw from the argument, both for the semantics of conditionals and for epistemology more generally.

Monday, May 4, 2015, Philosophy Colloquium

Juan Dubra, Universidad de Montevideo and NYU
A Theory of Rational Attitude Polarization
Reception: 4:00 DH 4301, Talk: 4:30-6:00 Location BH A53

Abstract: Numerous experiments have demonstrated the possibility of attitude polarization. For instance, Lord, Ross & Lepper (1979) partitioned subjects into two groups, according to whether or not they believed the death penalty had a deterrent effect, and presented them with a set of studies on the issue. Believers and skeptics both became more convinced of their initial views; that is, the population polarized. Many scholars have concluded that attitude polarization shows that people process information in a biased manner. We argue that not only is attitude polarization consistent with an unbiased evaluation of evidence, it is to be expected in many circumstances. At the same time, some experiments do not find polarization, precisely under the conditions in which our theory predicts its absence.
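
A toy Bayesian illustration of the mechanism (invented numbers, not the paper's model): two agents with different background theories about how studies get produced assign different likelihoods to the same mixed evidence, so identical evidence rationally pushes their credences apart.

    def posterior(prior, lik_given_h, lik_given_not_h):
        """Bayes' rule for a hypothesis H given evidence with these likelihoods."""
        num = prior * lik_given_h
        return num / (num + (1 - prior) * lik_given_not_h)

    # H = "the death penalty deters." Evidence E = one pro study and one anti study.
    # The believer thinks E is much likelier if H holds; the skeptic thinks the reverse.
    believer = posterior(prior=0.7, lik_given_h=0.40, lik_given_not_h=0.10)
    skeptic = posterior(prior=0.3, lik_given_h=0.10, lik_given_not_h=0.40)
    print(round(believer, 2))      # 0.9: the believer's credence in H rises
    print(round(skeptic, 2))       # 0.1: the skeptic's credence in H falls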