
NASSLLI 2018 @ CMU - June 23-29

North American Summer School on Logic, Language, and Information


Logic and Epistemology

This course is an introduction to topology and an exploration of some of its applications in epistemic logic. Some basic background in modal logic will be helpful, but is not essential; no background in topology is assumed.

We begin by motivating and reviewing the standard relational structures used as models for knowledge in epistemic logics. We then develop the notion of a topological space using a variety of metaphors and intuitions, and introduce topological semantics for the basic modal language. Intuitively, the spatial notion of "nearness" can be co-opted as a means of representing uncertainty. We investigate the relationship between topological semantics and the more standard relational semantics, establish the foundational result that S4 is "the logic of space" (i.e., sound and complete with respect to the class of all topological spaces), and discuss richer epistemic systems in which topology can be used to capture the distinction between the known and the knowable, fact and measurement.
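To give a taste of the central idea (standard definitions, not specific to this course's presentation): a topological model interprets the modal box as topological interior and the diamond as closure,

\[
\llbracket \Box\varphi \rrbracket \;=\; \mathrm{Int}\bigl(\llbracket \varphi \rrbracket\bigr),
\qquad
\llbracket \Diamond\varphi \rrbracket \;=\; \mathrm{Cl}\bigl(\llbracket \varphi \rrbracket\bigr),
\]

and the S4 axioms correspond precisely to the Kuratowski laws governing the interior operator, which is why S4 turns out to be the logic of all topological spaces.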

Course materials

Logic and probability are related in many important ways: probability can in some sense be seen as a generalization of logic; many logical systems have natural probabilistic semantics; logic can be used to reason explicitly about probability; and logic and probability can be combined into systems that capitalize on the advantages of each. In this course we will present and discuss some of the most important ideas and results covering these various contacts between logic and probability, including (1) the view of probability as extended logic and foundations based on comparative probability, (2) probabilities defined on expressive logical languages, (3) default reasoning, acceptance rules, and the quantitative/qualitative interface, (4) statistical relational models and foundations of probabilistic programming, and (5) zero-one laws, random structures, and almost-sure theories. Along the way we will draw connections to research topics in philosophy, computer science, mathematics, linguistics, statistics, and cognitive psychology.

While no specific background will be assumed, we will expect that students have experience in logic (including basic metatheory such as completeness and compactness) and some previous acquaintance with probabilistic reasoning. Specific tools from model theory, measure theory, and other areas that may be less familiar will be introduced as needed. Graduate students working in any of the fields mentioned above ought to be suitably prepared for the course.

Course materials

Epistemic logic is the logic of knowledge: how do you reason about what you know and what others know? This logic appears to be crucial in describing negotiations in economics, parallel processors in computer science, and multi-agent systems in artificial intelligence. Epistemic logic is also philosophically and technically interesting: it has beautiful semantics. The lectures will deal with the following subjects: axiomatic systems and Kripke semantics for knowledge of multiple agents, beliefs, distributed knowledge, general knowledge and common knowledge, and knowledge-based communication protocols.

One of the main questions of the course will be in what ways epistemic logic is an idealization, and how people actually reason about their own and other people's knowledge and beliefs, both in story situations where different participants have different perspectives and in competitive games and negotiations.

We will report on some experiments about social cognition, with animals, children and adults, and we will discuss how logic and formal philosophy can be useful as a guide for empirical studies and computational cognitive modeling.

As prerequisite, knowledge of propositional logic and modal logic should suffice.

Course materials

A multi-agent system (MAS) is a generic concept encompassing a diverse range of phenomena in modern society, such as computer and social networks, robotic teams, markets, etc. Conceptually, a MAS involves several 'agents' that interact autonomously and intelligently with the environment and with each other, by exchanging information and by planning and executing actions and strategies in pursuit of individual or group goals.

Logical approaches and methods are becoming increasingly popular and important in the modeling and analysis of MAS, and a variety of rich logical frameworks have been introduced for capturing various aspects of MAS, including knowledge, communication, and strategic reasoning.

This course is intended for a broad audience with basic knowledge of classical and modal logics. I will first introduce and discuss some of the most important and popular families of logics for reasoning about knowledge in MAS, including multi-agent epistemic and dynamic epistemic logics. The course will then focus on a variety of logics for strategic reasoning in MAS: with complete and with incomplete or imperfect information; with bounded or perfect memory of players; with dynamically changing strategy contexts; with constructive strategies; etc. Finally, I will present a logical framework for capturing combined qualitative-quantitative reasoning and will discuss some applications to multi-player games.

The emphasis of the course will be on conceptual understanding of the languages and semantics of these logics, and on applying them to model and specify properties of MAS. I will illustrate these with several popular scenarios, including some epistemic puzzles and multi-player games. The main logical decision problems, such as algorithmic model checking and validity/satisfiability testing, and some related technical results will be mentioned but not discussed in any detail.

Course materials

The nature of ‘knowledge’, and what makes it different from mere belief, is what drives most studies in epistemology. Recent advances in the area show how logic can shed new light on a number of important epistemological questions, including the formalization of classical epistemological conceptions; the treatment of epistemic paradoxes; the issue of tracking the truth by individual belief revision; and the successes and failures of group knowledge and belief aggregation.

We focus on formal approaches to knowledge representation, belief revision and interactive learning. We present, compare and relate various models for belief and knowledge, using methods developed in a number of fields, including Belief Revision Theory, Epistemic Logic, Formal Learning Theory, and Game Theory. In contrast to more traditional approaches to epistemic logic, our presentation is not confined to Kripke models, but covers in addition more sophisticated representations of information: evidence models (based on topological-neighborhood semantics), models for justification and arguments (based on awareness structures and justification logic), as well as probabilistic models.

This course is addressed to students and researchers interested in how logic can be put to use for the study of epistemological questions. We presuppose some elementary background knowledge in Logic, in particular propositional logic and the basics of first-order logic. Some familiarity with the syntax and semantics of modal logic would be helpful, though not required. More importantly, we assume that participants possess some degree of mathematical maturity (as can be expected from graduate students) and a live interest in philosophical and interdisciplinary applications.

Course materials

Epistemic logic is a major field of philosophical logic studying reasoning patterns about knowledge. Despite its various applications in epistemology, theoretical computer science, AI, and game theory, technical developments in the field have mainly focused on the propositional part, i.e., the propositional modal logics of "knowing that". However, knowledge is also expressed in natural language by "knowing whether", "knowing what", "knowing how", "knowing why", and so on (know-wh hereafter). Recent years have witnessed a growing interest in non-standard epistemic logics of know-wh motivated by questions in philosophy, AI, and linguistics. Inspired by linguistic discussions on the semantics of questions, the new epistemic modalities introduced in those logics often share, in their formal semantics, the general schema ∃x K φ (where K is the knowledge modality). The axioms of those logics intuitively capture the essential interactions of know-that and other know-wh operators, and the resulting logics are decidable.
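For illustration (a standard rendering of the schema, not tied to any single system): "the agent knows what c is" can be formalized as

\[
\mathit{Kv}\,c \;:=\; \exists x\, K\,(c = x),
\]

where the existential quantifier scopes over the knowledge modality: there is a particular value x such that the agent knows c to be x. This is stronger than K ∃x (c = x), knowing merely that c has some value.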

In this course, I will survey the recent developments of this new research program on epistemic logics of know-wh and its various connections with existing logics and philosophical/AI/linguistic questions in the literature. Inspired by those logics, we will also discuss a very general and powerful framework based on a predicate language extended with new modalities that pack a quantifier and a modality together. The resulting logic, though more expressive, shares many good properties with basic propositional modal logic, such as the finite-tree-model property. This may also pave the way for the discovery of new decidable fragments of first-order modal logic.

Course materials

Logic and Computation

Machine learning has transformed the way machines interact with data and make decisions. It is pervasive in artificial intelligence and related fields such as computational linguistics. The course will consist of two components: lecture-style instruction introducing students to the basics of machine learning, and an interactive lab where students collaborate to develop improved learning and optimization methods for large-scale classification.

The lectures will provide an introduction to machine learning (supervised learning, classification, regression, ERM) and some basic models (linear regression, logistic regression, regularization). Time permitting, we will also cover the basics of neural networks (stochastic gradient descent, backpropagation). The material will investigate both the theoretical underpinnings that explain why a machine is able to learn at all and the tricks that enable a practitioner to effectively apply machine learning to their domain of interest.

In the lab, students will learn about the efficient implementation of online learning (i.e., stochastic gradient descent) through a choose-your-own-adventure style exercise. The task is language identification; two datasets are provided. A very basic implementation of streaming multinomial logistic regression is improved to near state-of-the-art performance.
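To make the lab's starting point concrete, here is a minimal sketch of streaming multinomial logistic regression trained with SGD (a toy illustration with placeholder data, not the course's actual codebase):

    import numpy as np

    def softmax(z):
        z = z - z.max()              # stabilize before exponentiating
        e = np.exp(z)
        return e / e.sum()

    def sgd_train(examples, num_classes, num_features, lr=0.1, epochs=5):
        """Streaming multinomial logistic regression: one update per example."""
        W = np.zeros((num_classes, num_features))
        for _ in range(epochs):
            for x, y in examples:        # x: feature vector, y: gold class index
                p = softmax(W @ x)       # predicted class distribution
                p[y] -= 1.0              # gradient of cross-entropy wrt the logits
                W -= lr * np.outer(p, x) # SGD step on the weight matrix
        return W

    # Toy usage: 3 features, 2 "languages"
    data = [(np.array([1.0, 0.0, 1.0]), 0), (np.array([0.0, 1.0, 1.0]), 1)]
    W = sgd_train(data, num_classes=2, num_features=3)
    print(softmax(W @ np.array([1.0, 0.0, 1.0])))  # mass concentrates on class 0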

Course materials

Statistical inference from the observed to the unobserved always involves a degree of uncertainty. Information theory is a mathematical tool that meaningfully quantifies this uncertainty and thus expresses the limits of what one can hope to learn from noisy data. By looking at digital communication through this probabilistic lens, we can gain important new insights into questions related to digital storage, compression, and transmission of data, especially into the inherent limits on the speed of reliable communication in the presence of noise. These results in turn have interesting implications for the way we understand human learning and communication more generally. This course will provide an intuitive introduction to the mathematical foundations of information theory and point in the direction of some of their many applications in gambling, code-breaking, data compression, and epistemology.
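The central quantity is Shannon entropy, which (as a standard reminder, not course-specific notation) measures the uncertainty of a random variable X in bits:

\[
H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x).
\]

A fair coin has H = 1 bit, while a coin with P(heads) = 0.9 has H ≈ 0.47 bits: its outcomes are more predictable, and hence more compressible.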

Course materials

In the 1930s, in the midst of the foundational crisis of mathematics, David Hilbert introduced his metamathematical program, aiming to reduce mathematics to a formal axiomatic system whose consistency could be proven using 'finitary' methods. Gödel's incompleteness theorems famously confirmed that Hilbert's program, in its strictest sense, is impossible to achieve. Less recognized are Gödel's attempts to overcome his own obstacle and tackle Hilbert's program in broader terms. Over the following decades, Gödel developed his Dialectica interpretation, which reduced Peano arithmetic to a quantifier-free system of higher order functionals, System T, allowing the consistency of arithmetic to follow from that of T.

During the 1950s, Kreisel observed that proof interpretations like Gödel's could also be viewed from another angle: as tools for extracting concrete witnesses from existential statements. In the following years these insights were developed further, and from the 1990s onwards the formal extraction of programs from proofs truly took off thanks to the 'proof mining' program led by Kohlenbach. As a result of this fascinating re-orientation of proof theory, there has been a resurgence of interest in proof interpretations in the last 20-30 years, inspiring research from a range of different perspectives.

The aim of this course is to provide an introduction to proof interpretations up to the point where some of their many and varied roles in modern research can be understood and appreciated. We will assume a basic familiarity with the proof theory of first-order logic, but intend to frame the course so that it is accessible and interesting for mathematicians, philosophers, linguists and computer scientists alike.

Course materials

This course introduces hybrid logic, a form of modal logic in which it is possible to name worlds - or times, or computational states, or situations, or nodes in parse trees, or people - or indeed, whatever it is that the elements of Kripke models are taken to represent. The course has three major goals. The first is to convey, as clearly as possible, the ideas and intuitions that have guided the development of hybrid logic. The second is to teach something about hybrid deduction and its completeness theory, and to make clear the crucial role played by the basic hybrid language and the Henkin construction. The third is to give you a glimpse of more powerful hybrid systems beyond the basic language, notably languages using the downarrow binder, and first- and higher-order hybrid logic. Along the way I will draw attention to the historical development of the subject, with particular emphasis on how hybrid logic fitted in (and didn’t fit in) with the ideas of its inventor, Arthur Prior.
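To fix notation in advance (standard hybrid-logic machinery, summarized rather than quoted from the lectures): nominals i, j, ... are atomic formulas true at exactly one world, the satisfaction operator @_i jumps evaluation to the world named i, and the downarrow binder names the current world:

\[
M,g,w \models @_i\,\varphi \;\text{ iff }\; M,g,v \models \varphi \text{ for the unique } v \text{ named by } i,
\qquad
M,g,w \models {\downarrow}x.\,\varphi \;\text{ iff }\; M,g[x \mapsto w],w \models \varphi .
\]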

Here is the lecture plan:

Lecture 1: From modal logic to hybrid logic
Lecture 2: Hybrid deductions
Lecture 3: X marks the spot (or: living locally with downarrow)
Lecture 4: Going first-order and higher
Lecture 5: And Prior to all that...

I won’t be presuming any particular background in hybrid (or indeed, modal) logic, but I will be assuming a certain logical maturity.

Some background reading:

  • “Representation, Reasoning, and Relational Structures: a Hybrid Logic Manifesto”, by Patrick Blackburn, Logic Journal of the IGPL, 8(3), 339-365, 2000. (An overview of hybrid logic, which should convey something of the flavor of this course.)
  • First-Order Modal Logic, by Melvin Fitting and Richard Mendelsohn, Springer, 1998. (One of the clearest introductions to modal logic, both propositional and first-order, around. An added bonus for this course is its coverage of tableaux systems. Highly recommended.)
  • “Contextual Validity in Hybrid Logic”, by Patrick Blackburn and Klaus Frovin Jørgensen, Proceedings of CONTEXT 2013, Lecture Notes in Artificial Intelligence (LNAI) 8175, pages 185-198, 2013. (An example of how hybrid logic can be put to work: here, to analyze temporal indexicals.)

Readers with more technical background may enjoy the following:

  • “Hybrid Logic”, Section 7.4, pages 434-445 of Modal Logic, by Patrick Blackburn, Maarten de Rijke and Yde Venema. Cambridge Tracts in Theoretical Computer Science, 53, Cambridge University Press, 2001. (Covers basic completeness theory for the basic language, emphasizing the Henkin construction and the role of pure formulas.)
  • “Hybrid Logic”, by Carlos Areces and Balder ten Cate, Handbook of Modal Logic, edited by Blackburn, van Benthem and Wolter, 2007, pages 821-868, Elsevier. (An advanced introduction to the topic by two leading contributors to the field.)
  • “Henkin and Hybrid Logic”, Patrick Blackburn, Antonia Huertas, Maria Manzano, and Klaus Frovin Jørgensen, in: M. Manzano, I. Sain and E. Alonso (eds.), The Life and Work of Leon Henkin: Essays on His Contributions (Studies in Universal Logic), 279–306, Birkhäuser, 2014. (Using a hybrid logic style Henkin construction in the setting of the Church-Henkin theory of simple types.)

For more on hybrid logic’s Priorean background, try some of these:

  • Papers on Time and Tense by Arthur N. Prior. New edition edited by Per Hasle, Peter Øhrstrøm, Torben Brauner, and Jack Copeland, Oxford University Press, 2003. (This collection, originally published in 1968, is the key source for Prior’s writings on hybrid logic. Try looking at “Tense Logic and the Logic of Earlier and Later”, “Quasi-Propositions and Quasi-Individuals”, and “Egocentric Logic”.)
  • “Arthur Prior and ‘Now’”, by Patrick Blackburn and Klaus Frovin Jørgensen, Synthese, Vol. 193: 11, 3665-3676, 2016. (Hybrid logic was something of a double-edged sword for Arthur Prior; this paper reconstructs a particularly fascinating episode in his love-hate relationship for the hybrid logic he pioneered.)

Course materials

This is a course on surface reasoning in natural language. The overall goal is to study logical systems which are closer to natural language than to first-order logic. Most of the systems are complete and decidable. Although the class will present a lot of technical material, most of the arguments are elementary. One needs to be comfortable with proofs as in ordinary mathematics, but only a small logic background is technically needed to follow the course.

Specific topics include: extended syllogistic logics; logics including verbs, relative clauses, and relative size quantifiers; the limits of syllogistic logic; monotonicity calculi; algorithms, complexity, and computer implementations.
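As a sample of the kind of inference such systems license directly, without detouring through first-order logic (the syllogisms Barbara and Darii, in schematic notation):

\[
\frac{\text{All } p \text{ are } q \qquad \text{All } q \text{ are } r}{\text{All } p \text{ are } r}
\qquad\qquad
\frac{\text{Some } p \text{ are } q \qquad \text{All } q \text{ are } r}{\text{Some } p \text{ are } r}
\]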

The course will have a small amount of daily homework to help people learn. It also may involve running computer programs which carry out proof search and model building. These programs will be Jupyter notebooks that should be usable with a minimum of set-up.

The topic of natural logic lends itself to philosophical reflections on the nature of semantics. It has connections to classic topics in philosophy and to contemporary work in natural language processing. For NASSLLI students with a logic background, the course will offer many completeness theorems and connections to topics such as: fragments of first-order logic, combinatorics, and the typed lambda calculus. The class will suggest many active research areas to semanticists, logicians, and computer scientists.

Course materials

This course will provide an overview of the techniques and applications of sequent systems. We will begin in Lecture 1 with the basics of sequent systems and sketch a proof of the central result in the area, the Cut Elimination Theorem, and go over some of its consequences for classical and intuitionistic logic.
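For reference, here is the rule whose eliminability is at stake, in a standard Gentzen-style formulation:

\[
\frac{\Gamma \Rightarrow \Delta, A \qquad A, \Gamma' \Rightarrow \Delta'}{\Gamma, \Gamma' \Rightarrow \Delta, \Delta'}\ (\mathrm{Cut})
\]

The Cut Elimination Theorem says that any proof using (Cut) can be transformed into a cut-free proof of the same end sequent; consequences such as the subformula property and consistency proofs flow from this.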

In Lecture 2, we motivate some non-classical systems obtained by dropping some of the basic structural rules. We will look at some substructural logics of interest to philosophers (relevant logics, non-contractive logics), computer scientists (linear logic), and linguists (Lambek calculus).

Sequent systems are flexible, but there are some surprising hurdles in giving adequate sequent systems for many modal logics, including familiar logics such as S5. In Lecture 3, we provide an overview of these problems and then turn to three generalizations of the basic sequent structure that get around these problems.

For Lecture 4, we shift focus from propositional connectives to quantifiers, presenting rules for the classical existential and universal quantifiers. We present a natural generalization of the basic sequent structure that can be used to give a sequent system for free logics and discuss applications to debates in metaphysics and modal logic.

In the final lecture, we will discuss semantic features of sequent systems. We will present background on inferentialism, the view that rules give the meaning of logical vocabulary, and consider Arthur Prior's tonk objection. We close out the course by showing how sequent systems can be used to construct counter-models for invalid arguments, relating these constructions to bilateralism, a view about how sequents relate to the speech acts of assertion and denial.

This course will not presuppose much logic. Familiarity with classical propositional logic is advised. No prior exposure to sequent systems will be presupposed, and no prior background in non-classical logics or modal logics is necessary.

Course materials

Computational Linguistics

This bootcamp course provides a hands-on introduction to the fundamental aspects of statistical analysis of quantitative linguistic data using the open source statistical environment R. The course assumes no prior programming or statistics training. Students will build a level of confidence in using R that can lead to more advanced programming and statistics classes. Students will make use of corpus, psycholinguistic, and survey data. At the end of the course, students will be able to select and use appropriate quantitative methods to analyze linguistic phenomena with the help of R. More practically, students will be able to use and understand the R code provided and modify it for the purposes of their own research.

Course materials

*LIMITED AVAILABILITY*

There will be two sections of this course, but because of the nature of the course, enrollment will be limited to 20 students per section. Students will be admitted according to the date on which they indicated interest in this course during the registration process. Students who are admitted will receive an email from the organizers or instructors shortly before the beginning of NASSLLI.

Gries and Newman (2013) list four main modes of using corpora, the last of which involves having corpora on one's hard drive and using general-purpose programming languages to process, manipulate, and search files. This method offers the advantage of being more versatile and powerful than any ready-made corpus exploration software could ever be. Its main hurdle has been the learning curve associated with picking up a programming language; with the popularization of Python, a rigorous yet beginner-friendly language, and the ready-to-use tools offered by widely adopted libraries such as NLTK (Natural Language Toolkit), this method has become much more accessible for linguists looking to adopt corpus-linguistic methods in their research.

This course is a gentle introduction to corpus linguistics using Python and NLTK. Participants will learn (1) the very basics of the Python programming language, (2) foundations of text processing methods in Python, and (3) NLTK's corpus-processing functions and modules. They will also have an opportunity to work with a corpus of their choice. We will practice common corpus investigation techniques such as concordancing, frequency list compilation, collocation, and n-gram list comparison.
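To give a flavor of the workflow (a minimal sketch with a placeholder file name; the course's actual exercises may differ):

    import nltk
    nltk.download('punkt')  # tokenizer models (one-time download)

    raw = open('my_corpus.txt', encoding='utf-8').read().lower()  # placeholder file
    tokens = nltk.word_tokenize(raw)
    text = nltk.Text(tokens)

    text.concordance('language')                   # keyword-in-context lines
    print(nltk.FreqDist(tokens).most_common(10))   # frequency list

    finder = nltk.collocations.BigramCollocationFinder.from_words(tokens)
    measures = nltk.collocations.BigramAssocMeasures()
    print(finder.nbest(measures.pmi, 10))          # top collocations by pointwise mutual information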

Prior knowledge of programming is not required. Participants are required to install the necessary software on their own laptops, which they should bring to every class meeting. A non-mobile OS is required to run the software: Windows, OS X (Mac) and Linux are good choices, while iOS (iPads) and Chrome OS are not suitable. Software installation instructions will be emailed a few days before the beginning of the course.

Course materials

Pragmatics was once thought of as the wastebasket of linguistics: as the caricature went, phenomena that were too complex to handle in the semantics were pushed to the mushy pragmatics, where they were dispatched with hand-wavy just-so stories. Recent developments in cognitive science have led pragmatics to a new period of maturation, facilitated by two important factors: a) the novel application of mathematical modeling techniques, and b) access to rich experimental data. Advances in probabilistic and game-theoretic models that treat pragmatic inference as a problem of social reasoning under uncertainty have yielded testable quantitative predictions about the outcome of many different kinds of pragmatic inference. The phenomena that these types of models have been successfully applied to include scalar implicature, ad hoc Quantity implicatures, M-implicatures, gradable adjectives, and hyperbole, among many others (for a review, see Goodman & Frank, 2016).

The course will introduce students to models of pragmatics that employ probabilistic inference to explain both utterance interpretation and production choices for a variety of phenomena. The basics of fitting experimental data to probabilistic cognitive models will be explained on the basis of case studies of increasing complexity from the recent literature. Students will learn to modify and build their own computational models within the Rational Speech Act framework using the probabilistic programming language WebPPL.
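As a preview of the modeling style, here is a toy scalar-implicature computation in the Rational Speech Act style, sketched in Python/numpy for illustration (the course itself uses WebPPL):

    import numpy as np

    # Utterances and worlds for a classic scalar-implicature setup.
    utterances = ['some', 'all']
    worlds = ['some-but-not-all', 'all']
    truth = np.array([[1.0, 1.0],   # "some" is true in both worlds
                      [0.0, 1.0]])  # "all" is true only in the all-world
    alpha = 1.0                     # speaker rationality parameter

    L0 = truth / truth.sum(axis=1, keepdims=True)  # literal listener P(w|u)
    S1 = L0 ** alpha                               # speaker scores exp(alpha * log L0)
    S1 = S1 / S1.sum(axis=0, keepdims=True)        # normalize over utterances: P(u|w)
    L1 = S1 / S1.sum(axis=1, keepdims=True)        # pragmatic listener P(w|u), uniform prior

    print(dict(zip(worlds, L1[0])))  # hearing "some": ~0.75 on some-but-not-all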

Course materials

Computational morphology aims at developing formalisms and algorithms for the computational analysis and synthesis of word forms for use in language processing applications. The last couple of decades have seen significant activity in developing techniques and tools for building morphological processors for a variety of languages. As natural language processing applications expand to lesser-studied languages, many of which have complex word structures, it is evident that one of the first resources researchers working on such languages should build is a wide-coverage morphological processor dealing with all kinds of real-world engineering issues such as unknown words, non-alphabetic tokens (with morphology!), proper nouns, foreign words, etc.

This course will start with an overview of the basics of morphology and then discuss goals of, and approaches to, computational morphology. It will then present the formalism of finite state transducers and regular relations, the underlying technology for state-of-the-art morphological processing approaches.

We plan to cover both two-level morphology and the cascaded replace-rule approach to morphographemics, with numerous examples of lexicon structure/morphotactics and morphographemic alternation phenomena from a variety of languages. We will then discuss issues in developing an industrial-strength morphological processor for Turkish, a morphologically very complex agglutinative language, presenting issues such as dealing with over-generation, foreign words, numbers, acronyms, multi-words, etc.
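As a toy analog of the replace-rule style (a sketch in plain Python with regexes; real systems compile such ordered rule cascades into finite state transducers rather than applying regexes directly):

    import re

    # Toy ordered replace rules for English noun plurals; '+' marks a
    # morpheme boundary inserted by the morphotactics.
    def pluralize(stem):
        form = stem + '+s'                                   # morphotactics: NOUN + plural
        form = re.sub(r'([^aeiou])y\+s$', r'\1ie+s', form)   # fly+s -> flie+s
        form = re.sub(r'(s|sh|ch|x|z)\+s$', r'\1e+s', form)  # fox+s -> foxe+s
        return form.replace('+', '')                         # erase the boundary

    for w in ['cat', 'fly', 'fox', 'church']:
        print(w, '->', pluralize(w))  # cats, flies, foxes, churches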

The course will also review recent work on using machine learning techniques, ranging from learning to automatically segment words into morphemes, to generating word forms with arbitrary morphological features (reinflection) from limited data about a language's morphology.

Course materials

This course provides an introduction to the Tree-Adjoining Grammar (TAG) formalism (Joshi and Schabes, 1997), in particular Lexicalized Tree-Adjoining Grammar (LTAG), together with grammar implementations and tools for parsing with TAG: TuLiPA and XMG. With this course we would like to introduce TAG and show its importance in language modeling.

During the course we will discuss syntactic and semantic analyses using LTAG for natural language phenomena such as long-distance dependencies, clausal complements and the analysis of raising and control, scope ambiguity, and scrambling. In addition to the theoretical work, two applications will be discussed in detail: the Tübingen Linguistic Parsing Architecture (TuLiPA; sourcesup.renater.fr/tulipa/) and the eXtensible MetaGrammar (XMG; http://xmg.phil.hhu.de/), illustrating how those TAG analyses can be implemented, tested, and used.

The TAG formalism is one of the mildly context-sensitive grammar formalisms, widely used in modeling natural languages. The adequacy of language modeling with TAG is shown by a range of grammar implementations and tools (e.g. XTAG, TuLiPA, XMG). Its importance in computational linguistics is demonstrated by active and successful research on both the theoretical and the practical side: without attempting a full listing of recent work, see e.g. linguistic analysis using TAG in Abeillé and Rambow (2000), the syntax-semantics interface by Kallmeyer and Joshi (2003) and Kallmeyer and Romero (2008), discourse processing by Webber (2004), grammar formalisms by, e.g., Shieber and Schabes (2006), wide-coverage grammar implementations by the XTAG project (XTAG Research Group, 2001), and tools for parsing with TAG: TuLiPA (Parmentier et al., 2008) and XMG (Crabbé et al., 2013; Kallmeyer et al., 2016).

Course materials

State-of-the-art natural language processing (NLP) tools, such as text parsing, speech recognition and synthesis, text and speech translation, and semantic analysis and inference, rely on the availability of language-specific data resources. Such resources are expensive, and they typically exist only for English and a few other geopolitically or economically important languages, the so-called "resource-rich languages". Most of the 7,000+ languages in the world—many with millions of speakers—are resource-poor from the standpoint of NLP.

To make NLP tools available in more languages, techniques have been developed to leverage prior linguistic knowledge and learn from just a few examples, or to project data resources from resource-rich languages using parallel (translated) data as a bridge. Yet, the challenge of building accurate NLP tools for low-resource languages remains, especially with the increasing access to information technologies from all over the world on the one hand, and the rise of data-hungry deep learning approaches on the other.

The goal of this course is to introduce the problem of doing NLP with limited data resources and to present the current state of the art in this area. We will present an overview of fundamental problems in low-resource NLP, of existing resources, and of key approaches that explore low-resource scenarios.

Course materials

Semantics and Pragmatics

This course will introduce the concepts and methods of formal semantics and pragmatics, and say something about their origins in philosophy and linguistics.

Sessions will deal with the following topics:

  1. Truth-Conditions and Compositionality
  2. Reference and Quantification
  3. Time and Modality
  4. Intensions and mental attitudes
  5. Context and Content
  6. Common Ground and Conversational Update
  7. Implicature and figurative speech

Course materials

There are two main ways of viewing human communication. Suppose that Fred promises Wilma to do the dishes by saying, “I’ll do the dishes.” One way of looking at Fred’s utterance is that it serves to convey Fred’s intention to do the dishes. On this construal, understanding speech acts requires that we take a psychological stance, for their purported purpose is to express what’s on the speaker’s mind. The other way is to adopt a social stance and view Fred’s promise as a means of modifying his relationship with Wilma: as a result of saying, “I’ll do the dishes”, Fred becomes responsible to Wilma for doing the dishes.

The opposition between psychological and social approaches to communication is the recurrent theme of this course, which focuses on three key topics in pragmatics: speech acts, common ground, and conversational implicatures. In each of these cases, psychological and social approaches will be compared, and ecumenical approaches considered. Among the topics discussed along the way will be presupposition, definite and indefinite reference, linguistic conventions, and the pragmatics of word meaning.

Course materials

This course is a focused examination of predicates of personal taste (PPTs) such as "tasty" and "fun"—the empirical discoveries, the theoretical landscape, and their connection with subjective language. Recent work in formal semantics and philosophy of language has shown that the linguistic behavior of PPTs differs from that of other predicates (OPs) such as "round" and "popular". We will address the nature of the PPT-OP distinction through the following basic questions:

— Is the distinction categorical (as is often assumed in theoretical literature) or gradient (as subjectivity is treated in literature on sentiment analysis)?
— Are PPTs special because of the semantics, the pragmatics, or the epistemology and psychology of taste?
— Are other predicates involving judgment—aesthetic, moral, value—also PPTs? What is a reliable diagnostic across conceptual domains?

The course is structured in three parts. The first two days comprise a primer on the most-discussed empirical questions and will present a taxonomy of existing theories. Specifically, we will talk about the truth-evaluability, conversational behavior, and normativity that differentiate PPTs from other expressions used to describe perception.

The next part focuses on less-studied puzzles. Day 3 will discuss overt tasters, introduced by "for" and "to", and multiple perspectives made available in questions and attitude reports, but constrained within one sentence by general rules governing the interpretation of noun phrases. Day 4 will turn to the source of the direct experience requirement associated with PPTs and its relation to evidentiality, as well as similar requirements imposed by psychological predicates and dispositional generics. We will also talk about the nature of predicates like "find" that ban OPs in their complements.

Finally, on Day 5, we will examine the cognitive science perspective on taste attribution. We will focus on the philosophical literature on perceptual attribution and personal epistemology, as well as on neurophysiological research on aesthetic judgment.

Course materials

It is sometimes believed that the marriage of (Neo-)Davidsonian event semantics and compositional semantics is an uneasy one. And indeed, in many implementations of event semantics, standard treatments of scope-taking elements such as quantifiers, negation, conjunctions, modals, the adverb 'only', etc. are complicated compared to the simple accounts they get in semantics textbooks. A typical graduate Semantics I course will introduce students to the main idea and motivation of event semantics, and will then go on to describe phenomena like quantification and negation in an event-free framework. While specialists who wish to combine the two frameworks will know where to look for ideas, there are currently no easy-to-use, off-the-shelf systems that put the two together, textbook-style.

An aspiring semanticist might be discouraged by this situation, particularly when a given language or phenomenon that seems to be well-suited to event semantics also involves scope-taking elements that need to be analyzed in some way. For example, event semantics is a natural choice for a fieldworker who wishes to sketch a semantic analysis of a language without making commitments as to the relative hierarchical order of arguments or the argument-adjunct distinction. Yet the same fieldworker would face significant technical challenges before being able to also use such standard tools as generalized quantifier theory or classical negation when encountering quantifiers and negation.

This course aims to remedy this situation. After an overview of the basic empirical motivations for event semantics and for compositional semantics, we will introduce a novel implementation of event semantics that combines with standard treatments of scope-taking elements in a well-behaved way. We will then review event-based analyses of alternative-sensitive elements such as 'only' by Bonomi & Casalegno and Beaver & Clark, point out their problems, and conclude by proposing an event-based analysis of such elements that does not run into the same problems. The lecture notes of a previous version of the course can be found at http://ling.auf.net/lingbuzz/002121.
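To see the basic tension in two lines (textbook material, not this course's specific implementation): the Neo-Davidsonian analysis of "Brutus stabbed Caesar" is

\[
\exists e\,[\mathrm{stab}(e) \wedge \mathrm{Agent}(e,b) \wedge \mathrm{Theme}(e,c)],
\]

and negation already raises the scope issue: "Brutus didn't stab Caesar" should mean ¬∃e[...] rather than ∃e ¬[...], so the existential event quantifier must be made to interact properly with scope-taking material.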

Course materials

Discourses mean more than just the sentences they are composed of. A sequence of past tense sentences can, for instance, be interpreted as forward moving in time (narrative progression) or as backshifted. This course will explore the source of these temporal inferences: Are they conventionalized, or do they arise from (possibly domain general) principles of conversation? Does narrative progression have the same source as backshifting? Do lexical and grammatical expressions constrain the possibilities for these inferences, and if so how?

Over five days, students will be introduced to existing accounts for narrative progression and backshifting, as well as to related phenomena that have not been systematically investigated in the past. Students will also be exposed to a range of corpora annotated for discourse and temporal relations (e.g., PDTB, ANNODIS), using them to shed new light on these questions.

More specifically, the course will cover the basics of temporal and event structure, reference time (Partee 1984, Hinrichs 1986, Dowty 1986, Webber 1988) and discourse relational (Kehler 2002, Asher and Lascarides 2003, Altshuler 2016:§3) theories of narrative progression and backshifting, varied uses of tense and their interaction with these temporal inferences (Sharvit 2008, Anand and Toosarvandani, to appear), and the semantic and pragmatic underpinnings of narrative progression (Klein 2009, Cumming 2015, Altshuler 2016:§2).

Course materials

The central claim we plan on investigating is the well-known conjecture that the probabilities of conditionals are the conditional probabilities of the consequent given the antecedent. This claim (sometimes called simply ‘The Thesis’) is discussed extensively in classical papers and has generated recent work (e.g. Kaufmann, Bacon, Charlow) that is informed by mainstream theories in formal semantics. Some of the questions we plan on pursuing are the following (the Thesis is rendered in symbols after the list):

i. What empirical/experimental evidence is there for the claim that probabilities of conditionals are conditional probabilities?
ii. To what extent, if at all, can standard truth conditional frameworks vindicate the Thesis?
iii. How do conditional semantics that are designed to vindicate the Thesis (e.g. van Fraassen’s, Kaufmann’s) compare to more traditional semantics for conditionals?
iv. Can expressivistic semantics for epistemic modals in the style of Yalcin, Swanson, Moss (and many others) help vindicate the Thesis?
v. What is the import of the foregoing for counterfactual conditionals?
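In symbols (a standard formulation): where → is the conditional connective under discussion, the Thesis says that

\[
P(A \rightarrow C) \;=\; P(C \mid A) \;=\; \frac{P(A \wedge C)}{P(A)} \qquad \text{whenever } P(A) > 0.
\]

Lewis's classic triviality results show that no one conditional connective can satisfy this equation relative to every probability function without trivializing the language, which is part of what makes questions (ii)-(iv) above pressing.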

One of our main goals is integrating methodologies from the philosophical logic and formal semantics literature, while also keeping an eye on relevant psychological research. We plan on presenting proofs in a way that is rigorous and accessible, while at the same time stating the relevant formal theories in frameworks that are fully integrated with contemporary research in semantics.

Course materials

Explorations

Probabilistic systems for machine translation, summarization, and other applications are effective at many things, but often fail to preserve the compositional meaning of language. To preserve meaning, they must model meaning. As fuel for such models, several corpora have recently been annotated with meaning representations in the form of directed acyclic graphs (DAGs). How can we model these graphs in a probabilistic system?
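For instance (a toy, AMR-flavored illustration; node and role labels are schematic), "The boy wants to go" is naturally a DAG rather than a tree, because the boy is a shared argument of both wanting and going:

    # A toy meaning-representation graph encoded as labeled edges; node 'b'
    # is re-entrant (a shared argument), which makes the structure a DAG.
    nodes = {'w': 'want-01', 'b': 'boy', 'g': 'go-01'}
    edges = [('w', 'ARG0', 'b'),   # the wanter is the boy
             ('w', 'ARG1', 'g'),   # what is wanted: the going
             ('g', 'ARG0', 'b')]   # the goer is the same boy

    for src, role, tgt in edges:
        print(f'{nodes[src]} --{role}--> {nodes[tgt]}')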

This course will introduce models of graphs that are being actively researched as possible answers to this question. Since trees and strings are often modeled with context-free and regular languages, we will survey context-free and regular models of graphs. We will cover context-free graph languages and show how they are modeled by hyperedge replacement grammar. We will cover regular graph languages and show how they are modeled by monadic second-order logic, and how this relates to the finite automata that are typically used to model regular string and tree languages. At each step we will highlight the strengths and weaknesses of these formalisms as models of meaning representations, and we will conclude with a discussion of models that inherit desirable properties of both context-free and regular graph languages.

Course materials

In the epistemology of science and inductive inference, the first formal tools that come to mind are logic and probability theory. But much of what is going on is fundamentally topological. The context of scientific inquiry specifies what kind of empirical information can be expected, which in turn generates a topology over possible worlds. Fundamental concepts such as verifiability and refutability from data, convergence to the truth, Ockham's razor, Hume's problem of induction, and Popper's problem of irrefutability are all straightforwardly topological, and many standard topological concepts have natural epistemological interpretations in the philosophy of science. Topology thus sheds new light on traditional problems in the philosophy and methodology of science.

In this course, we will introduce topological learning theory and its applications in the philosophy of science. It will be explained how topological complexity captures the intrinsic difficulty of scientific inference problems, which opens the door to means-ends optimality arguments for the justification of scientific methods. Ockham’s razor, the characteristic scientific bias toward simpler, more testable, or more unified theories, will be developed in detail as a case in point. Simplicity and Ockham’s razor will be defined topologically, and it will be shown, using topological methods, that Ockham’s razor finds the truth in the most direct possible manner—without circular appeal to a metaphysical assumption that nature is simple. Time permitting, we will also cover how topological learning theory applies in statistical and causal inference.
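One correspondence at the heart of this approach, stated compactly (the course develops it carefully): if the topology on possible worlds is generated by the finite information states compatible with each world, then

\[
H \text{ is verifiable} \iff H \text{ is open},
\qquad
H \text{ is refutable} \iff H \text{ is closed},
\qquad
H \text{ is decidable} \iff H \text{ is clopen}.
\]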

Course materials

Following the introduction of type-theoretic methods into natural language semantics (locus classicus Montague 1973), both formal natural language semantics and type theory have emerged as diverse fields of inquiry. Though simple type theory has maintained enduring popularity as a foundation for natural language semantics, further exchange of ideas between these two fields has historically been limited. Recent years, however, have seen a renewal of this exchange, in particular with the application to semantics of modern type-theoretic tools such as dependent types (from Ranta 1994), monads (from Barker 2002, Shan 2002), and comonads (from Awodey et al. 2015). This workshop will further the understanding of these tools and the development of their valuable linguistic applications.

Monads (respectively comonads) are a tool for seamlessly enriching semantic representations with extra outputs (respectively inputs). Such enrichments are integrated modularly on top of a core compositional semantics. These ‘extra outputs’ may naturally include modifications to discourse or common ground representations. This underlies the monadic approach to dynamic semantics (Charlow 2014). Monadic methods have furthermore been used for an influential analysis of quantifier scope ambiguities (Barker 2002). As for ‘extra inputs’, an important case is provided by intensional phenomena. These are modeled with semantic representations which take in ‘intensionalized’ inputs, and are thus susceptible to a comonadic analysis (Awodey et al. 2015, Zwanziger 2017).
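In bare type-theoretic terms (standard definitions, included only for orientation, not a summary of the talks): a monad is a type constructor T equipped with operations

\[
\eta_A : A \to TA
\qquad\text{and}\qquad
\mathit{bind} : TA \to (A \to TB) \to TB,
\]

satisfying unit and associativity laws; the 'extra outputs' live inside T (e.g., the writer monad TA = A × O, for a monoid O of outputs). Dually, a comonad W comes with ε_A : WA → A and extend : (WA → B) → (WA → WB), and a comonadically enriched meaning of type WA → B consumes an 'extra input' alongside its argument (e.g., the product comonad WA = S × A pairs each value with an input of type S).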

Dependent types greatly enrich the expressive power of type theory, and have found diverse applications in semantics as a result. A key goal for the workshop is to identify and discuss common themes among these applications. The applications addressed in detail will include tracking anaphoric dependencies and presuppositions in discourse (Bekki 2014, 2017; Grudzinska, Zawadowski 2014, 2017), modeling lexical phenomena such as selectional restrictions and coercions (Luo 2012; Luo, Chatzikyriakidis 2017), and adjectival/adverbial modification (Luo, Chatzikyriakidis 2017). Finally, dependent types are implemented in proof assistants such as Agda, Coq, and Lean, providing a ready-made framework for computational semantics.

Program

Day 1 (Tutorials):
Welcome
Dependent Types Tutorial (30 min)—Zawadowski (Warsaw)
Formal Semantics in Modern Type Theories (MTTs) (30 min)—Chatzikyriakidis (Gothenburg)

Day 2 (Tutorials):
Intro. to Dependent Type Semantics (40 min)—Bekki (Ochanomizu)
Intro. to Monads and Comonads (30 min)—Awodey (CMU)

Day 3 (Tutorials):
Effectful Composition in Natural Language Semantics—Charlow (Rutgers) & Bumford (UCLA)

Day 4 (Talks):
Scope in Natural Language: Why Monads aren't Enough (40 min)—Barker (NYU)
Tracking Anaphors and Taking Scopes with Dependent Types (30 min)—Grudzinska (Warsaw)

Day 5 (Talks):
Intensionality is Comonadic (40 min)—Zwanziger (CMU)
Formal Semantics in MTTs: Playing Around with the Coq Proof Assistant (30 min)—Chatzikyriakidis (Gothenburg)

Invented languages (conlangs) have been proposed to unify humanity, bring rigor to thought, and enhance fictional experiences. Conlangs are playing an increasing role in science fiction and fantasy, with ever higher standards for everything from phonetic inventory to pervasive cognitive metaphors. The bar has been raised in the entertainment industry such that conlangs are fully developed by linguists, complete with reference grammars.

In this course, we will address conlangs as a form of creative expression for linguists. Students will create small conlangs focusing on research areas of their choice. The underlying insight is that languages carve up semantic and pragmatic spaces differently and grammaticalize them differently. In order to create a truly innovative language that is not isomorphic to an existing language, students will use their knowledge of the intricacies of a semantic/pragmatic phenomenon and how it is typically grammaticalized in real languages, and then remix it into a novel grammaticalization, perhaps creating a fictional speech community whose culture and cognition are reflected in the new language. Phenomena that can be chosen as foci for this course include referentiality, definiteness, modality, conditionals, quantification, negation, possession, information structure, speech acts, conventional metaphors, lexicalization, argument realization, comparison, and many others. Structural aspects of language (phonology, morphology, and syntax) are also fair game for creativity.

The format of this course will be interactive and hands-on. It will probably be limited to fewer than 15 students. Each student (or small group) will produce (1) a short grammar sketch, (2) a corpus of sentences with interlinear glosses, literal paraphrases, and English translations, and (3) a short narrative to perform.

Course materials
