Logic and Epistemology
Logic and Probability - Thomas Icard and Krzysztof Mierzewski
Logic and probability are related in many important ways: probability can in some sense be seen as a generalization of logic; many logical systems have natural probabilistic semantics; logic can be used to reason explicitly about probability; and logic and probability can be combined into systems that capitalize on the advantages of each. In this course we will present and discuss some of the most important ideas and results covering these various contacts between logic and probability, including (1) the view of probability as extended logic and foundations based on comparative probability, (2) probabilities defined on expressive logical languages, (3) default reasoning, acceptance rules, and the quantitative/qualitative interface, (4) statistical relational models and foundations of probabilistic programming, and (5) zero-one laws, random structures, and almost-sure theories. Along the way we will draw connections to research topics in philosophy, computer science, mathematics, linguistics, statistics, and cognitive psychology.
While no specific background will be assumed, we will expect that students have experience in logic (including basic metatheory such as completeness and compactness) and some previous acquaintance with probabilistic reasoning. Specific tools from model theory, measure theory, and other areas that may be less familiar will be introduced as needed. Graduate students working in any of the fields mentioned above ought to be suitably prepared for the course.
Epistemic Logic - Rineke Verbrugge
Logics for Epistemic and Strategic Reasoning in Multi-Agent Systems - Valentin Goranko
A multi-agent system is a generic concept encompassing a diverse range of phenomena in modern society, such as computer or social networks, robotic teams, markets, etc. Conceptually, a multi-agent system (MAS) involves several 'agents' that interact autonomously and intelligently with the environment and with each other, by exchanging information, planning and executing actions and strategies in pursuit of individual or group goals.
Logical approaches and methods are becoming increasingly popular and important in the modeling and analysis of MAS, and a variety of rich logical frameworks have been introduced for capturing various aspects of MAS, including knowledge, communication, and strategic reasoning.
This course is intended for a broad audience with basic knowledge of classical and modal logics. I will first introduce and discuss some of the most important and popular families of logics for reasoning about knowledge in multi-agent systems (MAS), including multi-agent epistemic and dynamic epistemic logics. Then I will focus on a variety of logics for strategic reasoning in MAS: with complete and with incomplete or imperfect information; with bounded or perfect memory of players; with dynamically changing strategy contexts; with constructive strategies, etc. Finally, I will present a logical framework for capturing combined qualitative-quantitative reasoning and will discuss some applications to multi-player games.
The emphasis of the course will be on conceptual understanding of the languages and semantics of these logics, and on applying them to model and specify properties of MAS. I will illustrate these with several popular scenarios, including some epistemic puzzles and multi-player games. The main logical decision problems, such as algorithmic model checking and validity/satisfiability testing, and some related technical results for these, will be mentioned but not discussed in any detail.
Logics for Formal Epistemology - Alexandru Baltag and Sonja Smets
The nature of ‘knowledge’, and what makes it different from mere belief, is what drives most studies in epistemology. Recent advances in the area show how logic can shed new light on a number of important epistemological questions, including the formalization of classical epistemological conceptions; the treatment of epistemic paradoxes; the issue of tracking the truth by individual belief revision; and the successes and failures of group knowledge and belief aggregation.
We focus on formal approaches to knowledge representation, belief revision and interactive learning. We present, compare and relate various models for belief and knowledge, using methods developed in a number of fields ranging from Belief Revision Theory, Epistemic Logic, Formal Learning Theory and Game Theory. In contrast to more traditional approaches to epistemic logic, our presentation is not confined to Kripke models, but covers in addition more sophisticated representations of information: evidence models (based on topological-neighborhood semantics), models for justification and arguments (based on awareness structures and justification logic), as well as probabilistic models.
This course is addressed to students and researchers interested in how logic can be put to use for the study of epistemological questions. We presuppose some elementary background knowledge in Logic, in particular propositional logic and the basics of first-order logic. Some familiarity with the syntax and semantics of modal logic would be helpful, though not required. More importantly, we assume that participants possess some degree of mathematical maturity (as can be expected from graduate students) and a live interest in philosophical and interdisciplinary applications.
Beyond “knowing that”: a new generation of epistemic logics - Yanjing Wang
Epistemic logic is a major field of philosophical logic studying reasoning patterns about knowledge. Despite its various applications in epistemology, theoretical computer science, AI, and game theory, technical developments in the field have mainly focused on the propositional part, i.e., the propositional modal logics of "knowing that". However, knowledge is also expressed in natural language by "knowing whether", "knowing what", "knowing how", "knowing why" and so on (know-wh hereafter). Recent years have witnessed a growing interest in non-standard epistemic logics of know-wh motivated by questions in philosophy, AI, and linguistics. Inspired by linguistic discussions on the semantics of questions, the new epistemic modalities introduced in those logics often share, in their formal semantics, the general schema of ‘exists x K phi’ (where K is the knowledge modality). The axioms of those logics intuitively capture the essential interactions of know-that and other know-wh operators, and the resulting logics are decidable.
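To make the ‘exists x K phi’ schema concrete, consider the ‘knowing what’ case (the notation Kv below is one common choice, used here for illustration): a "knowing what" modality can be defined by letting an existential quantifier scope over the knowledge operator.

```latex
\mathit{Kv}\,c \;:=\; \exists x\, K\,(c = x)
% e.g. knowing what the password is:
\exists x\, K\,(\mathit{password} = x)
% which entails, but is not entailed by, merely knowing that a password exists:
K\, \exists x\,(\mathit{password} = x)
```

The gap between these two readings is exactly what the propositional "knowing that" modality alone cannot express.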
In this course, I will survey recent developments in this new research program on epistemic logics of know-wh and its various connections with existing logics and philosophical/AI/linguistic questions in the literature. Inspired by those logics, we will also discuss a very general and powerful framework based on a predicate language extended by new modalities which pack a quantifier and a modality together. We show that the resulting logic, though more expressive, shares many good properties of the basic propositional modal logic, such as the finite-tree-model property. This may also pave the way to the discovery of new decidable fragments of first-order modal logic.
Logic and Computation
Machine Learning - Matt Gormley
Information Theory - Mathias Winther Madsen
Proof Interpretations: a modern perspective - Anupam Das, Thomas Powell
In the 1930s, in the midst of the foundational crisis of mathematics, David Hilbert introduced his metamathematical program, aiming to reduce mathematics to a formal axiomatic system whose consistency could be proven using 'finitary' methods. Gödel's incompleteness theorems famously confirmed that Hilbert's program, in its strictest sense, is impossible to achieve. Less recognized are Gödel's attempts to overcome his own obstacle and tackle Hilbert's program in broader terms. Over the following decades, Gödel developed his Dialectica interpretation, which reduced Peano arithmetic to a quantifier-free system of higher-order functionals, System T, allowing the consistency of arithmetic to follow from that of T.
During the 1950s, Kreisel observed that proof interpretations like Gödel's could also be viewed from another angle: as tools for extracting concrete witnesses from existential statements. In the following years these insights were developed further, and from the 1990s onwards the formal extraction of programs from proofs truly took off thanks to the 'proof mining' program led by Kohlenbach. As a result of this fascinating re-orientation of proof theory, there has been a resurgence of interest in proof interpretations in the last 20-30 years, inspiring research from a range of different perspectives.
The aim of this course is to provide an introduction to proof interpretations up to the point where some of their many and varied roles in modern research can be understood and appreciated. We will assume a basic familiarity with the proof theory of first-order logic, but intend to frame the course so that it is accessible and interesting for mathematicians, philosophers, linguists and computer scientists alike.
Hybrid Logic - Patrick Blackburn
Logic for natural language, logic in natural language - Larry Moss
This is a course on surface reasoning in natural language. The overall goal is to study logical systems which are closer to natural language than to first-order logic. Most of the systems are complete and decidable. Although the class will present a lot of technical material, most of the arguments are elementary. One needs to be comfortable with proofs as in ordinary mathematics, but only a small logic background is technically needed to follow the course.
Specific topics include: extended syllogistic logics; logics including verbs, relative clauses, and relative size quantifiers; the limits of syllogistic logic; monotonicity calculi; algorithms, complexity, and computer implementations.
The course will have a small amount of daily homework to help people learn. It also may involve running computer programs which carry out proof search and model building. These programs will be Jupyter notebooks that should be usable with a minimum of set-up.
The topic of natural logic lends itself to philosophical reflections on the nature of semantics. It has connections to classic topics in philosophy and to contemporary work in natural language processing. For NASSLLI students with a logic background, the course will offer many completeness theorems and connections to topics such as: fragments of first-order logic, combinatorics, and the typed lambda calculus. The class will suggest many active research areas to semanticists, logicians, and computer scientists.
Proof Theory: logical and philosophical aspects - Shawn Standefer and Greg Restall
This course will provide an overview of the techniques and applications of sequent systems. We will begin in Lecture 1 with the basics of sequent systems and sketch a proof of the central result in the area, the Cut Elimination Theorem, and go over some of its consequences for classical and intuitionistic logic.
In Lecture 2, we motivate some non-classical systems obtained by dropping some of the basic structural rules. We will look at some substructural logics of interest to philosophers (relevant logics, non-contractive logics), computer scientists (linear logic), and linguists (Lambek calculus).
Sequent systems are flexible, but there are some surprising hurdles in giving adequate sequent systems for many modal logics, including familiar logics such as S5. In Lecture 3, we provide an overview of these problems and then turn to three generalizations of the basic sequent structure that get around these problems.
For Lecture 4, we shift focus from propositional connectives to quantifiers, presenting rules for the classical existential and universal quantifiers. We present a natural generalization of the basic sequent structure that can be used to give a sequent system for free logics and discuss applications to debates in metaphysics and modal logic.
In the final lecture, we will discuss semantic features of sequent systems. We will present background on inferentialism, the view that rules give the meaning of logical vocabulary, and consider Arthur Prior's tonk objection. We close out the course by showing how sequent systems can be used to construct counter-models for invalid arguments, relating these constructions to bilateralism, a view about how sequents relate to the speech acts of assertion and denial.
This course will not presuppose much logic. Familiarity with classical propositional logic is advised. No prior exposure to sequent systems will be presupposed, and no prior background in non-classical logics or modal logics is necessary.
Introduction to linguistics data analysis using R - Seth Wiener
Corpus linguistics with python and NLTK - Na-Rae Han
Gries and Newman (2013) list four main modes of using corpora, the last of which involves having corpora on one's hard drive and using general-purpose programming languages to process, manipulate, and search files. This method has the advantage of being more versatile and powerful than any ready-made corpus exploration software. Its main hurdle has been the learning curve associated with picking up a programming language; with the popularization of Python, a rigorous yet beginner-friendly language, and the ready-to-use tools offered by widely adopted libraries such as NLTK (Natural Language ToolKit), this method has become more readily accessible for linguists looking to adopt corpus-linguistic methods in their research.
This course is a gentle introduction to corpus linguistics using Python and NLTK. Participants will learn (1) the very basics of the Python programming language, (2) foundations of text processing methods in Python, and (3) NLTK's corpus-processing functions and modules. They will also have an opportunity to work with a corpus of their choice. We will practice common corpus investigation techniques such as concordancing, frequency list compilation, collocation, and n-gram list comparison.
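As a flavor of the techniques above, frequency lists and n-gram lists boil down to counting tokens and adjacent token pairs. The sketch below uses plain Python on a made-up toy sentence; NLTK wraps the same ideas in `nltk.FreqDist` and `nltk.bigrams`, and handles real tokenization.

```python
from collections import Counter

# Toy stand-in for a corpus file a participant would load
text = "the cat sat on the mat and the dog sat on the rug"
tokens = text.split()  # NLTK's word_tokenize handles real punctuation

# Frequency list: how often each token occurs
freq = Counter(tokens)

# Bigram list: counts of adjacent token pairs
bigrams = Counter(zip(tokens, tokens[1:]))

print(freq.most_common(3))    # [('the', 4), ('sat', 2), ('on', 2)]
print(bigrams.most_common(2))
```

Concordancing (keyword-in-context display) and collocation scoring build on exactly these counts.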
Prior knowledge of programming is not required. Participants are required to install the necessary software on their own laptops, which they should bring to every class meeting. A non-mobile OS is required to run the software: Windows, OS X (Mac) and Linux are good choices, while iOS (iPads) and Chrome OS are not suitable. Software installation instructions will be emailed a few days before the beginning of the course.
Computational pragmatics - Judith Degen
Pragmatics was once thought of as the wastebasket of linguistics: as the caricature went, phenomena that were too complex to handle in the semantics were pushed to the mushy pragmatics, where they were dispatched with hand-wavy just-so stories. Recent developments in cognitive science have led pragmatics to a new period of maturation, facilitated by two important factors: a) the novel application of mathematical modeling techniques, and b) access to rich experimental data. Advances in probabilistic and game-theoretic models that treat pragmatic inference as a problem of social reasoning under uncertainty have yielded testable quantitative predictions about the outcome of many different kinds of pragmatic inference. The phenomena that these types of models have been successfully applied to include scalar implicature, ad hoc Quantity implicatures, M-implicatures, gradable adjectives, and hyperbole, among many others (for a review, see Goodman & Frank, 2016).
The course will introduce students to models of pragmatics that employ probabilistic inference to explain both utterance interpretation and production choices for a variety of phenomena. The basics of fitting experimental data to probabilistic cognitive models will be explained on the basis of case studies of increasing complexity from the recent literature. Students will learn to modify and build their own computational models within the Rational Speech Act framework using the probabilistic programming language WebPPL.
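The core recursion of the Rational Speech Act framework can be sketched in a few lines. The course uses WebPPL; the plain-Python version below is only illustrative, applied to a hypothetical two-utterance scalar-implicature example with a uniform world prior and speaker rationality alpha = 1.

```python
# Worlds and utterances for "some" vs. "all" (scalar implicature)
worlds = ["some-not-all", "all"]
utterances = ["some", "all"]
# Literal semantics: "some" is true in both worlds, "all" only in "all"
meaning = {"some": {"some-not-all": 1, "all": 1},
           "all":  {"some-not-all": 0, "all": 1}}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Literal listener L0: condition a uniform prior on literal truth
def L0(u):
    return normalize({w: meaning[u][w] for w in worlds})

# Pragmatic speaker S1: picks utterances in proportion to how well
# L0 recovers the true world (alpha = 1, no utterance costs)
def S1(w):
    return normalize({u: L0(u)[w] for u in utterances})

# Pragmatic listener L1: Bayesian inversion of the speaker
def L1(u):
    return normalize({w: S1(w)[u] for w in worlds})

print(L1("some"))  # "some" strengthened toward "some but not all"
```

Here L1 assigns probability 0.75 to the "some but not all" world: the implicature falls out of the listener reasoning about why the speaker avoided the stronger "all".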
Computational morphology - Kemal Oflazer
Computational morphology aims at developing formalisms and algorithms for the computational analysis and synthesis of word forms for use in language processing applications. The last couple of decades have seen significant activity in developing techniques and tools for building morphological processors for a variety of languages. As natural language processing applications expand to lesser-studied languages, many of which have complex word structures, it is evident that one of the first resources researchers working on such languages should build is a wide-coverage morphological processor that deals with all kinds of real-world engineering issues such as unknown words, non-alphabetic tokens (with morphology!), proper nouns, foreign words, etc.
This course will start with an overview of the basics of morphology and then discuss goals of, and approaches to, computational morphology. It will then present the formalism of finite state transducers and regular relations, the underlying technology for state-of-the-art morphological processing approaches.
We plan to cover both two-level morphology and the cascaded replace-rule approach to morphographemics, with numerous examples of lexicon structure/morphotactics and morphographemic alternation phenomena from a variety of languages. We will then discuss issues in developing an industrial-strength morphological processor for Turkish, a morphologically very complex agglutinative language, presenting issues such as dealing with over-generation, foreign words, numbers, acronyms, multi-words, etc.
The course will also review recent work on using machine learning techniques, ranging from learning to automatically segment words into morphemes, to generating word forms with arbitrary morphological features (reinflection) from limited data about a language's morphology.
Language modeling with tree-adjoining grammars - Kata Balogh and Simon Petitjean
This course provides an introduction to the Tree-Adjoining Grammar (TAG) formalism (Joshi and Schabes, 1997), in particular Lexicalized Tree-Adjoining Grammar (LTAG), together with grammar implementations and tools for parsing with TAG: TuLiPA and XMG. With this course we would like to introduce TAG and show its importance in language modeling.
During the course we will discuss syntactic and semantic analyses using LTAG for natural language phenomena such as long-distance dependencies, clausal complements and the analysis of raising and control, scope ambiguity and scrambling. Alongside the theoretical work, two applications will be discussed in detail: the Tübingen Linguistic Parsing Architecture (TuLiPA; sourcesup.renater.fr/tulipa/) and the eXtensible MetaGrammar (XMG; http://xmg.phil.hhu.de/), illustrating how those TAG analyses can be implemented, tested and used.
The TAG formalism is one of the mildly context-sensitive grammar formalisms, widely used in modeling natural languages. The adequacy of language modeling with TAG is shown by a range of grammar implementations and tools (e.g. XTAG, TuLiPA, XMG). Its importance in computational linguistics is shown by sustained and successful research on both the theoretical and the practical side: without attempting a full listing of recent works, see e.g. linguistic analysis using TAG in Abeillé and Rambow (2000), the syntax-semantics interface by Kallmeyer and Joshi (2003) and Kallmeyer and Romero (2008), discourse processing by Webber (2004), grammar formalisms by, e.g., Shieber and Schabes (2006), wide-coverage grammar implementations by the XTAG project (XTAG Research Group, 2001), and tools for parsing with TAG: TuLiPA (Parmentier et al., 2008) and XMG (Crabbé et al., 2013; Kallmeyer et al., 2016).
Low resource techniques in NLP - David R. Mortensen and Yulia Tsvetkov
State-of-the-art natural language processing (NLP) tools, such as text parsing, speech recognition and synthesis, text and speech translation, semantic analysis and inference, rely on the availability of language-specific data resources. Such data resources are expensive, and they typically exist only for English and a few other geopolitically or economically important languages, the so-called "resource-rich languages". Most of the 7,000+ languages in the world—many with millions of speakers—are resource-poor from the standpoint of NLP.
To make NLP tools available in more languages, techniques have been developed to leverage prior linguistic knowledge and learn from just a few examples, or to project data resources from resource-rich languages using parallel (translated) data as a bridge. Yet, the challenge of building accurate NLP tools for low-resource languages remains, especially with the increasing access to information technologies from all over the world on the one hand, and the rise of data-hungry deep learning approaches on the other.
The goal of this course is to introduce the problem of doing NLP with limited data resources and to present the current state of the art in this area. We will present an overview of fundamental problems in low-resource NLP, of existing resources, and of key approaches that explore low-resource scenarios.
Semantics and Pragmatics
Formal semantics and pragmatics, and their origins in philosophy - Richmond Thomason and Zoltan Szabo
This course will introduce the concepts and methods of formal semantics and pragmatics, and say something about their origins in philosophy and linguistics.
Sessions will deal with the following topics:
- Truth-Conditions and Compositionality
- Reference and Quantification
- Time and Modality
- Intensions and mental attitudes
- Context and Content
- Common Ground and Conversational Update
- Implicature and figurative speech
Introduction to Pragmatics - Bart Geurts
There are two main ways of viewing human communication. Suppose that Fred promises Wilma to do the dishes by saying, “I’ll do the dishes.” One way of looking at Fred’s utterance is that it serves to convey Fred’s intention to do the dishes. On this construal, understanding speech acts requires that we take a psychological stance, for their purported purpose is to express what’s on the speaker’s mind. The other way is to adopt a social stance and view Fred’s promise as a means of modifying his relationship with Wilma: as a result of saying, “I’ll do the dishes”, Fred becomes responsible to Wilma for doing the dishes.
The opposition between psychological and social approaches to communication is the recurrent theme of this course, which focuses on three key topics in pragmatics: speech acts, common ground, and conversational implicatures. In each of these cases, psychological and social approaches will be compared, and ecumenical approaches considered. Among the topics discussed along the way will be presupposition, definite and indefinite reference, linguistic conventions, and the pragmatics of word meaning.
An opinionated guide to predicates of personal taste - Natasha Korotkova and Pranav Anand
This course is a focused examination of predicates of personal taste (PPTs) such as "tasty" and "fun"—the empirical discoveries, the theoretical landscape, their connection with subjective language. Recent work in formal semantics and philosophy of language has shown that the linguistic behavior of PPTs differs from that of other predicates (OPs) such as "round" and "popular". We will address the nature of the PPT-OP distinction through the following basic questions:
— Is the distinction categorical (as is often assumed in the theoretical literature) or gradient (as subjectivity is treated in the literature on sentiment analysis)?
— Are PPTs special because of the semantics, the pragmatics, or the epistemology and psychology of taste?
— Are other predicates involving judgment—aesthetic, moral, value—also PPTs? What is a reliable diagnostic across conceptual domains?
The course is structured in three parts. The first two days comprise a primer on the most-discussed empirical questions and will present a taxonomy of existing theories. Specifically, we will talk about the truth-evaluability of PPTs, their conversational behavior, and the normativity that differentiates PPTs from other expressions used to describe perception.
The next part focuses on less-studied puzzles. Day 3 will discuss overt tasters, introduced by "for" and "to", and multiple perspectives made available in questions and attitude reports, but constrained within one sentence by general rules governing the interpretation of noun phrases. Day 4 will turn to the source of the direct experience requirement associated with PPTs and its relation to evidentiality, as well as similar requirements imposed by psychological predicates and dispositional generics. We will also talk about the nature of predicates like "find" that ban OPs in their complements.
Finally, on Day 5, we will examine the cognitive science perspective on taste attribution. We will focus on the philosophical literature on perceptual attribution and personal epistemology, as well as on neurophysiological research on aesthetic judgment.
Integrating compositional semantics and event semantics - Lucas Champollion and Maria Esipova
Semantics and pragmatics of temporal sequencing - Pranav Anand and Maziar Toosarvandani
Discourses mean more than just the sentences they are composed of. A sequence of past tense sentences can, for instance, be interpreted as forward moving in time (narrative progression) or as backshifted. This course will explore the source of these temporal inferences: Are they conventionalized, or do they arise from (possibly domain general) principles of conversation? Does narrative progression have the same source as backshifting? Do lexical and grammatical expressions constrain the possibilities for these inferences, and if so how?
Over five days, students will be introduced to existing accounts for narrative progression and backshifting, as well as to related phenomena that have not been systematically investigated in the past. Students will also be exposed to a range of corpora annotated for discourse and temporal relations (e.g., PDTB, ANNODIS), using them to shed new light on these questions.
More specifically, the course will cover the basics of temporal and event structure, reference time (Partee 1984, Hinrichs 1986, Dowty 1986, Webber 1988) and discourse relational (Kehler 2002, Asher and Lascarides 2003, Altshuler 2016:§3) theories of narrative progression and backshifting, varied uses of tense and their interaction with these temporal inferences (Sharvit 2008, Anand and Toosarvandani, to appear), and the semantic and pragmatic underpinnings of narrative progression (Klein 2009, Cumming 2015, Altshuler 2016:§2).
Probabilities of conditionals in modal semantics - Paolo Santorio and Justin Khoo
The central claim we plan on investigating is the well-known conjecture that the probabilities of conditionals are the conditional probabilities of the consequent given the antecedent. This claim (sometimes called simply ‘The Thesis’) is discussed extensively in classical papers and has generated recent work (e.g. Kaufmann, Bacon, Charlow) that is informed by mainstream theories in formal semantics. Some of the questions we plan on pursuing are:
i. What empirical/experimental evidence is there for the claim that probabilities of conditionals are conditional probabilities?
ii. To what extent, if at all, can standard truth conditional frameworks vindicate the Thesis?
iii. How do conditional semantics that are designed to vindicate the Thesis (e.g. van Fraassen’s, Kaufmann’s) compare to more traditional semantics for conditionals?
iv. Can expressivistic semantics for epistemic modals in the style of Yalcin, Swanson, Moss (and many others) help vindicate the Thesis?
v. What is the import of the foregoing for counterfactual conditionals?
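Stated formally, the Thesis at issue in questions i–v identifies the probability of an indicative conditional with the corresponding conditional probability:

```latex
P(A \rightarrow C) \;=\; P(C \mid A) \;=\; \frac{P(A \wedge C)}{P(A)}
\qquad \text{whenever } P(A) > 0.
```

Lewis's triviality results show that no single proposition expressed by the conditional can satisfy this equation across all rational credence functions P, which is part of what motivates the non-classical and expressivist approaches raised in questions ii–iv.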
One of our main goals is integrating methodologies from the philosophical logic and formal semantics literature, while also keeping an eye on relevant psychological research. We plan on presenting proofs in a way that is rigorous and accessible, while at the same time stating the relevant formal theories in frameworks that are fully integrated with contemporary research in semantics.
Graph formalisms for meaning representations - Sorcha Gilroy and Adam Lopez
Probabilistic systems for machine translation, summarization, and other applications are effective at many things, but often fail to preserve the compositional meaning of language. To preserve meaning, they must model meaning. As fuel for such models, several corpora have recently been annotated with meaning representations in the form of directed acyclic graphs (DAGs). How can we model these graphs in a probabilistic system?
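To see why such meaning representations are DAGs rather than trees, consider the standard AMR example "The boy wants to go": the boy node is shared between the wanter role and the goer role, a reentrancy that trees cannot represent directly. A toy encoding as labeled edges (the node names and role labels follow AMR conventions, but the graph-as-edge-list representation is just an illustration):

```python
from collections import Counter

# AMR-style graph for "The boy wants to go", as labeled edges;
# node "b" has two incoming role edges, so this is a DAG, not a tree.
edges = [
    ("w", "instance", "want-01"),
    ("w", "ARG0", "b"),          # the wanter
    ("w", "ARG1", "g"),
    ("g", "instance", "go-01"),
    ("g", "ARG0", "b"),          # the goer -- the same node: reentrancy
    ("b", "instance", "boy"),
]

# Any node with in-degree > 1 (ignoring instance labels) is reentrant
in_degree = Counter(tgt for _, role, tgt in edges if role != "instance")
reentrant = [n for n, d in in_degree.items() if d > 1]
print(reentrant)  # ['b']
```

Handling such reentrancies is precisely where graph grammars and automata diverge from their string and tree counterparts.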
This course will introduce models of graphs that are being actively researched as possible answers to this question. Since trees and strings are often modeled with context-free and regular languages, we will survey context-free and regular models of graphs. We will cover context-free graph languages and show how they are modeled by hyperedge replacement grammar. We will cover regular graph languages and show how they are modeled by monadic second-order logic, and how this relates to the finite automata that are typically used to model regular string and tree languages. At each step we will highlight the strengths and weaknesses of these formalisms as models of meaning representations, and we will conclude with a discussion of models that inherit desirable properties of both context-free and regular graph languages.
Topological epistemology of science - Kevin Kelly and Konstantin Genin
New type-theoretic tools in natural language semantics - Steve Awodey, Justyna Grudzinska, Marek Zawadowski, and Colin Zwanziger
Following the introduction of type-theoretic methods into natural language semantics (locus classicus Montague 1973), both formal natural language semantics and type theory have emerged as diverse fields of inquiry. Though simple type theory has maintained enduring popularity as a foundation for natural language semantics, further exchange of ideas between these two fields has historically been limited. However, recent years have renewed this exchange, in particular seeing the application to semantics of modern type-theoretic tools such as dependent types (from Ranta 1994), monads (from Barker 2002, Shan 2002), and comonads (from Awodey et al. 2015). This workshop will bring together interested linguists and logicians to further the understanding of these tools and the development of their valuable linguistic applications.
Monads (respectively comonads) are a tool for seamlessly enriching semantic representations with extra outputs (respectively inputs). Such enrichments are integrated modularly on top of a core compositional semantics. These ‘extra outputs’ may naturally include modifications to discourse or common ground representations. This underlies the monadic approach to dynamic semantics (Charlow 2014). Monadic methods have furthermore been used for an influential analysis of quantifier scope ambiguities (Barker 2002). As for ‘extra inputs’, an important case is provided by intensional phenomena. These are modeled with semantic representations which take in ‘intensionalized’ inputs, and are thus susceptible to a comonadic analysis (Awodey et al. 2015, Zwanziger 2017).
Dependent types greatly enrich the expressive power of type theory, and have found diverse applications in semantics as a result. A key goal for the workshop is to identify and discuss common themes between these. The applications addressed in detail will include tracking anaphoric dependencies and presuppositions in discourse (Grudzinska, Zawadowski 2014, 2017; Bekki 2014, 2017), modeling lexical phenomena (e.g. selection restriction and coercions, Luo 2012; Luo, Chatzikyriakidis 2017), and modification phenomena (e.g. adjectival/adverbial modification, Luo, Chatzikyriakidis 2017). Finally, dependent types are implemented in proof assistants such as Agda, Coq, and Lean, providing a ready-made framework for computational semantics.
The workshop will consist of tutorials (aimed at the level of graduate students with some acquaintance with type theory) and research-level talks (which will build on the tutorial material).
Conlang playground: from linguistic research to creative fiction - Lori Levin
Invented languages (conlangs) have been proposed to unify humanity, bring rigor to thought, and enhance fictional experiences. Conlangs are playing an increasing role in science fiction and fantasy, with increasingly high standards for everything from phonetic inventories to pervasive cognitive metaphors. The bar has been raised in the entertainment industry such that conlangs are now fully developed by linguists, complete with reference grammars.
In this course, we will address conlangs as a form of creative expression for linguists. Students will create small conlangs focusing on research areas of their choice. The underlying insight is that languages carve up semantic and pragmatic spaces differently and grammaticalize them differently. In order to create a truly innovative language that is not isomorphic to an existing language, students will use their knowledge of the intricacies of a semantic/pragmatic phenomenon and how it is typically grammaticalized in real languages, and then remix it into a novel grammaticalization, perhaps creating a fictional speech community whose culture and cognition are reflected in the new language. Phenomena that can be chosen as foci for this course include referentiality, definiteness, modality, conditionals, quantification, negation, possession, information structure, speech acts, conventional metaphors, lexicalization, argument realization, comparison, and many others. Structural aspects of language (phonology, morphology, and syntax) are also fair game for creativity.
The format of this course will be interactive and hands-on. It should probably be limited to fewer than 15 students. Each student (or small group) will produce (1) a short grammar sketch, (2) a corpus of sentences with interlinear glosses, literal paraphrases, and English translations, and (3) a short narrative to perform.
For questions relating to local information: firstname.lastname@example.org