Causality is a fundamental notion in science and engineering,
and a central problem in the field is how to find the causal structure or the underlying
causal model. One focus of this workshop is causal discovery, i.e.,
how can we discover causal structure over a set of variables from observational data with automated procedures?
Another area of interest is how a causal perspective may help us understand and solve advanced
machine learning problems.
Recent years have seen impressive progress in theoretical and algorithmic
developments of causal discovery from various types of data (e.g., from i.i.d. data, under distribution
shifts or in nonstationary settings, under latent confounding or selection bias, or with missing data),
as well as in practical applications (such as in neuroscience, climate, biology, and epidemiology).
However, many practical issues, including confounding, the large scale of the data, the presence of
measurement error, and complex causal mechanisms, have yet to be properly addressed in order to achieve
reliable causal discovery in practice.
Moreover, causality-inspired machine learning (in the context of transfer learning,
reinforcement learning, deep learning, etc.) leverages ideas from causality to improve generalization,
robustness, interpretability, and sample efficiency, and is attracting growing interest in
Machine Learning (ML) and Artificial Intelligence. Beyond the demonstrated benefits of the causal view in transfer
learning and reinforcement learning, some ML tasks, such as defending against adversarial attacks and
learning disentangled representations, are closely related to the causal view but remain
underexplored, and cross-disciplinary efforts may facilitate the anticipated progress.
This workshop aims to provide a forum for researchers
and practitioners in machine learning, statistics, healthcare, and other disciplines to share their
recent research in causal discovery and to explore opportunities for interdisciplinary collaboration.
We particularly encourage real-world applications of causal discovery methods, such as in neuroscience,
biology, and climate science.
There are two submission tracks: a paper track and a dataset track.
For the paper track, we invite submissions on all topics of causal discovery and causality-inspired ML,
including but not limited to:
- Causal discovery in complex environments, e.g., in the presence of distribution shifts,
latent confounders, selection bias, cycles, measurement error, small samples, or missing data
- Efficient causal discovery in large-scale datasets
- Causal effect identification and estimation
- Real-world applications of causal discovery, e.g., in neuroscience, finance, climate, and biology
- Assessment of causal discovery methods and benchmark datasets
- Causal perspectives on generalizability, transportability, transfer learning, and lifelong learning
- Causally-enriched reinforcement learning and active learning
- Disentanglement, representation learning, and developing safe AI from a causal perspective
Submitted papers should follow the requirements for NeurIPS 2020 submissions. The length of submissions
is flexible but limited to eight content pages, including all figures and tables; additional pages
containing only references are allowed. Please format your submission using the NeurIPS 2020 LaTeX style
file. If needed, authors may additionally submit supplementary material. According to the workshop
guidance provided by the conference, “work that is presented at the main NeurIPS conference should not
appear in a workshop, including as part of an invited talk.”
For the dataset track, we invite submissions of datasets from various fields (e.g., neuroscience, biology,
finance, and climate) that are appropriate for evaluating the performance of causal discovery methods.
Submissions should include (1) the collected dataset (the file or a link is required) and (2) a description
of the dataset in PDF format (using the NeurIPS 2020 LaTeX style file), limited to four pages. The description
should include the “ground truth” causal structure from domain knowledge or experiments, along with the
results of testing at least one causal discovery method on the dataset.
All accepted papers and datasets will be available on the workshop website. At the end of your
submission, please indicate whether you would like an extended version to be
considered for publication in a journal special issue. Based on the feedback
from authors, we will then decide whether to publish selected papers in proceedings or in a journal
special issue.
Key Dates
- Submission: October 14, 2020
- Notification: October 30, 2020
- Camera-ready and slides: November 14, 2020
- Video submission: November 14, 2020
- Workshop: December 11, 2020
Submission Website
Papers and datasets can be submitted through CMT: https://cmt3.research.microsoft.com/CDML2020/.
Double-blind Review
Authors must not include any identifying information (names, affiliations, etc.) or links and
self-references that may reveal their identities.
The organizers aim to provide feedback from three reviewers per submission, who will assess the
submission based on relevance, novelty, and potential impact. Reviewers are asked to rate the
submission (Reject/Borderline/Accept) and to provide written feedback. There will be no additional
rebuttal period.
Orals (Paper ID/Authors/Title):
10. Debarun Bhattacharjya (IBM Research); Karthikeyan Shanmugam (IBM Research NY); Tian Gao (IBM Research); Dharmashankar Subramanian (IBM Research). Structure Discovery in (Causal) Proximal Graphical Event Models.
19. Ignavier Ng (U. Toronto); Sebastien Lachapelle (Mila, Université de Montréal); Nan Rosemary Ke (Mila); Simon Lacoste-Julien (Mila). On the Convergence of Continuous Constrained Optimization for Structure Learning.
40. Tineke Blom (U. Amsterdam); Joris M. Mooij (U. Amsterdam). Robust Model Predictions via Causal Ordering.
45. Ashlynn N Fuccello (U. Pittsburgh); Daniel Yuan (U. Pittsburgh); Panayiotis Benos (U. Pittsburgh); Vineet Raghu (U. Pittsburgh). Improving Constraint-Based Causal Discovery from Moralized Graphs.
Spotlights (Paper ID/Authors/Title):
2. Alexis Bellot (University of Cambridge); Mihaela van der Schaar (UCLA). A Kernel Two-Sample Test for Unbiased Decisions.
4. Shantanu Gupta (CMU); Zachary Lipton (CMU); David Childers (CMU). Estimating Treatment Effects with Observed Confounders and Mediators.
5. Negar Hassanpour (U Alberta); Russell Greiner (U Alberta). Variational Auto-Encoder Architectures that Excel at Causal Inference.
6. Elan Rosenfeld (CMU); Pradeep Ravikumar (CMU); Andrej Risteski (CMU). The Risks of Invariant Risk Minimization.
8. Amir-Hossein Karimi (MPI for Intelligent Systems, Tübingen); Bernhard Schölkopf (MPI for Intelligent Systems); Isabel Valera (MPI for Intelligent Systems). Algorithmic Recourse: from Counterfactual Explanations to Interventions.
9. Masahiro Kato (Cyberagent); Takuya Ishihara (U. Tokyo); Junya Honda (U. Tokyo / RIKEN); Yusuke Narita (Yale University). Efficient Adaptive Experimental Design for Average Treatment Effect Estimation.
11. Yue Yu (Lehigh University); Tian Gao (IBM Research). DAGs with No Curl: An Efficient DAG Structure Learning Approach.
13. Razieh Nabi (JHU); Joseph J Pfeiffer (Microsoft); Murat Bayir (Microsoft); Denis Charles (Microsoft); Emre Kiciman (Microsoft Research). Causal Inference In The Presence of Interference In Sponsored Search Advertising.
14. Zach Wood-Doughty (JHU); Ilya Shpitser (JHU); Mark Dredze (JHU). Sensitivity Analyses for Incorporating Machine Learning Predictions into Causal Estimates.
17. Benjamin Heymann (Criteo); Michel De Lara (Ecole des Ponts ParisTech); Jean-Philippe Chancelier (Ecole des Ponts ParisTech). Causal Inference with Information Fields.
20. Edward De Brouwer (KU Leuven); Adam Arany (KU Leuven); Jaak Simm (KU Leuven); Yves Moreau (KU Leuven). Latent Convergent Cross Mapping.
22. Claudia Shi (Columbia University); Victor Veitch (Google; U. Chicago); David Blei (Columbia University). Invariant Representation Learning for Treatment Effect Estimation.
23. Elias Chaibub Neto (Sage Bionetworks). Towards causality-aware predictions in static anticausal machine learning tasks: the linear structural causal model case.
28. Emily G Saldanha (Pacific Northwest National Laboratory); Robin Cosbey (PNNL); Ellyn Ayton (PNNL); Maria Glenski (PNNL); Joseph A Cottam (PNNL); Karthik Shivaram (Tulane University); Brett Jefferson (PNNL); Brian Hutchinson (Western Washington University); Dustin Arendt (PNNL); Svitlana Volkova (PNNL). Evaluation of Algorithm Selection and Ensemble Methods for Causal Discovery.
32. Benjamin Aubin (CEA Saclay); Agnieszka Słowik (U. Cambridge); Martin Arjovsky (NYU); Leon Bottou (FAIR); David Lopez-Paz (FAIR). Linear unit-tests for invariance discovery.
35. Mehrdad Farajtabar (DeepMind); Andrew Lee (DeepMind); Yuanjian Feng (DeepMind); Vishal Gupta (DeepMind); Peter Dolan (DeepMind); Harish Chandran (DeepMind); Martin Szummer (DeepMind). Balance Regularized Neural Network Models for Causal Effect Estimation.
36. Raanan Y. Rohekar (Intel Labs); Yaniv Gurwicz (Intel Labs); Shami Nisimov (Intel Labs); Gal Novik (Intel Labs). A Single Iterative Step for Anytime Causal Discovery.
41. Paris D. L. Flood (U. Cambridge); Ramon Viñas Torné (U. Cambridge); Pietro Lió (U. Cambridge). Investigating Estimated Kolmogorov Complexity as a Means of Regularization for Link Prediction.
43. Dung Daniel T Ngo (U. Minnesota); Logan Stapleton (U. Minnesota); Vasilis Syrgkanis (Microsoft Research); Steven Wu (CMU). Incentivizing Bandit Exploration: Recommendations as Instruments.
46. Conor Mayo-Wilson (U. Washington); Konstantin Genin (U. Tubingen). Statistical Decidability in Linear, Non-Gaussian Causal Models.
51. Andrew R Lawrence (causaLens); Marcus Kaiser (causaLens); Rui Sampaio (causaLens); Maksim Sipos (causaLens). Data Generating Process to Evaluate Causal Discovery Techniques for Time Series Data.
52. Harvineet Singh (NYU); Finale Doshi-Velez (Harvard); Himabindu Lakkaraju (Harvard). Learning Under Adversarial and Interventional Shifts.
53. Noah Weber (JHU); Levi Boyles (Microsoft); Shuayb Zarar (Microsoft). Identifying the Causal Effects of Cross-World Policies.
54. Hanti Lin (UC Davis). The Nonidentifiability Problem in Causal Discovery.
55. Jimi Kim (DS4C); DongHwan Jang (Mind's Lab); Seojin Jang (Data Science for COVID-19 South Korea); Woncheol Lee (Data Science for COVID-19 South Korea); Joong Kun Lee (Data Science for COVID-19 South Korea). DS4C Patient Policy Province Dataset: a Comprehensive COVID-19 Dataset for Causal and Epidemiological Analysis.
56. Mingming Gong (U. Melbourne); Peng Liu (U. Pittsburgh); Frank Sciurba (U. Pittsburgh); Petar Stojanov (CMU); Dacheng Tao (U. Sydney); George Tseng (U. Pittsburgh); Kun Zhang (CMU); Kayhan Batmanghelich (U. Pittsburgh). Unpaired Data Empowers Association Tests.
The workshop will be held on Friday, December 11, 2020 (Eastern Standard Time). Please register on the NeurIPS website: https://nips.cc/. Prerecorded videos are available now: https://nips.cc/virtual/2020/protected/workshop_16110.html.
The workshop will consist of three main parts: pre-recorded keynotes with a live discussion,
pre-recorded oral talks, and a virtual poster session with spotlight
talks. Specifically, we will have seven keynote talks, covering recent
developments in causal discovery and inference, the connection between causal modeling and machine
learning, and applications of causal analysis.
Please attend the workshop via the NeurIPS official website: https://nips.cc/virtual/2020/protected/workshop_16110.html. After each keynote, there will be 5 minutes for a live Q&A; you may post your questions in Rocket.Chat before or during the keynote. The poster session and the virtual coffee breaks will be on Gather.Town. There is no Q&A for spotlight talks, but all papers with spotlight talks will be presented at the poster session, where you can interact with the authors.
Session 1 (Session chair: Bernhard Schölkopf)
9:50 am - 10:00 am | Opening remarks (Biwei Huang & Sara Magliacane)
10:00 am - 10:30 am | Keynote 1: Aapo Hyvärinen. Talk Title: Causal Discovery: Linear, Nonlinear, and Something in Between
10:30 am - 11:00 am | Keynote 2: Clark Glymour. Talk Title: The Evaluation of Discovery
11:00 am - 11:10 am | Oral 1: Ashlynn Fuccello. Talk Title: Improving Constraint-Based Causal Discovery from Moralized Graphs
11:10 am - 11:40 am | Virtual coffee break on Gather.Town
Session 2 (Session chair: Thomas Richardson)
11:40 am - 12:10 pm | Keynote 3: James Robins. Talk Title: On the (Im)Possibility of Assumption-Free Inference for Causal Effects Estimated with Machine Learning
12:10 pm - 12:20 pm | Oral 2: Tineke Blom. Talk Title: Robust Model Predictions via Causal Ordering
12:20 pm - 12:30 pm | Oral 3: Karthikeyan Shanmugam. Talk Title: Structure Discovery in (Causal) Proximal Graphical Event Models
12:30 pm - 1:00 pm | Virtual coffee break on Gather.Town
1:00 pm - 1:30 pm | Spotlight 1
1:30 pm - 2:30 pm | Poster session 1 on Gather.Town
2:30 pm - 3:00 pm | Virtual coffee break on Gather.Town
Session 3 (Session chair: Peter Spirtes)
3:00 pm - 3:30 pm | Keynote 4: Dominik Janzing. Talk Title: Causal Version of Principle of Insufficient Reason and Max Ent
3:30 pm - 4:00 pm | Keynote 5: Caroline Uhler. Talk Title: Causal Transport Problems
4:00 pm - 4:30 pm | Virtual coffee break on Gather.Town
4:30 pm - 5:00 pm | Keynote 6: Karthika Mohan. Talk Title: Graphical Models for Processing Missing Data
5:00 pm - 5:10 pm | Oral 4: Ignavier Ng. Talk Title: On the Convergence of Continuous Constrained Optimization for Structure Learning
5:10 pm - 5:40 pm | Virtual coffee break on Gather.Town
Session 4 (Session chair: Mingming Gong)
5:40 pm - 6:10 pm | Keynote 7: Shohei Shimizu. Talk Title: Linear Non-Gaussian Models with Latent Variables for Causal Discovery
6:10 pm - 6:40 pm | Spotlight 2
6:40 pm - 7:40 pm | Poster session 2 on Gather.Town
7:40 pm - 7:50 pm | Closing remarks (Peter Spirtes)