Carnegie Mellon University

Governance and Accountability for Machine Learning:

Existing Tools, Ongoing Efforts and Future Directions

NeurIPS 2023 Tutorial
Monday, Dec. 11, 2023
9:45 a.m.–12:15 p.m. CST
New Orleans Ernest N. Morial Convention Center (Hall F)


The tutorial aims to familiarize the machine learning (ML) community with major existing artificial intelligence (AI) governance frameworks, ongoing AI policy proposals worldwide, and the concrete tools the research community has developed to adhere to standards and regulations applicable to ML systems in socially high-stakes domains. As a concrete governance challenge, we will focus on issues of bias and unfairness and survey pipeline-centric approaches to operationalizing algorithmic harm prevention. The concluding expert panel offers the ML community diverse perspectives on the key AI governance challenges of the near future and on how the ML research community can prepare for and support efforts to address them.

Sponsored by the K&L Gates Endowment for Ethics and Computational Technologies at Carnegie Mellon University


  • Introduction (5 min)
    • Key definitions (accountability; governance)
    • Challenges unique to AI (compliance; enforcement)
  • Overview of existing policy guidance and frameworks (20 min)
    • Commonly addressed principles and issues
    • Major policy-making efforts and proposals
  • Overview of existing governance tools, practices and mechanisms (25 min)
    • Guidelines and best practices
    • Enforcement mechanisms
  • Case study: Searching for less discriminatory algorithmic systems (40 min)
  • Conclusion (10 min)
    • Calls to action for the ML community
    • Introduction to panelists
  • Panel discussion (40 min)
  • Q&A (10 min)


Hoda Heidari is the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies at CMU, with joint appointments in the Machine Learning and Software and Societal Systems departments. She is also affiliated with the Human-Computer Interaction Institute and the Heinz College of Information Systems and Public Policy. Her research addresses issues of fairness and accountability through the use of ML in socially consequential domains.

Emily Black is an assistant professor of computer science at Barnard College, Columbia University. She previously was a postdoctoral scholar at Stanford University’s RegLab, where she worked on the fairness impacts of AI systems in tax audit selection. She recently presented a cross-disciplinary tutorial at FAccT, centered around the challenges and opportunities for operationalizing participatory design approaches to include stakeholders and affected communities in ML ideation and creation pipelines.

Dan Ho is the William Benjamin Scott and Luna M. Scott Professor of Law and a professor of political science at Stanford University. He serves as an appointed member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises the White House National AI Initiative Office, and as Senior Advisor on Responsible AI to the U.S. Department of Labor.

Expert Panel

Alex John London, Moderator, is the K&L Gates Professor of Ethics and Computational Technologies, director of the Center for Ethics and Policy at CMU’s Department of Philosophy, and chief ethicist at CMU’s Block Center for Technology and Society. London’s work focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine, biotechnology and AI, and on methodological issues in theoretical and practical ethics.

A. Feder Cooper is a researcher in scalable machine learning, working on reliable measurement and evaluation of ML systems. Cooper's contributions span distributed training, hyperparameter optimization, uncertainty estimation, model selection, and generative modeling, as well as related research in tech policy and law. Cooper is currently an affiliate at the Berkman Klein Center at Harvard.

Pauline Kim is the Daniel Noyes Kirby Professor of Law at Washington University in St. Louis. Her research focuses on the use of big data and AI in the workplace and on the implications of these technologies for employee privacy and anti-discrimination law. She has written widely on issues such as job security, employee privacy, employment discrimination and judicial decision-making.

Logan Koepke is a project director at Upturn, a nonprofit in Washington, DC, that advances equity and justice in the design, governance, and use of technology. His research and advocacy sit at the intersection of technology and civil rights. He helps lead Upturn's federal policy advocacy on the use of algorithmic systems in key civil rights contexts.

Inioluwa Deborah Raji is a Nigerian-Canadian computer scientist and activist who works on algorithmic bias, AI accountability and algorithmic auditing. She is currently a fellow at the Mozilla Foundation.

Reva Schwartz is a research scientist in the Information Technology Laboratory at the National Institute of Standards and Technology (NIST). She serves as principal investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.