Co-Sponsored by the White House Office of Science and Technology Policy (OSTP)
and Carnegie Mellon University.


WORKSHOP ON SAFETY AND CONTROL FOR ARTIFICIAL INTELLIGENCE

JUN 28, 2016 // CARNEGIE MELLON UNIVERSITY

 


The computer science community has been exploring the role of artificial intelligence (AI) in systems for more than half a century. In the last few years, AI development has reached a threshold of practicability, and AI capability is now emerging in sectors ranging from vehicles, logistics, and military systems to health care, financial services, and smart cities. The economic and societal impacts could be dramatic, and investment in the development of AI applications is now a worldwide phenomenon.

Many technical leaders now believe that the principal limits on exploiting AI derive primarily from our confidence in the safety of these smart systems – that they will operate in a safe and controlled manner. Some AI experts have asserted that the ability to assure safety and control is even more important to the future of AI than improvements in the AI algorithms themselves.

The safety challenge is significant because of both the complexity of AI systems and the richness of their interactions with their human users and their operating environments. Consider, for example, the need to ensure the safety of an autonomous vehicle driving through a highway construction zone on a rainy night. The challenge increases when AI systems are adapting themselves – changing their behavior – through machine learning. The challenge also increases when AI systems are interacting in complex ways with other separately developed AI systems that are themselves learning and adapting.

The Public Workshop on Safety and Control for Artificial Intelligence (SĀF|ART|INT) is a jointly sponsored event of the White House Office of Science and Technology Policy (OSTP) and Carnegie Mellon University. The workshop, scheduled for June 28th, 2016, will include keynote talks and panel discussions that explore the potential future of AI and AI applications, the emerging technical means for constructing safe and secure systems, how safety might be assured, and how we can make progress on the challenges of safety and control for AI. In other words, how can we construct productive collaborations of the AI technical community, the application community, and the assurance community?

On the day prior to the public workshop, Carnegie Mellon is separately hosting a technical workshop to prepare for the main event. Discussions will focus on needs and challenges for AI safety and control and on how the AI and assurance research communities might productively collaborate. The results of this technical workshop will be presented at the public workshop the next day. In preparation for the technical workshop, Carnegie Mellon will publish an open solicitation for white papers related to the workshop topics.