
Wednesday, April 26, 2017

David Danks Wins 2017 Andrew Carnegie Fellowship

Danks’ Project Will Explore Trust and Autonomous Technologies

Carnegie Corporation of New York has named Carnegie Mellon University’s David Danks a 2017 Andrew Carnegie Fellow. The 35 selected fellows will receive a total of $7 million in funding, or $200,000 each, making it the most generous stipend for humanities and social sciences research available.

Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy, will use the fellowship to explore human trust in the age of autonomous technologies. Other winning proposals address issues such as inequity in U.S. education, radicalization via social media, voting and election processes, the global increase in violence against women in politics and the legal limbo facing immigrants.

"The health of our democracy depends on an informed citizenry, and our universities, academies and academic associations play an essential role in replenishing critical information and providing knowledge through scholarship," said Vartan Gregorian, president of Carnegie Corporation of New York. "The Andrew Carnegie Fellows Program is designed to support scholarship that brings fresh perspectives from the social sciences and humanities to the social, political, and economic problems facing the United States and the world today."

Well known for using cognitive science to develop computational models that describe, predict and, most importantly, explain human behavior, Danks has also become a leading expert in the ethics of artificial intelligence. As autonomous technologies become more prevalent, Danks believes a structure must be established to guide their use, assess their impacts, develop policies and regulations and inform the public.

To do this, Danks will focus on the key relationship of trust, both between people and between people and technology.

"When David gets interested in a topic, the array of philosophical, psychological, computational and mathematical tools he can bring to bear is amazing. A few years ago, he became interested in the ethics of electronic surveillance and cyber warfare and how human decision-making in the presence of semi-autonomous agents will play out in these arenas. This work led him to focus on the unique role that trust plays in our increasingly technological culture. I am really looking forward to what will emerge from the work David will now get to do on trust and autonomous technology, thanks to the Carnegie Fellows Program," said Richard Scheines, dean of the Dietrich College of Humanities and Social Sciences.

Four technology areas will be part of Danks' fellowship work: self-driving vehicles; autonomous kinetic and cyber weapons systems; medical decision systems and autonomous robots; and privacy and mass surveillance. In the first phase, Danks will develop a general framework and principles for identifying and describing threats to, and opportunities for, trust in new autonomous technologies. A second component will push that theory into practice.

Danks also plans to use the fellowship to fund visiting scholars at CMU, as well as host a weeklong workshop during the summer of 2018 for an international group of external scholars, practitioners and decision-makers.

"Trust is critical for human flourishing, both in our relationships with others and our use of technologies. But these relations of trust face diverse challenges and opportunities because of the introduction and proliferation of autonomous technologies," Danks said. "I’m grateful to Carnegie Corporation of New York for their support as we start the work towards providing a systematic conceptual framework and principles for understanding the potential and actual impacts of autonomous systems on this key aspect of our personal, social and political lives."

Related Articles:

Model Driverless Car Regulations After Drug Approval Process, AI Ethics Experts Argue
Self-driving, but not self-regulating
The Ethics of Cyberconflict

_____
By Shilo Rea