Carnegie Mellon University
March 14, 2019

The Balance of AI, Ethics and the Military

CMU expert discusses university's role in shaping nationwide discussion

By Jason Maderer


This week, the Defense Innovation Board (DIB) holds a series of meetings and a public listening session at Carnegie Mellon University as it explores the future of ethics and artificial intelligence. It is the second of three events the DIB is hosting across the country as it looks to develop and propose principles to guide the ethical and responsible use of AI by the Department of Defense (DoD), including research, development and applications.

David Danks

Specifically, the DIB is looking to make recommendations to the DoD that are consistent with an ethics-first approach and uphold existing legal norms around warfare and human rights, while continuing to carry out the Department's enduring mission to keep the peace.

David Danks, department head and L.L. Thurstone Professor of Philosophy and Psychology in the Dietrich College of Humanities and Social Sciences, studies ethics and AI. As co-host for this week's event at Carnegie Mellon, he is one of the experts the DIB is looking to for guidance and feedback.

CMU Marketing and Communications recently talked with Danks about his views on AI, ethics and the university's role in helping to guide a nationwide discussion.

What are the most important things that society has to think about when it comes to ethics, AI and the military?

AI is not intrinsically good or bad, but can be used for good or ill purposes. Ultimately, AI is basically mathematics coupled with a bunch of ideas about algorithms. So it's very similar to an internal combustion engine, where the ethical impact of the potential applications should guide our thinking.

In this context, we need to think about the ways that AI and other technology are being used to advance the national security interests of American citizens. We want technology that actually advances our goals, and not technology that people simply think is cool.

Part of that is having the proper understanding of the actual challenges faced by those protecting our national security. Recently, the U.S. Army designated Carnegie Mellon as the home of its new AI Task Force. This could be really beneficial for our university, as it will help CMU researchers get a better sense of what the military is hoping to address and solve.

What are the other reasons Carnegie Mellon should be involved in this conversation?

Perhaps the most important reason is the chance to help the military achieve its goals in more ethical ways. We can potentially shape the way the technology is developed and used by the military to protect national security interests.

Of course, that means we run the risk of being complicit in bad outcomes or abuses. Therefore, it's important that our engagement is coupled with various types of oversight and general vigilance. CMU should play a role in influencing how the military uses the technology, and you can't do that if you're sitting on the sidelines.

How should AI be used?

The AI community needs to ensure that the technology isn't intentionally or accidentally used to violate people's rights and civil liberties. We also must be sure that AI is used to further the values of the general population rather than a contractor or specific subset within the military. The military must advance the interests of the citizenry and not just itself.

I think the most important uses of robotics and AI in the military are not about robotic soldiers on the battlefield. Although there are definitely ethical challenges there, I think more about the extent to which AI is used to process signals intelligence.

People don't look at raw data anymore in any practical way. They're looking at data shaped by machine learning models. And that information serves as the basis for meaningful decisions a military leader makes during wartime.

For instance, humans aren't very good at looking at overhead imaging from satellites and determining if something is a truck or a school bus. We depend on machine learning. While other people are thinking about killer robots, I'm thinking more about the human who makes the decision to drop a bomb on a building that turns out to be a hospital because the classification algorithm misclassified it.

Killer robots are a long way away. Bombs getting dropped due to a misclassification? I'd be shocked if it hasn't happened already.

In the race between technology and ethics, is ethics keeping pace? Is it even possible?

That's exactly what a lot of us at CMU are wrestling with. How do we do research, education, engagement and outreach that truly brings together the smartest people in AI, machine learning, public policy, philosophy and other relevant fields for effective collaboration?

There just aren't enough ethicists who understand AI, and there aren't enough AI researchers who really understand ethics. How can we get both sides to understand enough about the other to foster effective collaboration?

It's a challenge, but one that we're tackling here at CMU.