Exploring AI Dilemmas: A Glimpse into the Future of Ethical AI Development
In the world of artificial intelligence (AI), development moves fast, and the focus is often on the "now." But Maarten Sap, alongside a team from Carnegie Mellon University and the University of Washington, is taking a different approach. Their project, backed by Block Center seed funding, looks beyond today's technology to explore the AI dilemmas of tomorrow. The goal? To anticipate the future challenges AI might bring and help society navigate the complex landscape of AI ethics.
At the heart of their project is the concept of speculative design—a way of thinking that goes beyond what AI can currently do and looks ahead to what it might be capable of in ten years or more. It's about imagining use cases that don't yet exist and thinking through their potential harms and benefits. By doing so, Sap and his team hope not only to anticipate future dilemmas but also to build consensus around what kinds of AI should or shouldn't be developed, ultimately helping policymakers and governance bodies make informed decisions.
The project emerged from Sap's collaborations during his time at the University of Washington, particularly in conversations around dilemmas in computer security. These early discussions led to the broader question of AI dilemmas—situations where an AI system might have the potential to both cause and prevent harm, and where ethical lines become blurred. One compelling example Sap highlights is the use of AI in mental health. While AI-powered chatbots could provide much-needed support to people struggling with depression, they could also cause harm if they encourage dangerous behaviors. It's a delicate balance, and these are the kinds of dilemmas that need careful consideration as AI technology advances.
PARTICIP-AI and Decision Making
Through a framework called PARTICIP-AI, Sap’s team has begun exploring these dilemmas with input from a wide range of participants. What’s particularly interesting is the way people envision AI in their personal lives versus how AI is currently being developed for businesses. Many of the participants focused on personal AI use cases—AI that could help with everyday tasks like managing taxes or improving mental health—while much of today’s AI development is centered around workplace efficiency.
One of the project’s surprising findings is that participants not only identified potential benefits of AI but also recognized situations where certain AI developments should be stopped in their tracks. After reflecting on hypothetical future scenarios, many participants realized that some technologies might actually be better left undeveloped due to the potential for harm.
Looking ahead, the project aims to provide a framework that incorporates the perspectives of everyday people into the development of AI systems. Instead of leaving these decisions in the hands of a few tech developers, Sap’s team wants to involve people at every stage of the process. This bottom-up approach could be crucial in creating more responsible and ethically sound AI systems that align with the needs and values of society as a whole.
Sap’s work not only sheds light on the challenges of ethical AI development but also emphasizes the importance of giving people a voice in shaping the technologies of the future. In a world where AI is increasingly woven into the fabric of our daily lives, this type of research could be instrumental in ensuring that AI serves the greater good while minimizing its risks.
As Sap continues this work, the support from the Block Center seed fund has been key to getting the project off the ground. While it's still early days, the project has the potential to shape how AI development is governed and how society thinks about the ethical implications of this powerful technology.
Sap is optimistic about the future of this project and is excited to explore its next phases, which will dive deeper into how people's demographics, values, and AI literacy influence their views on AI dilemmas. This kind of work is essential if we are to develop AI systems that reflect the diversity and complexity of human experiences.
If anything is clear, it's that the future of AI is not just about technological advancement—it's about the ethical and societal frameworks we build around it. And with projects like Sap’s, we're one step closer to navigating that future responsibly.