Artificial intelligence (AI) is already shaping many aspects of people’s lives and society at large. At Carnegie Mellon University, we believe that AI must be designed, developed, and deployed responsibly to ensure accountability and transparency and to lead toward a more just and equitable world.
Carnegie Mellon has the expertise and ingenuity necessary to direct advances in AI toward social responsibility. CMU’s Responsible AI initiative brings together researchers and educators spanning computer science, engineering, decision sciences, philosophy, arts, economics, psychology, public policy, statistics, and business to make progress in:
- Translating research to policy and social impact: turning research insights into policy recommendations and positive social outcomes.
- Building community and serving our local and global communities: collaborating and co-designing with local communities and the public at large.
- Education and training: offering hands-on and experiential educational and research opportunities for students, staff, and faculty.
- Partnerships: working collaboratively with partners to develop and deploy AI methodologies and tools that enable learning, practice, and research.
CMU’s Responsible AI initiative will evolve, in part, based on the interests and priorities of government, industry, non-profit, and community partners. Activities will include education and training workshops, research projects, collaborative projects, and community events.
Responsible AI is a university-wide initiative housed at the Block Center, with support from the School of Computer Science, that draws on expertise from across Carnegie Mellon.
The Responsible Voter's Guide to Generative AI and Political Campaigning
Generative AI (GenAI) allows its users to create realistic images, videos, audio, and text rapidly, cheaply, and at scale. These capabilities can be useful in many contexts, but during elections, they can also be misused to manipulate and deceive voters.