Artificial intelligence (AI) is already impacting many aspects of people’s lives and society at large. At Carnegie Mellon University, we believe that AI must be designed, developed, and deployed responsibly to ensure accountability and transparency, and lead toward a more just and equitable world.
Carnegie Mellon has the expertise and ingenuity necessary to direct advances in AI toward social responsibility. CMU’s Responsible AI initiative brings together researchers and educators spanning computer science, engineering, decision sciences, philosophy, arts, economics, psychology, public policy, statistics, and business to make progress in:
- Translating research to policy and social impact: turning research insights into policy recommendations and positive social outcomes.
- Building community and serving our local and global communities: collaborating and co-designing with local communities and the public at large.
- Education and training: offering hands-on, experiential educational and research opportunities for students, staff, and faculty.
- Partnerships: working collaboratively with partners to develop and deploy AI methodologies and tools that enable learning, practice, and research.
CMU’s Responsible AI initiative will evolve, in part, based on the interests and priorities of government, industry, non-profit, and community partners. Activities will include education and training workshops, research projects, collaborative projects, and community events.
Responsible AI is a university-wide initiative housed at the Block Center, with support from the School of Computer Science, that draws on expertise from across Carnegie Mellon.
Jodi is interested in understanding how AI and automation affect the workforce and leads a group of researchers examining automation’s impact on the hospitality industry.
Rayid focuses on developing responsible AI methods and works with government agencies and nonprofits to design, develop, and deploy AI systems that support equitable societal outcomes. He recently testified before the House Financial Services Committee’s Task Force on AI about reducing AI bias in the financial sector.
Hoda seeks to provide a stakeholder-oriented perspective on the use of AI technologies in socially high-stakes domains. For example, she aims to translate notions of (un)fairness and bias in public policy domains into computationally tractable measures that account for the entire machine learning pipeline.