Carnegie Mellon University

Research

Researchers from Dietrich College are at the top of their fields. They’ve developed computer models that can help diagnose brain dysfunctions and can identify a person’s thoughts and emotions; educational software that has raised student achievement in underperforming school districts; and web-based tools for citizens to use to deliberate issues that affect their communities.

Learn more about some of our recent work below.

Polarization for Controversial Scientific Issues Increases With More Education

A commonly proposed solution to help defuse the political and religious polarization surrounding controversial scientific issues like evolution or climate change is education. However, CMU researchers found that the opposite is true: people's beliefs about scientific topics associated with their political or religious identities actually become increasingly polarized with education, as measured by years in school, science classes taken and science literacy.

Moving Beyond Nudges to Improve Health and Health Care Policies

With countries around the world struggling to deliver quality health care and contain costs, a team of behavioral economists led by CMU’s George Loewenstein believes it’s time to apply recent insights on human behavior to inform and reform health policy.

Supportive Relationships Linked to Willingness to Pursue Opportunities

Research on how our social lives affect decision-making has usually focused on negative factors like stress and adversity. Less attention, however, has been paid to the reverse: What makes people more likely to give themselves the chance to succeed? CMU Psychology Professor Brooke Feeney found that supportive relationships play a key role in people's willingness to pursue such opportunities.

Model Driverless Car Regulations After Drug Approval Process, AI Ethics Experts Argue

Autonomous systems — like driverless cars — perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, CMU artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations do not account for these systems and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.