
AI & Analytics for Good

This thrust of the Block Center focuses on research into how machine learning, neural networks, and other forms of artificial intelligence, as well as big data and advanced analytics, can be harnessed for purposes that measurably strengthen our social fabric and improve quality of life for all people. The Center is particularly interested in research that investigates how AI can improve public safety, create smarter public services, and advance fairness and equality for disadvantaged citizens.

As machines are relied on in more and more aspects of society, from employment and transportation to criminal justice and health care, issues of trust and transparency will become paramount as human beings adjust to the increased presence of robots, automation, and other disruptive technologies in everyday life. Machines may be very good at granting loans, screening resumes, diagnosing illness, and recommending products, but human beings need to trust that those decisions are consistent and reflective of the social norms of the community. The Block Center takes a proactive approach to these critical problems.

Already a transformative force in the world, artificial intelligence has myriad applications for social good. Block Center projects in this research area, along with the Center's support for the Responsible AI initiative, focus on how these technologies can be harnessed for the public good and seek to ensure these systems are fair, accountable, transparent, and ethical.


AI, When the Stakes Are High
- Prof. Amelia Haviland

The Ethics of AI in Business
- Prof. Tae Wan Kim

Projects

Can We Automate Fairness?


Machines are making more and more decisions for us every day, some of great importance. Heinz College faculty member Alexandra Chouldechova is laying the methodological groundwork for fairer, more transparent predictive techniques. Risk assessment scores, which are data-driven predictions produced by algorithms, have seen increased use in U.S. courtrooms. Recent reporting has revealed that data-driven predictive models have the potential to amplify prejudices, and such algorithms can inject greater unfairness into a system that is already slanted against certain groups. Chouldechova is working to solve this problem with research that challenges conventional wisdom in the assessment industry and provides a framework for designing and vetting better predictive algorithms. She says that while machine predictions are valuable in decision support, and research shows they are generally more accurate than human predictions, it is essential to ensure they are as fair as, or even fairer than, human decision-makers.

Read the full story.
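To make the fairness question concrete, the sketch below audits a risk score by comparing false positive and false negative rates across groups, one kind of disparity this line of research examines. It is a hypothetical illustration with synthetic data, not Chouldechova's tooling.

```python
# Hypothetical group-level fairness audit of a risk score (illustrative only).
import numpy as np
import pandas as pd

def error_rates_by_group(df, score_col, label_col, group_col, threshold=0.5):
    """Compare false positive and false negative rates across groups."""
    rows = []
    for group, sub in df.groupby(group_col):
        flagged = sub[score_col] >= threshold   # predicted high risk
        positive = sub[label_col] == 1          # observed outcome
        fpr = (flagged & ~positive).sum() / max((~positive).sum(), 1)
        fnr = (~flagged & positive).sum() / max(positive.sum(), 1)
        rows.append({"group": group, "FPR": fpr, "FNR": fnr, "n": len(sub)})
    return pd.DataFrame(rows)

# Synthetic example: group B's scores run higher regardless of the true label,
# so its false positive rate will be inflated even if overall accuracy looks fine.
rng = np.random.default_rng(0)
df = pd.DataFrame({"group": rng.choice(["A", "B"], size=1000),
                   "label": rng.integers(0, 2, size=1000)})
df["score"] = np.clip(df["label"] * 0.4
                      + (df["group"] == "B") * 0.15
                      + rng.normal(0.2, 0.2, size=1000), 0, 1)
print(error_rates_by_group(df, "score", "label", "group"))
```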

Predicting Fire Risk for Smarter Inspections


Working with the Pittsburgh Bureau of Fire (PBF), the Fire Risk Analysis team is using historical fire incident and inspection data, coupled with business permits and property condition information, to develop predictive models of structure fire risk for commercial properties that will inform a powerful tool for fire prevention. PBF conducts regular fire inspections of commercial properties, as stipulated by the municipal fire code, but properties are not always inspected in the most effective order. With so many properties to inspect, a high-risk property may not be inspected until it is too late. This research seeks to create a method of prioritization that fast-tracks inspections of properties with the greatest risk of fire, allowing PBF to proactively target inspections for those properties and prevent fires before they break out. The method uses machine learning models to estimate each property's probability of fire and visualizes the results on an interactive map. By prioritizing the properties most at risk, PBF can not only address the risk of individual structures, but also dramatically improve the safety of surrounding businesses and residents.
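As a rough illustration of this approach, the sketch below trains a classifier on historical property data and sorts properties by predicted fire probability to form an inspection queue. The file name, feature columns, and model choice are placeholders, not the Fire Risk Analysis team's actual pipeline.

```python
# Hypothetical sketch of risk-ranked inspection prioritization.
# The data file, column names, and model are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# One row per commercial property: historical inspection/incident features plus
# a label indicating whether a structure fire occurred in the following year.
properties = pd.read_csv("property_history.csv")
features = ["violations_last_3yr", "prior_incidents", "property_age", "sq_footage"]

X_train, X_test, y_train, y_test = train_test_split(
    properties[features], properties["fire_next_year"], test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Rank properties by predicted fire probability so the highest-risk sites move
# to the front of the inspection queue (and onto the interactive map).
properties["risk_score"] = model.predict_proba(properties[features])[:, 1]
inspection_queue = properties.sort_values("risk_score", ascending=False)
print(inspection_queue[["parcel_id", "risk_score"]].head(20))
```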

Making AI Transparent


Artificial intelligence is the backbone of a wide variety of computer decision-making processes, from selecting advertisements to fit an individual's interests to pinpointing tumor cells in medical images. While we know how these data-driven systems are designed to make decisions, we often do not know why they make specific choices. In order to hold these automated systems accountable for their choices and enforce ethical decision-making practices, Professor Anupam Datta is developing accountability tools that identify systematic patterns inherent to automated decision-making. His work has revealed the privacy and security shortcomings of targeted content and other decision-making systems, and has resulted in an accountability tool chain that verifies the privacy compliance of real-world automated systems in several key sectors, including advertising, criminal justice, and healthcare.

In collaboration with Microsoft, Professor Datta is working to implement large-scale, real-world tools that enforce the accountability and security of automated decision-making.
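The sketch below gives a simplified flavor of this kind of auditing: randomize one input at a time and measure how often the system's decisions change, a rough proxy for that input's influence. It illustrates the general idea only and is not Datta's actual tool chain.

```python
# Simplified influence audit: how often does the model's decision flip when a
# single input is randomized? Illustrative only, not the actual accountability tools.
import numpy as np

def decision_flip_rate(model, X, feature_idx, n_rounds=100, seed=0):
    """Average fraction of decisions that change when one feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = 0.0
    for _ in range(n_rounds):
        X_perturbed = X.copy()
        # Replace the feature with values drawn from its own marginal distribution.
        X_perturbed[:, feature_idx] = rng.permutation(X[:, feature_idx])
        flips += np.mean(model.predict(X_perturbed) != baseline)
    return flips / n_rounds

# Usage, assuming a fitted scikit-learn classifier `clf` and a feature matrix `X`:
# influences = {j: decision_flip_rate(clf, X, j) for j in range(X.shape[1])}
```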

Watch a video on the scope and significance of this research.

Reducing Bias in Child Welfare Interventions


Each year, child protection agencies in the United States receive approximately 3.6 million referral calls related to potential cases of child abuse and neglect. Currently, the process of deciding whether a case merits further in-person investigation varies by jurisdiction, and because these situations are often complex, it is difficult to determine the correct course of action based on a single referral call. By using predictive analytics based on administrative data, case workers can identify problematic cases in real time, which allows for better prioritization of resources and administration of services. However, ensuring the accuracy, fairness, and trustworthiness of risk assessment models is vital to building public trust in the deployment of these technologies.

In partnership with the Allegheny County Department of Human Services, we are developing counterfactual risk assessment models to reduce bias in the decision-making process for child welfare interventions and inform the next generation of risk assessment tools.
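As a stylized example of what "counterfactual" means here, the sketch below uses inverse probability weighting to estimate how often an adverse outcome would occur if no referral were screened in, adjusting for the fact that past screening decisions depended on observed risk factors. The file and variable names are hypothetical, and the approach is far simpler than the models being developed for Allegheny County.

```python
# Stylized counterfactual estimate via inverse probability weighting.
# File and column names are hypothetical; the real models are considerably richer.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

cases = pd.read_csv("referrals.csv")
covariates = cases[["prior_referrals", "child_age", "household_size"]]
screened_in = cases["screened_in"].values        # past decision (the "treatment")
outcome = cases["adverse_outcome"].values        # observed downstream outcome

# Propensity score: probability of being screened in, given observed covariates.
propensity = LogisticRegression().fit(covariates, screened_in).predict_proba(covariates)[:, 1]

# Reweight the cases that were NOT screened in so they represent the whole
# population, estimating the outcome rate "had no one been screened in".
not_screened = screened_in == 0
weights = 1.0 / np.clip(1.0 - propensity[not_screened], 0.05, None)
counterfactual_rate = np.average(outcome[not_screened], weights=weights)
print(f"Estimated adverse-outcome rate under no screen-in: {counterfactual_rate:.3f}")
```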

Affiliated Faculty:
Alexandra Chouldechova, Assistant Professor of Statistics and Public Policy
Edward Kennedy, Assistant Professor of Statistics

Improving Breast Cancer Diagnosis with Machine Learning


Advances in image recognition and other deep learning techniques have revolutionized the healthcare sector. While most of these applications have focused on improving the accuracy of medical diagnosis and disease classification, we are applying deep learning models to enhance doctors' clinical and diagnostic training. Combining computer vision techniques with human-centered visualizations and design principles, these technologies will help make the results of AI-based diagnostic models interpretable to medical experts who may have limited AI expertise.

Using mammogram data from the UPMC Magee-Womens Hospital, this interdisciplinary team aims not only to increase detection rates and reduce false positives in breast cancer screenings, but also to help doctors identify their strengths, weaknesses, biases, and blind spots in the diagnostic process.
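One common way to make an image-based diagnostic model's output legible to clinicians is a saliency map that highlights the pixels that most influenced the prediction; the PyTorch sketch below shows the basic mechanics. The tiny model and random image are placeholders, not the mammography system being built with UPMC.

```python
# Gradient saliency sketch: which pixels most influenced the malignancy score?
# The model and input below are placeholders for a trained diagnostic network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
model.eval()

image = torch.randn(1, 1, 224, 224, requires_grad=True)   # stand-in mammogram
score = model(image).squeeze()                             # predicted malignancy score
score.backward()

# Pixels with large gradient magnitude had the most influence on the score;
# overlaying this map on the image lets a radiologist check whether the model
# attends to clinically meaningful regions.
saliency = image.grad.abs().squeeze()
print(saliency.shape)   # torch.Size([224, 224])
```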

Affiliated Faculty:
Zachary Chase Lipton, Assistant Professor of Business Technologies
Adam Perer, Assistant Research Professor, Human-Computer Interaction Institute

Uncovering the Source of Machine Bias


Developments in machine-assisted decision-making have shown great promise in numerous fields. In particular, these algorithms have transformed the hiring process by narrowing down the initial field of candidates to a smaller set of applicants whose resumes satisfy a set of data-driven criteria. However, these systems are not purely objective. Like the humans that programmed them, AI technologies are also prone to bias. When trained on biased or flawed data, machine learning algorithms may inherit, or even amplify, human bias. In the case of the hiring process, biased data may cause a decision-making algorithm to discriminate against applicants belonging to a particular demographic group.

This initiative aims to refine the data collection and algorithm structuring processes in automated job hiring to address the sources of machine bias early on in the development of decision-making algorithms.
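A hypothetical sketch of how this kind of bias is detected in practice: train a screening model on historical hiring decisions, then compare shortlisting rates across demographic groups (the "adverse impact ratio"). The data files and column names are assumptions for illustration only.

```python
# Illustration of how bias inherited from historical hiring data can be measured.
# File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("past_applicants.csv")
features = ["years_experience", "degree_level", "skills_score"]

# The model learns to reproduce past hiring decisions, including any bias in them.
screener = LogisticRegression().fit(history[features], history["was_hired"])

new_pool = pd.read_csv("new_applicants.csv")
new_pool["shortlisted"] = screener.predict(new_pool[features])

# Compare shortlisting rates across groups; a ratio well below 1.0 signals that
# the model disadvantages some group relative to the most-favored one.
rates = new_pool.groupby("demographic_group")["shortlisted"].mean()
print(rates)
print("Adverse impact ratio:", rates.min() / rates.max())
```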

Affiliated Faculty: 
Duyu Chen, Ph.D. Student, Tepper School of Business
Yan Huang, Assistant Professor of Business Technologies
Param Vir Singh, Carnegie Bosch Associate Professor of Business Technologies

AI for Smart Community Work


The proliferation of automated decision-making systems throughout government programs has shown great promise in promoting fairer organizational behavior and resource allocation. However, members of the communities affected by this technology are often reluctant to adopt automated systems that do not effectively account for or reflect their values and needs. To overcome community friction in response to algorithmic systems and develop more community-oriented decision support tools, we are taking a sociotechnical approach that enables community participation in designing algorithms for equitable and efficient resource allocation.

Through collaborations with two Pittsburgh-based non-profits, 412 Food Rescue and Operation Safety Net, this interdisciplinary team will advance current research on algorithmic fairness and governance, as well as human-computer interaction, through the development of human-centered predictive AI for community initiatives. In addition to producing tools and design principles for fair, "human-in-the-loop" algorithmic designs, this project will generate guidelines for the equitable allocation of public services and a set of best practices for introducing and leveraging decision-making algorithms in public domains.

Affiliated Faculty:
Min Kyung Lee, Assistant Professor, University of Texas at Austin
Ariel Procaccia, Gordon McKay Professor of Computer Science, Harvard University

Perceptions of Fairness in Machine Learning


The deployment of artificial intelligence systems has raised serious questions about what fairness means in these systems. Machine learning researchers have proposed and interrogated a series of mathematical definitions of fairness for evaluating these systems. However, many people affected by these AI systems lack the technical background required to understand mathematical constructions of fairness. As a result, they often struggle to effectively judge, and ultimately trust, the machine learning algorithms that directly impact their lives. By failing to factor in these limitations, developers of AI systems may be overlooking a vital factor in the overall societal acceptance of machine learning.

By taking a systematic approach to better characterize people's perceptions of fairness, this research aims to produce more easily understandable methods of explaining algorithmic decision-making processes and identify a notion of fairness that more holistically encompasses the values of the people affected by machine learning systems. 

Affiliated Faculty:
Maria-Florina Balcan, Associate Professor of Computer Science
Jason Hong, Professor, Human-Computer Interaction Institute
Ariel Procaccia, Gordon McKay Professor of Computer Science, Harvard University
Hong Shen, Systems Scientist, Human-Computer Interaction Institute