Funded Projects by Year
2023 Funded Projects
- CommunityAI: Supporting Community-centered Evaluation of Social Service AI via the AI Risk Reports: Hong Shen
Algorithmic decision systems have been adopted in a variety of high-stakes public social service domains, including child welfare and subsidized housing. The rapid implementation of AI in social services, however, has significantly outpaced existing capacity and investment in community engagement. Indeed, many of these AI systems have been developed and evaluated in isolation from the communities they are meant to serve. In many cases, this has led to ineffective results, or even detrimental outcomes — e.g., harming already disadvantaged social groups that are at the receiving end of social services.
- ParticipAI: Anticipating and governing future AI use cases and dilemmas with participatory frameworks and democratic processes: Maarten Sap
Recent technological advances have led to truly general-purpose AI technologies, such as ChatGPT and other large language models (LLMs), that can perform a wide variety of tasks based on a written description. This introduces *AI dilemmas*, i.e., possible AI use cases that simultaneously provide both benefits and harms to people (e.g., AI therapy apps, which can simultaneously cause but also prevent self-harm). Navigating and governing these emerging AI use cases and dilemmas in ways that cultivate public trust requires input from everyone, not just expert technologists and non-expert lawmakers as is currently done.
- Advancing Responsible AI Product Innovation: Jodi Forlizzi
In this project, we will leverage our prior work, in combination with NIST’s RMF, to develop and evaluate new resources, tools, and processes to advance ideation of responsible AI products. Our work will support teams in envisioning many concepts, helping them to more fully explore the problem-solution space, and sensitizing them to AI capabilities, limitations, and risks (e.g., fairness-related pitfalls). We will also prototype and assess a more rigorous process for rapidly evaluating many concepts in terms of value, risks, and responsibility.
- Community-driven AI evaluation: Empowering communities to drive the evaluation of AI systems that impact them: Ken Holstein
The goal of the proposed project is to answer the question: beyond engaging community members in AI evaluation at the conceptual level, how might we empower communities to directly drive the design, curation, and use of AI evaluation datasets? This project will help to operationalize the Map and Measure functions of NIST’s AI Risk Management Framework (AI RMF) by developing and empirically testing community-driven mechanisms that can support contextually appropriate assessments of AI systems’ expected risks and benefits. As part of this project, we will collaborate with NIST to directly support their new efforts toward more collective and community-based approaches to AI evaluation.
- Operationalizing AI Threat Modeling & Mitigation: Norman Sadeh
NIST’s AI Risk Management Framework (RMF) provides a basis for thinking about the risks associated with the development and deployment of AI solutions as well as the organizational processes and activities required to identify these risks and mitigate them. This framework shares many similarities with NIST's Privacy Framework. Risks associated with AI and with privacy overlap. They require similar cross-disciplinary approaches that involve a broad cross section of roles and demand analyzing risks from a multi-faceted perspective. Both privacy risk management and AI risk management should extend across the entire product lifecycle, starting from the earliest stages, with analyses repeated and refined over time. It is no accident that, according to a large survey published earlier this year by the IAPP, a majority of organizations have turned to their privacy practices to help set up and manage emerging AI governance programs [4]. We propose to build on our expertise in privacy engineering, a discipline we helped define over the past dozen years [1,2,3], including our expertise operationalizing privacy threat modeling frameworks, to develop similar models for AI risk management.
- Responsible Procurement of AI in Municipal Government: Hoda Heidari
AI is increasingly utilized in the public sector to optimize workflows and improve services. AI technologies are often not created in-house but procured from external vendors. This project explores how municipalities of varying sizes and demographics in the United States are currently procuring AI products and services. This study seeks to provide generalizable insights in response to two key questions. The first is descriptive: What are municipalities' current practices and challenges in procuring AI responsibly? The second question is prescriptive: Can we develop practical resources to scaffold and improve AI procurement processes? This study will allow us to take an initial step toward developing templates, checklists, training resources, and regulatory recommendations that close the identified gaps. The research will be conducted in partnership with several municipal governments, and its outputs will be made publicly available to maximize the potential for positive impact.
- Integrating Responsible AI Assessments in ML Workflows for Public Policy Applications: Rayid Ghani
The NIST AI RMF presents a unified framework that ML practitioners and policymakers can adapt to specific policy areas and use cases. While the NIST AI RMF takes urgent and critical steps toward formalizing the notions of risk around using AI to inform consequential decisions, much work remains to map the high-level framework elements to processes and tools that enable the operationalization of risk management elements in real-world use cases. In this effort, we will seek to address this gap through three concurrent streams of work, with more emphasis on the first two: (1) creating processes that map NIST RMF issue areas to the AI system scoping, development, and validation processes; (2) developing accessible and customizable toolkits that can directly implement the risk measurement and management elements in the AI model development workflow in real-world use cases; and (3) training ML engineers and practitioners on implementing and customizing these processes and tools. We will anchor our work in concrete policy problems and use our existing partnerships to implement the proposed work.
- Guiding Future Civil Engineers in Responsible Generative-AI Transformation of Critical Civil Infrastructure Workflows: Pingbo Tang
This project asks how to leverage Generative Artificial Intelligence (GAI) to assist Architecture, Engineering, and Construction (AEC) professionals on critical civil infrastructure (CCI) projects while protecting workers, investment returns, and public welfare. The United States’ ongoing infrastructure investment plan underscores the need to enhance the capabilities of AEC professionals in producing design, engineering, and construction contracts, data, and documents for CCI projects (e.g., roads, bridges, and water plants). Many CCI projects overload AEC professionals, who frequently draw on historical data and technical documents to produce AEC materials for new projects. GAI systems, with their content generation capabilities and their ability to query and reuse vast text and image datasets, could help AEC professionals reuse historical project data, contracts, and technical documents in fast-track design and construction projects.
- Generative AI and the Arts: Understanding the Gaps to Creating a Skilled Creative Workforce: Daragh Byrne
Generative AI has become transformative across various creative fields, prompting both creative professionals and academic programs to adapt to the new technical and creative competencies it demands. This project aims to examine the disparities between the training provided by higher education programs and the requirements of industry practitioners in the context of generative AI. Through activities such as analyzing current courses, surveying alumni, and identifying skills gaps, the project seeks to inform educational policies and address skill gaps for current and future creative professionals engaging with generative AI in the workplace.
- Labor Shortages and Firm Search: Felix Koenig
Many firms report trouble filling vacancies, and many job vacancies go unfilled. Such labor shortages are pervasive across the skill spectrum, from highly specialized roles to jobs with less stringent skill requirements, like lifeguards or restaurant workers. Economists tend to think that shortages arise when prices (or wages) are wrong, and our study plans to analyze when and why firms adjust wages and non-wage policies over time to fill their jobs. We have partnered with a large staffing platform to conduct two randomized controlled trials (RCTs) that analyze potential explanations and solutions. The first RCT randomizes the wage for unfilled jobs by randomly adding bonus wages, allowing estimation of the labor supply elasticity to the firm. Early results indicate that raising wages is an effective tool for filling positions. The second RCT analyzes whether firms are interested in such wage adjustments if they are automated: it presents hiring managers with different dynamic adjustment policies that our partner can implement on its platform. We plan to examine whether and under which conditions firms opt for automation of dynamic wage adjustment.
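As a rough illustration of the estimand in the first RCT, the randomized bonus creates exogenous wage variation from which a firm-level labor supply elasticity can be read off. The notation below is ours and abstracts away from covariates, clustering, and dynamics; it is a sketch of the idea, not the study's specification.

```latex
% Sketch only (our notation): elasticity of labor supply to the firm from the wage RCT.
% A_1, A_0 = applications (or fill rates) per vacancy with and without the bonus wage;
% w_1, w_0 = posted wage with and without the randomly assigned bonus.
\[
  \hat{\varepsilon} \;=\; \frac{\ln A_1 - \ln A_0}{\ln w_1 - \ln w_0}
\]
```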
- Systemic Policy Evaluation of the Impact of Augmented Intelligence on Effectiveness of Decision-Making and its Role in Reducing Economic Disparities: Anand Rao
This research aims to explore the systemic impact of augmented intelligence on decision-making and its potential role in reducing economic disparities. Several studies over the last decade have examined the significant influence of AI and automation on the U.S. workforce, notably in job displacement and wage inequality. While automation has contributed to wage inequality, there is a growing emphasis on the potential of AI to augment, rather than replace, human capabilities, particularly given recent advances in generative AI.
- Job Search at a Distance: Labor Demand Frictions and the Perceptions of Employers: Ashley Orr
We have completed implementation of the large-scale resume audit (correspondence) experiment, which explores whether hiring firms have a detectable preference for or against job seekers applying from a distance. This proposal partially funded the completion of that causal investigation. Results will shed light on how job seekers at a distance can overcome a structural barrier to migration and employment, and how local economic development authorities might help both geographically mobile job seekers and local and non-local firms find efficient labor market matches.
- Learning in Human-AI Teams: Zhaoui (Zoey) Jiang
Despite rapid developments in algorithms and artificial intelligence (AI), AI systems often do not operate fully autonomously; instead, humans and AI must jointly make decisions. Because humans and AI systems have complementary skills and information, their joint decision-making could be better than either human or AI decision-making alone.
Spring 2022 Funded Projects
- Developing Automation Policy to Ensure Worker Health and Safety in the Hospitality Sector: Jodi Forlizzi, Sarah Fox, and Chinmay Kulkarni
- AssetMappr: Building the Case for Informed Community Investments: Rick Stafford
- Jobs in an Appalachian Clean Energy Transition: A Regional Skills-Matching: Rick Stafford and Valerie Karplus
- Using New Transportation Options to Drive Low-Income Citizens to Greater Success: Beibei Li and Lee Branstetter
- Bridging Policy Gaps in the Life Cycle of Public Algorithmic Systems: Motahhare Eslami, Ken Holstein, and Sarah Fox
- Supporting Effective AI-Augmented Decision-Making in Content Moderation: Haiyi Zhu, Kenneth Holstein, and Steven Wu
2021 Funded Projects
Policy Fund Projects
- Policy Challenges, Intermediation and High-Frequency Equity Trading: Chester Spatt
- Reengineering the PA Commission on Sentencing Use of the Prior Record Score: Daniel Nagin
- Translating Transportation Electrification Investments: Javad Mohammadi, Osman Yagan, Pedro Moura
- Co-Developing Automation Policy for the Post-COVID Hospitality Industry: Jodi Forlizzi, Sarah Fox, Chinmay Kulkarni
Research Seed Fund Projects
- Decentralized Risk-Limiting Election Audits: Aaditya Ramdas
- Using Digital Courseware To Improve Educational Opportunities for Students in Underprivileged School Districts: Pedro Ferreira, Michael Smith
- Willingness to Pay for Workplace Safety: Felix Koenig, Massimo Anelli
- Supporting Responsible Use of Algorithmic Decision Support in Child Welfare: Kenneth Holstein, Alexandra Chouldechova, Steven Wu, Haiyi Zhu
Each year, more than 3.6 million referrals are made to child protection agencies across the US, putting pressure on the child protection system to prioritize investigative resources towards the children most in need. As agencies turn to new technologies to augment workers’ abilities to effectively prioritize cases, scientific knowledge remains scarce regarding how to support the responsible use of these technologies in high-stakes work contexts. In this project, researchers will investigate how to support responsible and effective use of algorithmic decision support (ADS) in child welfare decision-making, and aim to generate new knowledge regarding how agencies can improve fairness and equity in their decision processes and outcomes.
See below for detailed descriptions of the 2019 and 2020 projects.
2020 Funded Projects - Detailed Descriptions
In 2020, we funded twelve projects that brought together Carnegie Mellon University’s world-class researchers in collaboration with other academic institutions and local and national practitioners, applying cutting-edge AI and machine learning techniques and proven social science-based approaches to address our most critical and timely societal challenges. Projects helped respond to the pandemic, address long-standing racial inequities, and ensure the security and transparency of U.S. elections.
Developing technological solutions to combat COVID-19
The Impact of the COVID-19 Shutdown on the Most Vulnerable Households: New Data to Identify the Greatest Need (Lee Branstetter and Beibei Li)
In a matter of months, the COVID-19 pandemic and economic shutdown have severely jeopardized already marginalized and at-risk populations in ways unlike other economic recessions. To address the economic downturn, limited resources must be correctly allocated in a rapid and effective manner. Using large-scale GPS data in combination with data records maintained by Allegheny County’s Department of Health and Human Services, this project will equip policymakers and service providers with the tools to respond to the urgent medical, social and economic needs of this unprecedented crisis. For example, this project aims to characterize the loss of both “formal” and “informal” income, going beyond traditional metrics to examine the effect of the shutdown on the gig economy. Additionally, the insights gathered from this project will help to bolster contact tracing efforts and evaluate the impact of the pandemic on income loss, mental health, housing instability, and other social factors that are difficult to identify and measure with existing data resources.
Designing Better Autonomous-Transit Systems for Enhanced Workforce Resilience (Corey Harper and Destenie Nock)
From wildfires to the current pandemic, there is an increasing demand to identify new systems that meet critical transport needs without putting drivers at risk. Autonomous vehicles could meet such a need while bolstering a more environmentally friendly transportation infrastructure. During the current COVID-19 crisis, transportation systems sit at the crux of quarantine management and workforce protection, as essential workers who rely on public transportation, along with transit workers themselves, are at high risk of exposure to the virus. Autonomous vehicles could support contactless delivery, emergency response and evacuation, and essential worker commuting. This study will consider the workforce implications and climate impact of integrating autonomous vehicles into transit systems, as well as develop practical tools to inform policy regarding autonomous vehicle adoption and infrastructure development.
Co-Developing Automation Policy for the Post-COVID Hospitality Industry (Jodi Forlizzi, Sarah Fox, and Chinmay Kulkarni)
The COVID-19 pandemic has had a devastating impact on the hospitality industry, causing this major sector to effectively grind to a halt. In April, a staggering 98% of members of UNITE HERE, the largest hospitality workers union in North America, had been furloughed or laid off. While automation is a promising avenue to revitalize this sector and complement human labor, there are reasonable concerns about displacement or a lack of employee input into its integration. In partnership with UNITE HERE, we are co-designing automation policy and visioning technology that addresses the needs and concerns of workers in the hospitality industry, ensures that work in this sector is safe and fulfilling, and supports policymaking and collective action.
An Open-Source Decision Tool to Identify and Support Responses to Emergent Constraints in the Medical Supply Chain (Erica R.H. Fuchs, Valerie J. Karplus, M. Granger Morgan, and Sandra DeVincent Wolf)
In the early stages of the COVID-19 pandemic, a supply chain breakdown resulted in a shortage of the elastic for ear loops, preventing the production of at least 9 million additional medical masks. This is just one example of the myriad ways in which the pandemic has emphasized the significant global interdependencies within the health and manufacturing sectors, as well as hampered these sectors’ ability to meet the increased demand for critical PPE. While identifying and responding to these weaknesses in the medical supply chain is now particularly important, the supply chain issues highlighted by COVID-19 are a long-term problem. In order to overcome existing and future bottlenecks and capacity constraints, we are working with Catalyst Connection, the Manufacturing Extension Partnership of Southwestern Pennsylvania, to develop an open-source decision tool that uses publicly available data to inform domestic and international manufacturing responses to resource demands and guide innovation to meet supply constraints.
PaCE: Developing a Pandemic Consumption Expenditure Index (Laurence Ales, Rebecca Lessem, Christopher I. Telmer, and Ariel Zetlin-Jones)
OpenTable restaurant reservations rapidly collapsed two weeks prior to Pennsylvania Governor Tom Wolf’s Shelter-In-Place order. Movie theaters in Georgia remain empty in spite of the state government’s decision to re-open them on April 27. These are just two examples suggesting that consumer behavior, not government mandate, will be the key driver of the economy’s transition from lock-down back to economic health. Through the development of a Pandemic Consumption Expenditure Index (PaCE), we are examining consumer behaviors in real time in order to minimize the economic impact of the COVID-19 pandemic and bolster critical decision-making for policymakers, businesses and workers through each phase of lockdown and reopening. Beyond COVID-19, PaCE will help policymakers better understand how other pandemic-like shocks to the economy impact consumer behavior at a level far more granular and relevant than existing data sources.
The Role Of Co-Experience And Technology In Mitigating Isolation From Social Distancing (Erin Carbone, Laura Dabbish, Simon DeDeo, and George Loewenstein)
The prolonged isolation caused by COVID-19 has translated into depression and other mental health issues for nearly half of American adults. Though COVID-19 has drastically exacerbated the problem, social isolation was already a long-term problem threatening the elderly, mentally ill and other vulnerable populations, and will continue to be when COVID-19 recedes into the past. While technology will never be a perfect substitute for in-person interaction, it can help mitigate the effects of social isolation. This project considers the impact of virtual interactions on loneliness, social connectedness, and mental wellbeing through a psychological and behavioral lens, to identify and test the viability of developing sufficient substitutes for in-person interaction by leveraging the design features inherent to video chat, phone calls, text messaging, and other communication channels. The specific focus of the project is on the importance of the simultaneity of the experience – i.e., knowledge that the individuals are experiencing the same thing at the same time – as well as mutual awareness of that simultaneity.
Equity and Fairness in AI
Fast and Fair Hiring via Segmented Evaluations
On average, a given job listing in the United States will attract applications from 118 candidates. As widespread job losses in certain sectors coupled with surges in labor demand in others have drawn attention to several major flaws in the hiring process, this project considers a novel human resources approach to counteract long-standing biases and inefficiencies in hiring. Traditionally, an individual hiring manager or a small group of hiring managers reviews a complete application to decide whether or not to hire an individual. However, there is significant evidence that this process is not only inefficient, but often biased against people of color, immigrants, and other marginalized demographic groups. To address these issues, the research team has developed a novel blended human-AI approach where applications are evaluated in a segmented manner. Throughout the year, the team will test and study the new approach to measure improvements in equity and efficiency.
Understanding Historical Biases in EEG Data and Neuroscientific Studies Because of Bias in Data Acquisition Systems (Pulkit Grover, Shawn K. Kelly, Ashwati Krishnan, and Christina M. Patterson)
Electroencephalography (EEG) has been a common tool in medical practice for identifying brain conditions for nearly one hundred years, yet only recently did a team of researchers at Carnegie Mellon University determine that common EEG machines are less effective for patients with coarse and curly hair, which is common in individuals of African descent. This significant gap has shed light on the sampling bias inherent to past EEG data, which informs clinical decision-making regarding stroke, epilepsy, and other neural conditions. Through a comparative analysis of neuroscientific data gathered using traditional EEG systems and “gold-cup” electrodes developed to work with coarse and curly hair, this project aims to quantify statistical differences between these two methods and explore algorithmic techniques for correcting bias in past EEG data, as well as incentivize clinicians and neuroscientists to switch to newer, fairer EEG systems.
An Integrated Framework for Studying and Regulating Human-AI Hybrid Decision-Making Systems (David Danks and Zachary Chase Lipton)
From generating risk scores that inform lending decisions to helping to narrow a pool of job applicants, algorithms are increasingly used as a predictive tool to improve the choices made by human decision-makers. However, little is known about the ways in which humans interpret, use, and trust AI-supported tools. Through a series of behavioral experiments, this project aims to better understand how AI-supported tools actually impact human judgment and evaluation patterns. These experiments will include examining how participants react when an algorithm supports or refutes their initial judgment, how access to additional information impacts decision-making, and how the benefits of correct judgments and costs of incorrect decisions affect participants’ behavior.
Laying the Technological Groundwork for Child Welfare Decision Support Systems through Advanced NLP (Alexandra Chouldechova, David Steier, and Yulia Tsvetkov)
Child welfare agencies across the country are continually looking for ways to use their data to better support families and improve decisions at every stage of their processes. Recent efforts to develop machine learning tools for child welfare, such as those in Allegheny County, PA and Douglas County, CO, have primarily focused on structured administrative data. In this project, we are partnering with the Allegheny County Department of Human Services on an innovative effort to leverage unstructured free-text data to support case management, supervision, and quality improvement. We are developing advanced natural language processing technologies capable of using high volumes of both structured and unstructured data to assist service providers, caseworkers, supervisors, and other DHS staff.
Making Explainable Machine Learning Work for Public Policy Problems (Kasun Amarasinghe, Rayid Ghani, Kit Rodolfa, and Ameet Talwalkar)
As applications of machine learning and AI are rapidly expanding into new areas of public life, it is important to consider whether public sector decision makers adequately understand the outputs of these algorithms in a way that leads to optimal outcomes for society. To address this gap, we are exploring the applicability and effectiveness of existing approaches for explainable machine learning in public policy contexts. This project will examine the impact of improved explainability on policy outcomes related to increasing high school graduation rates, preventing adverse police interactions with the public, supporting mental health interventions to break the cycle of incarceration, and reducing long-term unemployment. Based on these results, we aim to generate a set of guidelines for governments, non-profits, and policymakers who are procuring machine learning systems, as well as for the researchers and practitioners who might be developing systems for use in public sector decision-making.
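To make concrete what an "existing approach for explainable machine learning" can look like in this kind of workflow, here is a minimal, self-contained sketch of permutation feature importance on synthetic data. The feature names, data, and model are hypothetical illustrations, not the project's data or chosen method.

```python
# Illustrative sketch only: one common model-agnostic explanation technique
# (permutation feature importance), of the kind this project evaluates in
# public-policy settings. Data and feature names here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features for a policy-style risk model (synthetic, for illustration).
X = rng.normal(size=(n, 3))
feature_names = ["attendance_rate", "prior_interventions", "household_income"]
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much held-out accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```

Explanations like these are only useful to decision makers if the reported importances map onto concepts they can act on, which is exactly the kind of question the project studies.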
Improving the transparency, accuracy and effectiveness of U.S. Elections
Developing and deploying risk-limiting audits with continuous monitoring (Aaditya Ramdas and Michael I. Shamos)
According to a recent Gallup poll, 59% of Americans say they are not confident in the honesty of U.S. elections. Public trust in elections is integral to democracy. At the same time, manual election recounts when voting results are called into question cost time and money, and may not always be possible due to lack of paper trails. Even the process of deciding to perform a recount is generally ad hoc and triggered by post-election concerns. This project aims to use rigorous statistical tools to continuously monitor a post-election audit and potentially end it early, as soon as it can be verified with high confidence that the announced result is correct. The increased efficiency of auditing will lower the time and money involved without sacrificing legitimacy, and encourage more states to normalize the creation of paper trails as well as sound post-election audit processes. Through better election transparency, we aim to support the restoration of trust in the democratic process, while maintaining accuracy in determining election outcomes.
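For intuition about how an audit can be monitored continuously and stopped early, the sketch below implements a classic BRAVO-style ballot-polling stopping rule for a two-candidate contest. It is an illustrative baseline under simplifying assumptions (no invalid ballots, an accurate reported share), not the statistical machinery this project is developing.

```python
# Illustrative sketch only: a classic BRAVO-style sequential test for a
# two-candidate ballot-polling audit. The project's actual methods may differ.
def bravo_audit(sample, reported_winner_share, risk_limit=0.05):
    """Return the number of ballots examined before the reported outcome is
    confirmed, or None if the sample is exhausted without confirmation.

    sample: iterable of 1 (ballot for reported winner) or 0 (for the loser).
    reported_winner_share: reported vote share of the winner (must exceed 0.5).
    risk_limit: maximum chance of certifying a wrong outcome.
    """
    assert reported_winner_share > 0.5
    threshold = 1.0 / risk_limit      # stop when the likelihood ratio exceeds this
    ratio = 1.0
    for i, ballot in enumerate(sample, start=1):
        if ballot == 1:
            ratio *= reported_winner_share / 0.5
        else:
            ratio *= (1.0 - reported_winner_share) / 0.5
        if ratio >= threshold:
            return i                  # audit can stop early: outcome confirmed
    return None                       # escalate (e.g., to a full hand count)

# Example: a contest reportedly won with 55% of the vote, audited at a 5% risk limit.
import random
random.seed(1)
ballots = [1 if random.random() < 0.55 else 0 for _ in range(10_000)]
print(bravo_audit(ballots, reported_winner_share=0.55))
```

In this toy setting, a comfortably decided contest is usually confirmed from a modest sample rather than a full recount, which conveys the efficiency motivation behind risk-limiting audits.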
2019 Funded Projects - Detailed Descriptions
AI for Good
Evaluating People’s Perceptions of Fairness in Machine Learning (Jason Hong, Nina Balcan, Ariel Procaccia, Hong Shen)
One critical question facing today’s ML/AI practitioners and public policy makers is: in a given AI system, which fairness notion is most acceptable to the people affected by the system? This project aims to develop better ways of presenting the results of machine learning models to laypeople. This will be investigated using crowdsourcing (e.g., Amazon Mechanical Turk) to measure and aggregate people's perceptions of fairness in a scalable manner.
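As a small illustration of the kind of competing fairness notions such a study might ask participants to weigh, the sketch below computes two standard group-level criteria (demographic parity and equal opportunity) on a tiny synthetic example. The data and decisions are invented for illustration and are not part of the project.

```python
# Illustrative sketch only: two statistical fairness notions that participants
# might be asked to compare, computed on a tiny synthetic example.
import numpy as np

# Synthetic records: group membership, model decision (1 = favorable), true label.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
decision = np.array([1, 1, 0, 0, 1, 0, 0, 0])
label = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def selection_rate(g):
    # Demographic parity compares favorable-decision rates across groups.
    return decision[group == g].mean()

def true_positive_rate(g):
    # Equal opportunity compares approval rates among truly qualified people.
    mask = (group == g) & (label == 1)
    return decision[mask].mean()

print("selection rates:", selection_rate(0), selection_rate(1))
print("true positive rates:", true_positive_rate(0), true_positive_rate(1))
```

The two criteria can disagree on the same decisions, which is precisely why eliciting which notion affected communities find most acceptable is a substantive research question rather than a purely technical one.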
Improving Breast Cancer Diagnosis with Interpretable Multimodal Machine Learning (Zachary Lipton, Adam Perer)
This project aims to increase the effectiveness of breast cancer screenings by using AI to enhance current processes for continually training radiologists using clinical data. This system will help doctors to discover their strengths, weaknesses, biases, and blind spots, providing feedback that can help them to improve their readings, as well as identifying cases that could be discussed in department reviews to better drive departmental education.
Counterfactual Risk Assessment for Improved Decision Support in Child Welfare Services (Alexandra Chouldechova, Edward Kennedy)
While access to linked administrative data is increasingly available, it is difficult for child welfare workers to make systematic use of historical information about all the children and adults on a single referral call. Predictive analytics offers a way forward. By building risk assessment models using routinely collected administrative data, this project aims to better identify cases that are the most likely to result in adverse child welfare outcomes. These risk estimates could then be supplied to call workers in real time to help them prioritize cases for investigation and the offering of services.
Uncovering the Source of Machine Bias (Yan Huang, Param Vir Singh, Duyu Chen)
This project addresses machine bias by examining its potential sources. In particular, it seeks a deeper understanding of how human biases generate different separations for different groups, which then translate into different separations in machine-assisted decision-making, in cases where the data-collection process is endogenous. In the next phase of the project, researchers will test their proposed method with real-world hiring data. They will collaborate with organizations that aim to automate their hiring and job performance evaluation practices.
Future of Work
Gigs, Risks, and Skills (Erina Ytsma and Geoff Parker)
This project studies the value of job security to low-skilled workers, the demand for skills training by blue-collar workers and employers, and the role that online labor market platforms can play in skills training and income risk mitigation. The first objective of the project is to estimate the demand for income risk mitigation for blue-collar workers, and to assess the feasibility and profitability of online labor market platforms providing some level of income insurance. The second objective is to understand the demand for general skills training on both the supply and demand sides of the low-skilled labor market.
Entrepreneurship and the Platform Economy (Matthew Denes)
This project investigates whether platform-based jobs can incentivize and support entrepreneurship by providing nascent business owners with an additional source of income. In addition to characterizing the size and scope of the platform economy, this research group is evaluating the impact of gig employment on both income volatility and entrepreneurial activity. These insights could inform the development of policy to support entrepreneurship and protect gig workers.
Societal Futures
Better Videos for Better Education (Michael Smith and Pedro Ferreira)
The project’s aim is to help create a center of excellence at CMU studying how technological change in the education market will impact students, educators, and society as a whole, and to use rigorous academic research to positively influence these outcomes. The researchers’ immediate goal is to analyze the impact of technological change in the context of video instruction. To the best of their knowledge, there are no large-scale systematic studies analyzing whether and how students learn with video.
Diversity and Inclusion in Open Source Software Development (Laura Dabbish, Jim Herbsleb)
This project proposes empirical research on diversity and inclusion in open collaboration software projects to inform the design of socio-technical interventions to enhance participation of women in open source. The researchers’ intended broader impact is to extend the design of these environments and to engage in a dialog with the technical community entrusted with shepherding open source software. The goal of this work is to support the design and management of open collaboration organizations so that these systems support effective career growth for women and inclusive participation by a diverse population.