A doctor using AI

Revolutionizing Health Care: Harnessing Artificial Intelligence for Better Patient Care

Media Inquiries
Peter Kerwin, University Communications & Marketing

The integration of artificial intelligence (AI) represents one of the most promising — and fraught — advancements in the ever-evolving landscape of the health care sector. Electronic records let physicians access patient information more easily. Phone apps and web-based tools allow users to schedule appointments and check test results online. The COVID-19 pandemic illustrated how telehealth appointments can be a crucial tool in connecting patients with doctors.

Rema Padman, Trustees Professor of Management Science and Healthcare Informatics at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, recently sat down to share some of her research and to discuss the opportunities and challenges of AI in health care.

Empowering Patients through Digital Solutions

One key application of AI in health care is the development of digital solutions that enhance health literacy — a significant but understudied challenge in health care delivery.

“A health-literate patient is really an engaged patient, and an engaged patient has better outcomes,” Padman explained. “How can we leverage these technologies and analytics to create solutions that would be informative and engaging for people to access, and simultaneously improve their knowledge about managing their conditions?”

Digital therapeutics — using software and evidence-based insights to build digital solutions that aim to bring about behavior change — is one answer. When a doctor diagnoses a patient with an illness, the treatment often includes a prescription for medication. With digital therapeutics, the doctor might also prescribe a multimedia-rich video that explains the condition and another that helps the patient understand what the medication is, exactly when and how they should take it, and what the side effects might be. 

Padman said that there are several million health-related videos on YouTube alone. While academic medical centers and large health care organizations like the University of Pittsburgh Medical Center (UPMC), Mayo Clinic and Johns Hopkins University create reliable content, there are also videos made by laypeople who have no medical training and strong, potentially erroneous opinions about a particular illness or treatment.

Some videos, even from credible sources, might not take into consideration the specific needs of a particular population of people — such as dietary customs, nutritional needs, age, race or gender. Asking an already overtaxed doctor to sort through thousands of videos for a particular patient is unrealistic.

AI can help.

“What we’re doing is using AI, ML/DL (machine learning/deep learning) and natural language processing to curate these videos,” Padman said. “We are asking questions: Does the video contain accurate medical information? Is it understandable by the layperson? Is it at an eighth-grade or a fifth-grade reading level? Is it laid out well? Is it audible? Is it short — under five minutes? Does it provide some kind of actionable guidance? Is it trustworthy? Is it fair and unbiased? Is it inclusive and representative?”

Rema Padman at the AI for Good Global Summit

Rema Padman shared insights about the use of AI in health care in Geneva, Switzerland.

The AI model Padman is helping to develop filters all that video content to find a shortlist of videos. With that curated list, the medical practice can conduct a final review for accuracy and timeliness of content, and determine a few of the most pertinent videos that best fit their patient populations and keep them handy to prescribe when needed.
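
The article does not spell out the curation pipeline, but a couple of the screening questions Padman lists (reading level, length) lend themselves to simple automated checks. The sketch below is a minimal illustration, assuming access to each video's transcript and duration; the class name, thresholds and readability heuristic are hypothetical, and criteria like accuracy, trustworthiness and inclusivity would still require expert and model-based review.

```python
import re
from dataclasses import dataclass


@dataclass
class VideoCandidate:
    title: str
    transcript: str        # e.g., closed-caption text pulled for the video
    duration_seconds: int


def flesch_kincaid_grade(text: str) -> float:
    """Rough reading-grade estimate from sentence, word and syllable counts."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Crude syllable proxy: count groups of consecutive vowels in each word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n_words = max(1, len(words))
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59


def passes_basic_screen(video: VideoCandidate,
                        max_grade: float = 8.0,          # "eighth-grade reading level"
                        max_seconds: int = 300) -> bool:  # "under five minutes"
    """Keep only short videos written at roughly an eighth-grade level or below."""
    return (video.duration_seconds <= max_seconds
            and flesch_kincaid_grade(video.transcript) <= max_grade)


candidates = [
    VideoCandidate("Managing type 2 diabetes", "Take your medicine with food.", 240),
    VideoCandidate("Insulin resistance lecture", "Pathophysiology of insulin resistance in depth.", 2400),
]
shortlist = [v for v in candidates if passes_basic_screen(v)]
print([v.title for v in shortlist])
```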

Padman is among the authors — an advisory group of 25 members including physicians, thought leaders and technologists from around the world — of a recent publication in the National Academy of Medicine’s NAM Perspectives who have created criteria for evaluating the credibility of health-related social media content produced by nonprofit organizations, for-profit organizations and individuals. The achievement is noteworthy because having formal, agreed-upon standards in place is a key step in establishing trustworthiness.

Identifying credible sources is one challenge. Inclusivity and representation are similarly essential components.

Currently, much of the content deemed reliable is created by domain experts at health care organizations. While women have entered these ranks, few of the health-related videos are narrated by women or people of color. Padman framed that discrepancy as a “rapport” problem for patients.

Padman, who is of South Asian descent herself, offered the example of a patient diagnosed with diabetes: hearing information from a diabetes educator who is familiar with South Asian cultural norms and food habits could help improve that patient’s adherence to recommendations.

“When I see videos that reference five food groups and so on, that's not aligned with the way my meals are made,” Padman said. “And studies have shown that, especially among African American populations, Latin populations, and others, rapport is very important for uptake of guidelines, prescriptions and therapies.”

When someone’s health is at stake, inclusivity and fairness can be matters of life and death. The video prescriptions are a helpful tool, but not a perfect antidote.

“We cannot just blindly apply AI/ML and say the resulting videos are fully curated when they don’t really meet the health-education needs of the population,” Padman explained. Explicitly evaluating the videos to improve inclusivity is one way Padman is trying to address and mitigate potential biases.

Digital Vaccines: AI + Gamification for Pediatric Health Literacy

Digital therapeutics can work for children as well as adults. Padman and FriendsLearn, a developer of gamified mobile platforms, have created a digital vaccine and are collaborating with researchers from CMU, the University of Michigan, Johns Hopkins University, Hofstra University, and Voluntary Health Services in India to evaluate this gamified mobile health platform, which incorporates the science and technology of neurocognitive training and implicit learning via AI-enabled immersive gaming.

It currently targets early childhood nutrition literacy, physical activity and health-hygiene promotion in a fun and engaging way for kids; the project has the potential to expand to other literacy needs such as mental health and wellness care. The goals are to improve healthy-lifestyle knowledge as well as behavioral and clinical outcomes. Like a traditional vaccine, digital vaccines are meant to prevent disease. The difference is that digital vaccines work by orienting the brain toward healthy behaviors, rather than stimulating a physical reaction in the immune system.

The fooya! app targets early childhood nutrition literacy, physical activity, and health-hygiene promotion in a fun and engaging way for kids.

Padman’s team conducted a randomized controlled trial (RCT) with a low-resource school in India during the fall of 2022. Another RCT is underway with UPMC Children's Hospital of Pittsburgh, focused on children with Type 1 diabetes. The results are promising, and the project has already garnered attention: it won the 2022 Transformational Solutions – New Frontiers Award from the Financial Times and the World Bank's International Finance Corporation for pioneering deep-tech breakthroughs with digital vaccines.

AI for Improved Chronic Disease Management

AI has the potential to be an invaluable tool in chronic disease monitoring and management. Risk assessment models enable health care providers to identify high-risk patients and tailor interventions accordingly, but the models also have ethical implications related to bias. Effective, unbiased models depend on large quantities of comprehensive, representative data. If the data source — say, 1 million electronic health records — lacks information about Black women with heart disease, the model could produce an inaccurate risk prediction for a Black female patient with that condition.
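
As a loose illustration of the data-gap problem described above (not a method from the article), the sketch below simply tallies how many records each demographic and clinical subgroup contributes to a training set, so that groups with too little data can be flagged before a risk model is trusted for them. The field names and threshold are hypothetical.

```python
from collections import Counter


def subgroup_counts(records, keys=("race", "sex", "condition")):
    """Tally how many records fall in each demographic/clinical subgroup."""
    return Counter(tuple(r[k] for k in keys) for r in records)


def flag_underrepresented(records, min_count=500):
    """Return subgroups with too few records to trust the model's predictions for them."""
    return [group for group, n in subgroup_counts(records).items() if n < min_count]


# Toy stand-in for what would be roughly 1 million electronic health records.
records = [
    {"race": "Black", "sex": "F", "condition": "heart disease"},
    {"race": "White", "sex": "M", "condition": "heart disease"},
    {"race": "White", "sex": "M", "condition": "heart disease"},
]
print(flag_underrepresented(records, min_count=2))  # -> [('Black', 'F', 'heart disease')]
```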

Effective tools also depend on technologists to ask the right questions, to identify potential blind spots that could result in bias, and to design AI models that result in equitable health care solutions. CMU’s Block Center for Technology and Society was formed to consider issues such as these. For example, Rayid Ghani, Distinguished Career Professor in the Machine Learning Department and the Heinz College, has used ML to predict risk equitably for lead poisoning in children.

Good information allows doctors to make the right prediction for the right individual at the right time — ultimately resulting in better outcomes for patients.

Reducing a physician’s cognitive workload is another potential benefit of AI. Currently, during a 15-minute visit with a patient, the doctor is often trying simultaneously to transcribe notes, complete insurance checklists, review past electronic health notes, and listen to the patient’s concerns. An AI system that could address the insurance requirements, summarize the patient’s history and highlight key aspects for the doctor to cover during the appointment would allow more time and cognitive bandwidth for the physician to focus on the patient. Padman and her colleagues have focused on combining AI and optimization to minimize the cognitive workload that physicians experience while placing orders in hospital information systems.
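
The article does not describe how Padman's order-entry work combines AI and optimization, but one generic way to reduce clicks during ordering is to suggest items that historically co-occur with what the physician has already entered. The sketch below is a hypothetical illustration of that idea; the order names and session data are invented.

```python
from collections import Counter
from itertools import combinations

# Invented history of which orders were placed together in past sessions.
past_order_sessions = [
    {"CBC", "basic metabolic panel", "chest x-ray"},
    {"CBC", "basic metabolic panel"},
    {"CBC", "blood culture", "chest x-ray"},
]


def co_order_counts(sessions):
    """Count how often each pair of orders appears in the same session."""
    pair_counts = Counter()
    for session in sessions:
        for a, b in combinations(sorted(session), 2):
            pair_counts[(a, b)] += 1
    return pair_counts


def suggest_next(current_orders, sessions, top_n=3):
    """Rank not-yet-ordered items by how often they co-occur with the current ones."""
    scores = Counter()
    for (a, b), n in co_order_counts(sessions).items():
        if a in current_orders and b not in current_orders:
            scores[b] += n
        elif b in current_orders and a not in current_orders:
            scores[a] += n
    return scores.most_common(top_n)


print(suggest_next({"CBC"}, past_order_sessions))
```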

AI in Action: Interdisciplinary Team Triumphs

A recent project completed by graduate students at CMU and the University of Pittsburgh matched breast cancer patients with appropriate clinical trials. The interdisciplinary team of Alexander Chih-Chieh Chang, Yuwei Guo, Shiqi Liang, Katelin Lauren Rimando Avenir, Aditya Singh, Rabira Tusi and Anirudh Vaidhyaa Venkatasubramanian included students from Pitt-CMU’s joint M.D.-Ph.D. program as well as master’s-level students studying design, information systems, and business intelligence and data analytics. Padman served as their faculty mentor.

More than 80% of clinical trials fail because of a lack of patients. Among clinical trials that move forward, recruiting enough participants who fit the trial criteria and are representative of the larger population in terms of ethnicity and race is a significant challenge. The student team recently won first place and $35,000 in the Third Coast Augmented Intelligence for Health Equity Bowl by developing a solution for these dilemmas. Their AIquitas tool used ML algorithms that compare patient records with clinical trial language and then compute compatibility scores.
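
The article does not detail AIquitas's algorithms. As a rough illustration of the general idea of comparing a patient record with clinical trial language and producing a compatibility score, the sketch below uses a simple bag-of-words cosine similarity; the trial names, record text and scoring approach are illustrative, not the team's implementation.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a piece of text."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def compatibility_scores(patient_record: str, trials: dict) -> list:
    """Rank trials by textual similarity between the record and each trial's criteria."""
    patient = tokenize(patient_record)
    scored = [(name, round(cosine_similarity(patient, tokenize(criteria)), 3))
              for name, criteria in trials.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)


record = "58-year-old female, HER2-positive breast cancer, no prior chemotherapy"
trials = {
    "Trial A": "HER2-positive breast cancer, chemotherapy-naive adults",
    "Trial B": "Triple-negative breast cancer after first-line chemotherapy",
}
print(compatibility_scores(record, trials))
```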

Team AIquitas

Members of Team AIquitas celebrate after finishing first and earning $35,000.

During their presentation, the team noted that for all patients — and especially for people of color, because of the problematic history of unethical medical testing on racial minorities — having a trusting relationship with their physician is an important predictor of whether a patient will choose to participate in a clinical trial. With a tool like AIquitas, physicians would benefit from the efficiency and effectiveness of an automated trial-screening process, freeing up time that can be spent caring for and listening to patients. Patients would benefit from receiving personalized recommendations about which clinical trials are the best fit for them. Supported with such information, patients could more easily seek out trials directly.

The endgame is for every patient to have better access to clinical trials, regardless of socioeconomic status or race. The team is in the process of publishing their work; they hope to pilot the tool at UPMC and eventually bring it to market.

Optimizing Operations With AI

AI can be particularly effective in streamlining operations within health care systems. Tasks such as scheduling appointments or assigning patients to physicians within a practice could be accomplished efficiently with AI, if technologists can design the right parameters, ask the right questions and use the right data.

Patient needs are complex and unique. For example, an otherwise healthy 20-year-old might require a quick appointment and prescription for antibiotics to address an infection. An 80-year-old with underlying health conditions or comorbidities would need more time and attention from the physician to provide effective treatment for that same infection. 

“We really need to understand these clusters of patient pathways, different groups of patients requiring different sequences of activities, tasks and needs,” explained Padman. “Once we understand the major categories of patients, then we can subsequently design efficient ways of tracking tasks and serving their health care needs.” 
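
As a toy illustration of the "clusters of patient pathways" idea (not Padman's actual method), the sketch below tallies the sequences of care activities in a set of hypothetical patient journeys, so that the most common pathways stand out and can be designed for explicitly.

```python
from collections import Counter

# Hypothetical patient journeys, each recorded as a sequence of care activities.
pathways = [
    ("call", "primary care visit", "prescription"),                     # e.g., healthy 20-year-old
    ("call", "primary care visit", "prescription"),
    ("call", "primary care visit", "labs", "specialist", "follow-up"),  # e.g., 80-year-old with comorbidities
    ("call", "telehealth visit", "prescription"),
]

# Tally identical sequences; real pathway mining would also group similar sequences.
for sequence, count in Counter(pathways).most_common():
    print(f"{count}x  " + " -> ".join(sequence))
```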

Intelligent models could improve the clinical workflow — what Padman calls the “upstream to downstream” patient and clinician experience, from the time a patient calls to make an appointment throughout the care and follow-up process — considering whether the patient requires a specialist, has a preferred physician, is already being treated by multiple doctors, etc.

Why Getting It Right Matters

“Health care touches everybody,” Padman said. 

The integration of AI into health care holds immense promise for the future. From empowering patients of all ages, to assisting physicians with chronic disease management, to optimizing operations, AI has the potential to reshape health care delivery. The stakes are high to implement AI tools in a way that benefits all populations — and does not perpetuate existing inequalities.

“We have to think very deeply about these issues of accountability, ethics, liabilities, trustworthiness and biases,” Padman said, “and of course, to always keep in mind underserved and disadvantaged populations.”

Rema Padman
