CIB Research Fellows Investigate GenAI Applications in Medicine, Finance, Marketing, and More
By Dr. Emily Barrow Dejeu
What happens when you combine powerful generative AI technology with some of the most pressing questions in medicine, marketing, economics, finance, and human behavior? A group of Carnegie Mellon University researchers is doing just that—pushing the boundaries of generative AI (genAI) to explore not just what these tools can do, but how they might be used responsibly and effectively across a range of real-world domains.
Some of this work explores creating new AI-powered models to solve tough medical and financial problems:
- Sridhar Tayur and his collaborators propose a new framework – GenPrefBO – that leverages synthetic data and expert insights to determine optimal drug doses for individual patients, aiming to enhance treatment effectiveness while minimizing side effects. This AI-powered framework could eventually improve patient outcomes and potentially transform the field of precision medicine.
- Pierre Liang and his collaborator address the complexity of financial statement analysis by applying graph- and network-based tools to publicly available data in an effort to make analysis faster and more streamlined. Their preliminary work involved creating a large dataset of graphs; their next steps will involve comprehensively exploring these graphs and deploying anomaly detection tools to see how well their model can detect anomalous data in an unsupervised manner. Ultimately, their approach should make it easier for firms to detect potential data anomalies that deserve further scrutiny.
Other projects investigate the role that AI can play in marketing by analyzing consumer preferences and improving recommendation algorithms:
- Zoey Jiang explores a method for using generative AI tools to build a contextual consumer choice model that could help companies supplement or even replace costly traditional methods of gathering customer preferences, such as survey or focus group methods. This model could eventually inform key marketing and operational decisions, such as product development and modes of customer engagement. Jiang has already acquired an empirical dataset and developed initial pipelines and is beginning work on creating the model.
- Andrew Li researches practical, theoretically sound guidelines and algorithms for applying generative language tools to personalized recommender systems – a step that companies like Spotify, Amazon, and other leaders in media and e-commerce are already exploring. Li’s team has already created an algorithm that works on various types of neural network architecture and begun experimenting with real-world datasets.
Finally, other projects investigate the intersection of AI and human decision making, critical thinking, and ethical reasoning:
- Woody Zhu tackles the problem of information overload, where human decision-makers have to select the best solutions amidst an ever-growing set of choices. He researches a solution that keeps human judgment and critical thinking at the forefront while also reducing and streamlining information inputs – a novel framework he calls generative curation. This framework offers multiple thoughtfully curated solutions to human decision-makers, enabling them to choose the most desirable one while actively involving them in the decision-making process. Zhu has built and tested the framework and is now working on integrating human feedback into the model to improve its outputs.
- Yucheng Liang and his collaborator explore how aware (or not) people are of their own preferences when they search for information relevant to making an economic decision, and how this meta-awareness affects information acquisition. They developed a genAI-powered experiment whose preliminary results suggest that seeing a model question helped participants improve the questions they asked, and that information-seeking altered participants’ judgments. Liang and his team are currently conducting follow-up experiments.
- Taya Cohen and her collaborator research the considerations people have when deciding whether and how to use AI-powered tools for their work. They focus specifically on a moral character trait called guilt proneness, which reflects how likely someone is to anticipate feeling guilty before doing something wrong. Results from their recent survey-based study of 200 U.S.-based workers show that guilt proneness positively correlates with moral awareness and ethical considerations; specifically, higher guilt proneness shapes how individuals make sense of and weigh the decision of whether or not to use AI tools at work. Cohen and her team are now conducting a second version of the study with international AI experts.
- Derek Leben surveyed existing safety benchmarks for LLMs, identified the normative categories they use, and analyzed their strengths and weaknesses. Following this analysis, Leben developed a framework for evaluating existing benchmarks or creating a custom benchmark that aligns with a company’s values and the context of the LLM deployment.
Together, these projects reflect a growing recognition: GenAI isn’t just a new technology—it’s a new lens for understanding complex human systems. Whether it’s designing better analytical tools or exploring how humans interact with cutting-edge technology, this research aims to not only advance the capabilities of AI but to ensure those capabilities are developed with human needs, contexts, and judgments at the center.
Explore more about the CIB's genAI fellows' work