Carnegie Mellon University

Faculty

Our faculty are the core of our expertise in cutting-edge areas of research in intelligent business. Below, we highlight six faculty members whose work spans the various themes of the center.

Yan Huang

Yan Huang is an Associate Professor of Business Technologies at the Tepper School of Business. Her research examines the use of machine learning through an economic lens, evaluating the social and economic consequences of machine learning algorithms, specifically in the context of lending decisions.

  • Professor Huang and her co-authors consider algorithmic transparency within the lending market: one way algorithmic lenders can differentiate themselves is through the information they provide to applicants – whether they disclose personalized approval odds (generated by their algorithms) to consumers. In unregulated markets, lenders tend to engage in asymmetric disclosure (some lenders disclose while others do not) to create differentiation and soften price competition. When some lenders strategically withhold information, some consumers may make sub-optimal decisions when pursuing credit. Unfortunately, mandated disclosure may reduce lenders' incentives to invest in the accuracy of their algorithms. Thus, achieving both efficiency and algorithmic transparency may require other initiatives.
  • Professor Huang and her co-authors also consider the quality of algorithmic lending decisions: whether ML decisions are better than human decisions in a specific peer-to-peer lending market. They find that in their dataset the ML algorithm can indeed be superior to human decision-making – for both the investors (who see higher returns) and the borrowers (who see more lending opportunities). One caveat in their findings is that the algorithm can inherit bias from human decisions.
  • Focusing now on algorithmic bias, Professor Huang and co-authors study the sources, evolution, and impacts of bias in a micro-lending setting. They begin by studying the origins of bias among human evaluators, finding that in their data lenders tend to be biased against male borrowers who otherwise may be good applicants. Professor Huang and her collaborators then evaluate how these biases can be inherited by an ML algorithm, as well as how the inherited biases can be dampened by appropriately training the algorithm. This work provides insights into how best to reduce bias in lending decisions made by human and ML decision-makers.

Professor Huang is developing a robust research portfolio investigating the effects of ML decision-making within an economic framework, providing insights into how these algorithms may evolve and how this evolution can be improved.

Tae Wan Kim

Tae Wan Kim is an Associate Professor of Business Ethics at the Tepper School of Business. He is a globally recognized leader in the fields of machine ethics and ethical AI. His research focuses on several of our most pressing questions around the ethics of artificial intelligence:

  • What rights should individual members of society enjoy when important decisions are being made by artificial intelligence? Across several papers with a variety of co-authors, Dr. Kim argues that individuals have a Right to Explanation, or Right to Transparency, when decisions are being made partially or wholly by automated algorithms. This right stems both from the Right to Consent – as data and information are used in new and unforeseen ways to make societal decisions, there is a right to understand how this information is used – and from the Right to Privacy – as a form of protection for those whose data is used.
  • How should machines be trained to make ethical decisions? Here, Dr. Kim (along with John Hooker and Thomas Donaldson) presents the idea that machine ethics must be grounded in both ethical reasoning and empirical observation; decisions made solely on empirical observation are vulnerable to algorithmic bias. Using “value augmentation” with ethical reasoning can potentially overcome such biases.
  • How will the use of Artificial Intelligence affect workers? Again with several co-authors, Dr. Kim explores how the increasing use of artificial intelligence algorithms – what is often called the fourth industrial revolution – will affect workers. Leveraging principles of both Eastern and Western ethics, Dr. Kim interrogates what society can require of businesses, and the role businesses can play in helping ensure a fulfilling existence in these rapidly changing times.

Dr. Kim also works at the intersection of ethics and specific industries or technologies – for example, blockchain or medical transplantation. His research provides a compass for businesses trying to navigate the difficult ethical questions of our times.

Andrew Li

Andrew Li is an Assistant Professor of Operations Research at the Tepper School of Business. His work develops and applies cutting-edge analytical techniques within artificial intelligence algorithms to solve fundamental problems in practice.

  • Online Recommender Algorithms: Optimizing recommender systems within an AI framework is complicated by the fact that, unlike in standard models, there is a practical limit on the number of times an option can be utilized (i.e., the number of times a match or product can be recommended). Professor Li and his co-authors devise a novel mathematical characterization of this problem, as well as an efficient heuristic solution. Experiments show that their solution outperforms other algorithms for the recommender problem.
  • Markdown Pricing: This is another practical problem (pricing) with a unique constraint (price increases upset customers), which reduces the performance of standard AI techniques used in general pricing algorithms. Professor Li and his co-authors construct novel algorithms for this “markdown” setting, mathematically showing near-optimality. Their results also establish that a loss in efficiency is unavoidable when the search for optimal prices is constrained to be decreasing.
  • Inexpensive, Accurate Biopsies: Professor Li and his co-authors model the development of “liquid biopsies” – performing genetic testing on a small blood sample to provide early detection of cancer. While this is biologically possible, it is cost-prohibitive due to the vast number of genetic locations that need to be tested. Professor Li and his co-authors tackle the problem of dynamically determining an efficient sequence of locations to test, which can provide high accuracy while constraining the total cost. Such an algorithm could provide the foundation of a practical and effective liquid biopsy methodology.
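To make the markdown constraint concrete, consider a toy sketch (a hypothetical illustration under simplified assumptions, not the algorithms from Professor Li's papers): candidate prices may only be tried in decreasing order, so once the seller marks down, it can never return to a higher price and must decide from noisy revenue feedback when to stop.

```python
import random

def markdown_search(prices, demand_fn, rounds_per_price=200, seed=0):
    """Toy markdown pricing sketch: try candidate prices only in
    decreasing order, estimating revenue at each price from simulated
    noisy demand, and stop once revenue clearly degrades."""
    rng = random.Random(seed)
    best_price, best_revenue = None, -1.0
    for p in sorted(prices, reverse=True):  # monotone non-increasing price path
        # Estimate revenue at price p: each round, a customer buys
        # with probability demand_fn(p).
        sales = sum(rng.random() < demand_fn(p) for _ in range(rounds_per_price))
        revenue = p * sales / rounds_per_price
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
        elif revenue < 0.8 * best_revenue:
            break  # further markdowns look unprofitable; stop searching
    return best_price, best_revenue

# Hypothetical linear demand curve: demand falls as price rises.
price, revenue = markdown_search([8.0, 6.0, 4.0, 2.0],
                                 lambda p: max(0.0, 1.0 - p / 10.0))
```

The one-way structure is exactly what makes the problem hard: an ordinary pricing algorithm could revisit a promising high price after exploring lower ones, but here every markdown is irreversible, which is why some loss in efficiency is unavoidable.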

Professor Andrew Li’s work is helping to realize the potential of AI – creating the algorithms that enable AI to be meaningfully applied to practical problems in pricing, search, healthcare (and many other settings).

Ben Moseley

Ben Moseley is an Associate Professor of Operations Research and Machine Learning (by courtesy) at the Tepper School of Business. His work develops the tools and techniques that help create the “intelligence” in Artificial Intelligence. Specifically, AI relies on algorithms – algorithms that are fast, accurate, and efficient. Professor Moseley develops the theory that underlies the algorithms for AI operations such as:

  • Massively Parallel Optimization: One of the breakthroughs in modern computing was the ability to parallelize algorithms on a massive scale, enabling us to “divide and conquer” heretofore intractable problems. But solving such problems requires tracking and manipulating massive amounts of data across servers, and coordinating operations as the algorithm seeks its final answer. Dr. Moseley's work helps provide structure for these algorithms, proving basic principles that can guide the development of larger, faster algorithms.
  • Smarter Searches: Underpinning all AI is the ability to classify and search over massive quantities of data. Two foundational algorithms for these operations are clustering – which efficiently segments unstructured data into like sets for further analysis – and active search – which identifies all elements of a given class within a given search budget. Across several papers Dr. Moseley develops state-of-the-art algorithms for these tasks, balancing the principles of exploring for better solutions with exploiting the current best solution. 
  • Algorithmic Training: Despite the incredible advances AI has made over the past few years, the fundamental task of extracting information from tabulated data and then conducting structured learning over this data remains inefficient. Dr. Moseley’s work both helps explain why these inefficiencies occur (and why they cannot be completely eliminated) and provides a path to significant improvement (via a “pseudo gradient” search method). 
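The explore/exploit balance described in the search bullet above can be sketched with a toy one-dimensional example (a hypothetical illustration of budgeted active search in general, not Dr. Moseley's actual algorithms): each candidate is scored by its closeness to known positives (exploitation) plus its distance from everything already queried (exploration), and the search stops when the query budget runs out.

```python
import math

def active_search(points, is_positive, budget, explore_weight=0.5):
    """Toy budgeted active search: greedily query the point whose score
    balances exploitation (closeness to already-found positives) against
    exploration (distance from anything already queried)."""
    queried, found = [], []
    for _ in range(budget):
        best, best_score = None, -math.inf
        for x in points:
            if x in queried:
                continue
            # Exploitation: 0 if nothing found yet, else negative distance
            # to the nearest known positive (closer is better).
            exploit = max((-abs(x - f) for f in found), default=0.0)
            # Exploration: distance to the nearest already-queried point
            # (farther from past queries is better).
            explore = min((abs(x - q) for q in queried), default=1.0)
            score = exploit + explore_weight * explore
            if score > best_score:
                best, best_score = x, score
        queried.append(best)
        if is_positive(best):
            found.append(best)
    return found

# Hypothetical task: find the positives clustered in [12, 15]
# among 20 points, using only 10 queries.
hits = active_search(list(range(20)), lambda x: 12 <= x <= 15, budget=10)
```

Early queries are driven by the exploration term and spread across the space; once a positive is found, the exploitation term pulls subsequent queries toward its neighborhood – the same tension between exploring for better solutions and exploiting the current best that the text describes.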

As we ask more of AI – more calculations, faster search, ability to optimize over more data – we will need to provide more to AI as well, specifically smarter, faster algorithms to process, parse, search and understand. Dr. Moseley’s work helps build these next-generation algorithms. 

Bryan Routledge

Bryan Routledge is an Associate Professor of Finance at the Tepper School of Business. His work considers the application of machine learning to problems in finance, specifically using text mining to glean latent insights from company data. Examples of his research include insights on:

  • Sometimes a company’s earnings announcement will be drastically different from Wall Street estimates – an Earnings Announcement Surprise. A key question is how this “surprise” will affect the company’s stock price (more specifically, how the price might drift after the announcement). Professor Routledge and his co-authors construct a new numerical measure that uses machine learning to predict post-earnings-announcement drift. Compared to standard measures, the drift it predicts is often larger in magnitude, potentially because the measure better incorporates details behind the announcement and the company fundamentals that machine learning can be trained to capture.
  • It has long been recognized that there is significant potential for the application of machine learning to aid (or possibly even replace) human estimates of business fundamentals, given the vast amount of financial data produced by firms. Unfortunately, the volume and complexity of this financial data, the need for complex numerical reasoning, and the burgeoning desire for “explainability” make this a very difficult task for a machine learning algorithm. Professor Routledge and his co-authors construct a new, large-scale financial dataset enabling experimentation with common ML models. They show that these models typically fall short of human experts in acquiring and analyzing financial data. This highlights the significant potential for future development of specialized algorithms, a pursuit that could be facilitated by Dr. Routledge and his co-authors’ public dataset.
  • Given that machine learning algorithms are increasingly utilized in financial decisions, a central question is how individuals might use their output in asset allocation decisions. By emphasizing decision-makers' preferences for simplicity, Dr. Routledge provides insights – via preference parameters – into how humans incorporate machine learning outputs into complex financial decision-making. This gets to a fundamental question: not how machine learning makes financial decisions, but how humans use machine learning to make financial decisions.

Modern finance provides one of the most compelling use cases for artificial intelligence and machine learning. Dr. Bryan Routledge, through his research, is helping to explain how this field may develop.

Param Vir Singh

Param Vir Singh is the Carnegie Bosch Professor of Business Technologies and Marketing at the Tepper School of Business. His work brings insights from economics and machine learning to answer questions around consumer behavior and marketing.

  • The ubiquity of information on consumer behavior from reviews and social media could potentially usher in a golden age of consumer prediction. Unfortunately, the sheer volume of data, combined with its heterogeneous, complex, and unstructured nature, makes this an extremely difficult proposition. Professor Singh and his co-authors demonstrate how machine learning and text mining can leverage social media data to produce meaningful and accurate predictions, specifically by focusing on the information content and timing of tweets. They find that the real-time information contained in tweets – often composed during consumption – makes them more valuable for predictions than other text-based media.
  • Turning from tweets to photographs, Professor Singh and another set of co-authors investigate how Airbnb demand changed after the platform introduced verified (Airbnb-provided) images, and which characteristics of the images play the largest roles in affecting demand. Their analysis shows that the use of verified images does indeed increase demand. Potentially of even greater importance, they also provide a possible explanation for this effect: applying deep learning to the photos identifies the image attributes most likely to evoke a positive response from patrons. Crucially, they show that the verified images more commonly possess these positive attributes, implying that the quality of the photographs likely drives the effect.
  • Turning next to algorithmic transparency, Professor Singh (and co-authors) explore the question of how transparent companies should be. Firms are commonly reluctant to expose the inner workings of their AI algorithms, possibly out of fear that, if made public, consumers might be able to game the algorithm and reduce its efficacy. This poses a particular problem as society increasingly calls for greater algorithmic transparency. Providing a possible answer to this conundrum, Professor Singh and his co-authors identify a broad set of conditions under which making an algorithm transparent can benefit a firm, by potentially motivating users to invest in more desirable features.

As data sources expand, marketers can gather ever more information about their consumers. Professor Param Vir Singh’s research answers questions about how this information can be used effectively – and transparently.