New Paper Co-authored by Tepper School Researchers Articulates How Large Language Models Are Changing Collective Intelligence Forever
Within teams, organizations, markets, and online communities, ideas drawn from a larger group can help solve complex problems. Large language models (LLMs) are emerging as powerful tools for unlocking even more of that collective potential. Picture an online forum where thousands of voices contribute to a solution, and an LLM synthesizes these diverse insights into a cohesive, actionable plan.
A new paper highlights how LLMs can reshape collective intelligence, the shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and that appears in consensus decision making. The technology offers enhanced capabilities but also brings potential risks. The paper, co-authored by researchers at the Tepper School of Business at Carnegie Mellon University and several other institutions, examines the impact of LLMs on group decision-making and problem-solving.
Published in Nature Human Behaviour, the research highlights the dual nature of LLMs as both tools for and products of collective intelligence, emphasizing their role in aggregating information and facilitating communication.
"LLMs provide unique opportunities for enhancing group collaboration and decision-making, but their use requires careful consideration to maintain diversity and avoid potential pitfalls."
- Anita Williams Woolley, a co-author and professor of organizational behavior at the Tepper School.
Woolley and her co-authors examined how LLMs process and generate text, and how those capabilities affect collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, allowing groups to collaborate more effectively. The technology helps ideas and information flow smoothly, leading to more inclusive and productive online interactions.
While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.
"Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement," said Jason Burton, an assistant professor at Copenhagen Business School.
Another concern is that LLMs can spread incorrect information if they are not properly managed, because they learn from the vast and varied content available online, which often includes false or misleading material. Without careful oversight and regular updates to ensure accuracy, LLMs can perpetuate and even amplify misinformation, so these tools must be managed responsibly to avoid misleading outcomes in collective decision-making.
The article emphasizes the importance of further exploring the ethical and practical implications of LLMs, especially in policymaking and public discussions. The researchers advocate for the development of guidelines for using LLMs responsibly, supporting group intelligence while maintaining individual diversity and expression.
Additional authors on the study include Ezequiel Lopez-Lopez, Shahar Hechtlinger, Zoe K. Rahwan, Samuel Aeschbach, Julian Berger, Stefan M. Herzog, Ralf H. Kurvers, and Dirk U. Wulff, all of the Max Planck Institute for Human Development; Lucie Flek of the Bonn-Aachen International Center for Information Technology, University of Bonn, and the Lamarr Institute for Machine Learning and Artificial Intelligence; Michiel A. Bakker of Google DeepMind; Joshua A. Becker of the UCL School of Management; Aleks Berditchevskaia of the Centre for Collective Intelligence Design, Nesta; Saffron Huang and Divya Siddarth of the Collective Intelligence Project; Sayash Kapoor and Arvind Narayanan of Princeton University; Taha Yasseri of University College Dublin; Abdullah Almaatouq of MIT Sloan School of Management; Ulrike Hahn of Birkbeck, University of London; Susan Leavy of University College Dublin; Iyad Rahwan of the Center for Humans and Machines, Max Planck Institute for Human Development; and Alice Siu of Stanford University.