GHIAI Blog
The GHIAI Blog highlights diverse perspectives and innovative ideas at the intersection of global humanities and inclusive AI. Through contributions from our members, the blog serves as a dynamic space for exploring how cultural, ethical, and humanistic insights can shape the development and application of AI.
2024
Global Humanities and Critical AI Reading Group
Stephen Brockmann
The seminar began by framing the relationship between the humanities and “Artificial Intelligence.” Participants looked at and discussed the new journal Critical AI, and we also hosted the journal’s editor Lauren Goodlad for a lecture at Carnegie Mellon on Thursday, February 1 and a visit with the reading group on Friday, February 2. In our discussions with Lauren Goodlad, and also in exploring the editor’s introduction to the first issue of the journal as well as Katherine Bode’s and Lauren Goodlad’s “Data Worlds: An Introduction,” we identified a number of key themes and challenges that remained with us for the rest of the semester:
- The definition of intelligence. Can Generative AI actually be called “intelligence” in any meaningful way? There was widespread recognition among participants that what we call AI is fundamentally based on massive amounts of data, particularly in the form of written words, sentences, paragraphs, and multi-paragraph texts. What Generative AI performs is essentially a statistical calculation: given a context, which word is most likely to come next (a toy illustration of this idea appears at the end of this post). Above all, then, AI rests on massive data sets analyzed on the basis of statistical probability. Can this really be called “intelligence”? Does it constitute something like “understanding”? Participants were broadly skeptical on both counts.
- Questions about the definition of intelligence led directly into additional questions relating to data sets and language. The data sets “read” by AI are by definition in machine-readable form, i.e. in writing; they are not oral. Moreover, they tend to exist primarily in a relatively limited number of major world languages, with an overwhelming global preponderance of English, followed by other major western and non-western languages such as French, German, Spanish, Turkish, Tamil, Indonesian, Italian, Japanese, Mandarin Chinese, Bengali, Portuguese, Arabic, Punjabi, Telugu, Javanese, Farsi, Korean, Russian, Hindi, and Urdu. However, there are entire languages and language groups that are hardly, if at all, represented in AI: Aboriginal languages of Australia, native languages of Papua New Guinea, indigenous languages of North and South America, and so on. And even within the major world languages that are represented, data sets skew toward writing produced by particular, relatively privileged social groups and away from less privileged ones.
- Issues relating to data sets and language led directly to questions about social justice and fairness. Is AI, as it is currently constituted and developed, primarily a tool to reinforce and perpetuate existing social and cultural hierarchies (the dominance of English, the neglect of indigenous cultures, western and Eurocentric bias, male dominance, etc.)? And if so, might it be possible to use AI to produce different, more equitable and liberating outcomes? How might one pursue such a goal, especially given that AI developers tend, sociologically, to come from a relatively privileged class wherever they live, and are still more likely than not to be male?
- There was widespread agreement in the first weeks of the seminar that a considerable amount of hype surrounds AI, including a large amount of negative hype that, seemingly paradoxically, is often purveyed by some of the same people who hold major positions in the AI industry. Even highly negative hype, such as the notion that AI is going to take over from humanity or control the world in the near future, can be seen as a form of advertising, boosterism, and propaganda. Seminar participants largely agreed that one should be skeptical of both positive and negative hype about AI and its capabilities. AI can certainly be powerful, but it also clearly has limits. Those limits are changing rapidly, and it is unclear at present where exactly AI is heading; our goal should therefore be to steer it in a more positive direction.
- The group also looked at and discussed Kate Crawford’s book Atlas of AI. The primary issue coming out of that discussion was the ecological footprint of the computer and software industries broadly and of AI specifically. The new industries that control computers, software, and AI present themselves as “clean,” “green,” and radically different from the old polluting industries of the nineteenth and early twentieth centuries, such as steel, petrochemicals, and coal. However, the new industries, just like the old ones, rest on radical interventions in the earth’s ecosystems, including harmful and polluting mining for materials such as lithium. These mining operations are often located far from major urban centers in the western world, so it is easy for people living in those centers to ignore the consequences of the newer, seemingly “cleaner” industries, including the consequences for the miners whose job it is to extract rare elements from the earth. Likewise, computers and in particular AI consume massive amounts of electricity, which is still largely generated from fossil fuels. Seminar participants were in general agreement that users of AI should make themselves more aware of the ecological consequences and challenges associated with the tools they use. The computer industry only appears to be “clean”; in reality, it is a major polluter.
- There was also a general recognition among seminar participants that AI relies on large numbers of relatively underpaid workers, living primarily in non-western countries, whose job is to review texts and images and determine whether they are harmful or problematic; AI systems are then trained on those determinations. Seminar participants agreed that the ultimate users of AI, in relatively privileged positions and countries in the West, should be aware of this unseen and unappreciated work being performed elsewhere. To put it more concretely: in order to build systems that can flag harmful content, AI relies on one class of human beings agreeing to expose themselves to harm so that another class of human beings can avoid it.
- In discussing Sam Lavigne’s “Scrapism: A Manifesto,” seminar participants sympathized with Lavigne’s call for more democratic access to large data sets. Many such data sets are currently proprietary, owned by large, wealthy corporations that limit and control access to them in various ways. Lavigne and most of the seminar participants favored more equitable and affordable access.
- Discussion of Abeba Birhane’s “Algorithmic Colonization of Africa” also focused on social justice issues and on the fact that the major computer and AI corporations are based in the West, especially in the U.S. Their products are built primarily by and for relatively privileged members of western societies or residents of G8 countries. Birhane’s approach, and others’, suggests that it is vital for people in this industry to work toward a system in which residents of less privileged regions can take control of their own algorithms. Exactly how this can be done is not clear, however, given the continuing western dominance of the industry.
- Toward the end of the semester the reading group continued to address the question of AI and indigeneity, particularly through Noelani Arista’s article “Maoli Intelligence: Indigenous Data Sovereignty and Futurity.” Arista addresses the disconnect between the world of Native Hawaiians, with their largely oral traditions and customs, and the world of computer and data science, in which relatively few Native Hawaiians are trained. One possible solution is to train more indigenous people, including Native Hawaiians, as data and computer scientists; but it is equally clear that computer and data scientists need to be trained to recognize the prejudices and assumptions built into any system whose algorithms rest primarily on large Western-created data sets and on the statistical probability of (usually English) word sequences.
- In other words, as it is currently constituted, AI tends to produce a replica of western, privileged, relatively uncritical “common sense” that often reproduces ethnic and cultural stereotypes and fails to question received assumptions; indeed, it is engineered to reinforce them. By definition it cannot “think outside the box,” because it is trained to produce what is most common and its data comes from “within the box.” It therefore has an inherent tendency to produce bland, seemingly unobjectionable material. The challenge for critical AI studies is to push AI in a different direction, even at the inevitable risk of discomfort.
- The reading group did not, however, reject AI or raise alarmist fears about it. The final reading, Bing Song’s “How Chinese Philosophy Impacts AI Narratives and Imagined AI Futures,” points out that the three major Chinese philosophical traditions (Taoism, Confucianism, and Buddhism) tend to see human beings not as rigidly separated from the natural world but as part and parcel of it, and that all three recognize a world governed by change and flux. AI is part of that change and flux.
- The challenge, then, is to push toward an AI that does not merely reproduce an unjust, hierarchically organized world but that could help move humans, and the rest of the natural world, beyond it toward a more just and equitable future. To paraphrase Karl Marx: AI has until now merely reproduced, reflected, and reinforced the unjust world around us; the challenge is to use it as a tool for changing that world for the better. This is not impossible, but it will take careful thought, planning, discussion, and action.
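
A note on the “statistical calculation” mentioned in the first theme above: for readers curious what next-word prediction looks like in its most reduced form, here is a deliberately minimal sketch in Python. It is not how production systems work (modern generative AI uses neural networks over subword tokens, trained on vastly larger corpora); the corpus and all names below are invented for illustration. The sketch simply counts which word follows which in a tiny made-up text and then “generates” by greedily emitting the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for each word in a tiny corpus, count which
# word most often follows it, then "generate" text by repeatedly
# emitting the most frequent successor. Real generative AI works over
# subword tokens with neural networks and vastly more data, but the
# underlying objective is the same: predict the next token.

corpus = (
    "the seminar discussed ai and the seminar discussed language "
    "and the group discussed ai and data"
).split()

# successors[w] counts every word observed immediately after w
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Greedily follow the most likely word for a few steps.
word, output = "the", ["the"]
for _ in range(6):
    word = most_likely_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> "the seminar discussed ai and the seminar"
```

Even at this toy scale, the seminar’s point is visible in miniature: the model can only ever recombine what its corpus already contains, which is one concrete version of what it means to say that such systems stay “within the box.”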