AI and the Arts Incubator Fund (AIxArts)
The AI and the Arts Incubator Fund seeds new collaborative partnerships that strengthen, broaden and diversify innovation and experimentation at the intersection of artificial intelligence and the arts.
AIxArts provides 6 months of funding (up to $25,000) with a view toward seeding new research and creative inquiry, initiating partnerships and developing a proposal for future funding.
The fund is open to proposals where there is at least one CFA faculty member and at least one collaborator from outside CFA. These collaborators may be CMU faculty, CMU research staff or external partners.
Applications for 2025 are closed.
Eligibility
Timeline
Application Process
Proposal Evaluation Criteria
Allocation & Funds
Terms & Reporting
Jury
2025 AIxArts Incubator Fund Recipients
Funding was made possible by the Carnegie Mellon University Provost. Additional funding for "Musica Subtilior" was provided by the Open Source Programs Office (OSPO), Carnegie Mellon University Libraries.
Event Scores and AI: Aleatory Systems of Sound and Image

"String Music" (score), Benjamin Patterson, 1960.
Cash (Melissa) Ragona, Associate Professor, School of Art
Dom Jebbia, AI Infrastructure Resident and Project Manager, Carnegie Mellon University Libraries
Bo Powers, AI Architect, Computing Services
Peggy Ahwesh, collaborating filmmaker and video artist
Nico Zevallos, Artist-in-residence, Robotics Institute
This project explores the intersections of AI and art, drawing from Fluxus artists Paul Sharits, Benjamin Patterson and Mieko Shiomi. Their use of rules, chance and audience participation in the 1960s–70s laid the groundwork for contemporary AI-driven compositional methods. We aim to investigate how AI can function as a tool for excavation and innovation in sound, image and performance by engaging with historical and contemporary algorithmic approaches. Our research integrates archival study with AI experimentation. We will examine early AI models, such as hidden Markov models, alongside advanced vision-language models to test their interpretative limits when applied to Fluxus methodologies. Through this, we seek to explore authorship, structure and meaning in AI-generated compositions.
The project team, spanning AI research, robotics, film and art history, will produce AI-assisted visual and sound scores inspired by Fluxus practices. Future goals include an exhibition featuring these AI-generated scores, interactive audience engagement and live AI-driven performances.
Musica Subtilior — interpreting and sounding graphic musical scores using GenAI

"Musica Subtilior," 2025.
Annie Hui-Hsin Hsieh, Associate Teaching Professor, School of Music
Chris Donahue, Dannenberg Assistant Professor, Computer Science
Irmak Bukey, PhD Student, Computer Science
"Musica Subtilior" is a project that aims to use generative AI to help unpack the complex process of interpreting graphical musical scores. The individuality of each interpretation of any given graphic score highly differs from performer to performer, and this opened doors to inquiries into the creative process itself, in particular, how improvisation and indeterminacy could lead to a wealth of new musical possibilities beyond the traditional, fixed, prescriptive type of Western music notation.
Current AI models in music are primarily trained on Western classical music, making them ill-equipped to interpret non-traditional graphic scores. For "Musica Subtilior," the models will be trained on different performances and interpretations of a given set of graphic scores, paving the way to identify patterns in how attributes of graphic scores — shape, color and spatial formatting — are musically interpreted. This information would provide valuable insight into emotive and other innate responses in human musical creativity, and would serve as a case study examining the biases of models toward Western notation.
Blind Contour

"On the Dances of Palucca," Wassily Kandinsky, 1926.
Johannes DeYoung, Associate Professor of Art, School of Art
Jean Oh, Associate Research Professor, Robotics Institute
Tomo Sone, Choreographer, Dancer, and Artist, AI HOKUSAI ArtTech Research Project; external collaborator
"Blind Contour" advances an exploration of neural choreographic idiolects, the hidden languages of bodily movement and human interaction. Drawing upon multimodal techniques in machine learning, this research explores relationships between choreographic gesture and generative drawing, as well as prospects for generating new choreographies using natural text and speech prompts. This project represents a collaboration between artists and researchers at Carnegie Mellon University and AI HOKUSAI ArtTech Research Project.