CMU doctoral student Jose Oros, who designed a study to test creativity in AI-generated music, leads Evelyn Davenport, who is working toward an Advanced Music Studies certificate at CMU, in a research session.

As AI-Generated Music Advances, Humans Still Lead in Creativity, CMU Research Finds

Media Inquiries: Cassia Crogan, University Communications & Marketing

AI can write songs, but still has a way to go before matching the creativity of tunes made by people, according to Carnegie Mellon University research. 

An interdisciplinary team examined what's missing when algorithms replace human experience, finding that AI-assisted music was slower, used fewer notes and was judged by listeners as less creative.

Generative AI tools use large language models (LLMs), systems that learn from text and can create things — like stories, answers, or even music — based on user instructions.

“We’re trying to understand how these tools shape music and if they can support creative ideation when composing songs,” said Jose Oros, who is working toward a doctorate in information systems from the Heinz College of Information Systems and Public Policy.

Oros worked with Rahul Telang, Trustees Professor of Information Systems at Heinz College, and Richard Randall, associate professor of music theory in the School of Music in the College of Fine Arts.

Oros designed a study in which 140 musically trained participants each created a 15-second melody with a small piano keyboard. Randomly selected participants were given access to Udio, a generative AI platform they could use to generate tunes from text prompts for inspiration, while the others wrote their melodies without AI assistance.

All of the melodies were then judged by another group on creativity, enjoyment and musicality.

“A lot of studies on the effect of AI focus on productivity, but creativity and novelty are central outcomes that we care about in music and the arts in general,” Oros said. “These tools are being developed with the promise of improving creativity or having a social benefit, so if these tools are not helping, then that has important implications. If these tools are helping, then we may want to enable their development.”


How is AI already being incorporated into music?

Music generated by AI has already been making headlines. An R&B avatar, Xania Monet, earned enough airplay to debut on a Billboard radio chart. AI-generated indie rock band Velvet Sundown released two albums and earned more than 1 million streams on Spotify. AI-generated country acts Cain Walker and Breaking Rust have topped digital sales charts.

However, singer-songwriter Teddy Swims recently discussed his use of AI tools, touting how they can save artists time and how he considered reworking an AI-generated song to sound like him. Similarly, Harvey Mason Jr., CEO of the Recording Academy, which hosts the Grammy Awards, recently said music created with AI would be eligible for nominations when considered in the appropriate categories.

At Carnegie Mellon, researchers have developed a new kind of text-to-music artificial intelligence interface, helped expand music generation with Adobe and examined the role of AI in copyright policy.


Donahue

Chris Donahue, assistant professor in the Computer Science Department in the School of Computer Science at CMU, discussed his work on a recent episode of the SCS podcast “Does Compute.”

Donahue, with a background in computer science and music, leads the Generative Creativity Lab and has developed tools such as PianoGenie, used in a collaboration with Google AI and rock band The Flaming Lips.

He oversaw the creation of Amuse, a tool meant to let people collaborate with AI to write songs. The platform allows someone to upload images, text or audio and transform them into musical chord progressions.

Donahue said he hopes his work allows for more innovation in both fields. 

“I do believe at some point AI systems will be able to generate a waveform that evokes the same kinds of captivating interests that human-created music currently does,” he said on the podcast. “Ultimately, it’s still human intentionality driving those systems that is going to continue to be the focus of the foreground of the human music experience.”

What makes music human?


Randall

Randall said that, as with other technological advances such as radio, vinyl records and digital streaming, AI will affect how and when listeners engage with music in the future.

“Humans create music out of their own personal experiences and inspirations, and that resonates with some people, creating a relationship between the music, the artist and the listener,” he said.

Randall leads the Music Experience Lab at Carnegie Mellon, which seeks to understand the essential role music plays in our lives. His research is multidisciplinary and investigates not only how music is produced and performed, but also how it is understood by listeners.

“Music is a verb. It’s not a noun. It’s not a thing sitting on a table. It’s something we do and we can express this ‘doing’ in a lot of different ways, and music takes a lot of different activities to make it happen,” Randall said. “It's all part of the musical ecosystem.”

How could AI become a music industry disruptor?

Tools such as Udio, capable of producing radio-ready tracks from just a few words, make creating music far more accessible. Want an ’80s-style rock anthem about monkeys on the moon? Just type it in.

“Generative AI is so evident as a disruptor in music,” Oros said. “It lowers the bar for people with low musical knowledge to get into creating music.” 

As for where the LLM draws material to create those tunes, Telang said these music-generation platforms raise ethical questions about copyright infringement and compensation, similar to those surrounding how OpenAI’s ChatGPT trained its LLM.


Telang

“How do they create the tunes? They have to train their model on some sort of corpus,” he said. “The accusation — and I’m using the word ‘accusation’ without being specific — by the music, book and movie publishers is that these platforms are training the data on copyrighted content without compensation.”

Oros, who previously interned with both Spotify and Pandora, pointed out that AI companies claim the LLM is only doing what a human does: integrating a lot of knowledge and generating something new from it.

What could happen if AI-generated songs — cheap to produce and easy to distribute — flood the market?

“What happens to the human creators?” Oros said. “In this way, this new technology is having a large impact on both the production and consumption of music.”

Randall said some consumers are trying to counter this by spending more time seeing live music; however, rising ticket prices discourage others.

“Live performance is the ground truth of the music process; largely, everything recorded is a reflection of that,” he said. “We really have to make it clear what’s at stake for people to understand that live music is important.”

Could inspiration from AI create better music?

The researchers agreed this is only the start of examining these topics, work that could eventually inform how policymakers tackle such questions. Carnegie Mellon’s collaborative, interdisciplinary approach makes it well positioned to help answer them.


Evelyn Davenport, working toward an Advanced Music Studies certificate at CMU, uses the piano keyboard to create a tune during a research session.

“There isn’t any knowledge about this yet,” Randall said. “We don’t know how AI is affecting human creativity, especially musical creativity, so this is a watershed paper that is very granular, good, basic research beginning a mode of inquiry I hope will continue.”

Oros presented the work as a poster at the Conference on Digital Experimentation at the Massachusetts Institute of Technology in November and will defend his thesis in May.

Future research could more closely examine the demand for and perceptions of AI-generated music, such as whether it changes people’s enjoyment of the music. For now, though, Telang said using these music-generating platforms for inspiration could be more beneficial than relying on them for overall creation.

“All I’m doing is giving it an idea to explore or experiment with,” he said. “This ideation then allows the human being to create music that is better, or that is beneficial to society.”

Music and technology will continue to evolve, but the inherent ingenuity and innovation necessary to create it will endure, Randall said.

“I don’t think there’s a limit to human creativity,” he said. “The ways humans shape pitches, not just how they combine them, but how they shape the sounds of them, how they organize them in time, the rhythmic pullbacks, delays and pushes, that someone adds — it’s not formulaic.”

On the other hand, there is a limit to AI-generated music.

“It’s always going to be derivative in some way; it’s always going to be playing it safe,” he said. “Humans are not constrained by that.”
