Japan Today

AI can now generate entire songs on demand. What does this mean for music as we know it?

By Oliver Bown

In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor – Udio – arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

After playing with Suno and Udio, I’ve been thinking about what it is exactly they change – and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.

Expressing emotion without feeling it

Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.

This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphising).

The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.

To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.

Music as an everyday language

Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans – and there is much debate about those humans’ intellectual property rights.

Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with and actually listen to for their own enjoyment.

AI capable of “end to end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers – or whether the distinction is even useful.

A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language – think what smartphones have done to photography.

So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference – a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, dad.

Can you create without control?

Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.

Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search – a kind of wandering through the space of possibilities – but not for accurate control. (That’s not to diminish their value. Blind search can be a powerful creative force.)

Viewed from my perspective as a practising music producer, things look very different. Although Udio's about page says "anyone with a tune, some lyrics, or a funny idea can now express themselves in music", I don't feel I have enough control to express myself with these tools.

I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.

Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.

But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.

And many jurisdictions have declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.

For a start, the output depends just as much on everything that went into the AI – including the creative work of millions of other artists. Arguably, you didn’t do the work of creation. You simply requested it.

New musical experiences in the no-man’s land between production and consumption

So Udio's declaration that anyone can express themselves in music is an interesting provocation. The people who use tools like Suno and Udio may be considered more consumers of music AI experiences than creators of music AI works. Or, as with many technological shifts, we may need to come up with new concepts for what they're doing.

A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.

While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.

But even if that effort succeeds, I believe it won't fundamentally address this potentially explosive shift in culture. Claims that such music is inferior have historically done little to halt cultural change, as with techno or, long before that, jazz. Government AI policies may need to look beyond these issues to understand how music works socially, and to ensure that our musical cultures are vibrant, sustainable, enriching and meaningful for both individuals and communities.

Oliver Bown is an associate professor at the School of Art & Design, University of New South Wales, where he is also co-director of the Interactive Media Lab and co-director of Research and Engagement.

The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

© The Conversation

©2024 GPlusMedia Inc.

3 Comments

Fascinating stuff. I have assumed that the authenticity of the creator of the music is important in our appreciation of it. Perhaps, if we can imagine an AI-generated piece as a distillation or concentration of the art of a number of authentic creators, it might still be appreciated. On the other hand, there is a strong likelihood that we will diverge more and more from any authenticity as the machine homes in on the same commercial imperatives as most music producers, particularly as musical literacy diminishes. At the end of the day, it is about feelings and though this author gets "little sparks of pattern-processing joy", it is not clear that many people do. For many, it is just a background sound rather than an exploration of feelings.


There's an AI lady who's sure

All that glitters is silicon...


I'm not a musician and I can't even play an instrument. Suno has given me the opportunity to "create" music for myself and others. It's incredible what it can already do when one puts the time in and doesn't just click once and take the first result. It can sing in many languages, including the constructed language "Sindarin". I've spent dozens of hours on it, and I think that, in conjunction with other AI tools, it will eventually change the creation and consumption of music quite a bit. I've generated an instrumental track purely for myself that I listen to regularly and occasionally have regenerated from the middle onwards – something the current music industry doesn't let me do. And the emotions that music can evoke! I have some of "my" best creations on my youtube channel. For example, this four-language song: https://www.youtube.com/watch?v=l61YEctB04I (I also have the one in Elvish/Sindarin there)

