YouTube reveals AI music experiments that allow people to make music in other people’s voices and by humming

Tools aim to embrace artificial intelligence without upsetting labels or stealing from musicians

Andrew Griffin
Thursday 16 November 2023 15:45 EST
T-Pain is one of the singers involved in the ‘Dream Track’ project (Getty Images for iHeartMedia)

YouTube has revealed a host of new, musical artificial intelligence experiments.

The features let people create music simply by writing a short piece of text, instantly and automatically generating songs in the style of a number of artists. Users can also hum a simple tune into their computer and have it turned into a detailed and rich piece of music.

The new experiments are YouTube’s latest attempt to deal with the possibilities and dangers of AI and music. Numerous companies and artists have voiced fears that artificial intelligence could make it easier to infringe on copyright or produce real-sounding fake songs.

One of the new features is called “Dream Track” and is already available to some creators, with the aim of using it to soundtrack YouTube Shorts. It is intended to quickly produce songs in the style of particular artists.

Users can choose to generate a song in the style of one of a number of officially licensed artists: Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain, and Troye Sivan. They can then ask for a particular song, deciding on its tone or themes, and the result can be used in their Shorts post.
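As a purely illustrative aside, the choices described above amount to a licensed artist plus a free-text description of tone and theme. The short Python sketch below models that bundle of inputs; Dream Track is operated inside the Shorts app, YouTube has not published an API, every name and field here is invented for illustration, and the 30-second duration is an assumption rather than a stated figure.

```python
# Hypothetical sketch only: models the inputs a Dream Track request bundles.
# YouTube has not published an API for Dream Track; all names here are invented.
from dataclasses import dataclass

# The officially licensed voices named in the article.
LICENSED_ARTISTS = {
    "Alec Benjamin", "Charlie Puth", "Charli XCX", "Demi Lovato", "John Legend",
    "Papoose", "Sia", "T-Pain", "Troye Sivan",
}

@dataclass
class DreamTrackRequest:
    artist: str                   # must be one of the officially licensed artists
    prompt: str                   # free-text description of the song's tone or themes
    duration_seconds: int = 30    # Shorts-length clip (assumed, not stated by YouTube)

    def validate(self) -> None:
        if self.artist not in LICENSED_ARTISTS:
            raise ValueError(f"{self.artist!r} is not an officially licensed artist")

# Example: ask for an upbeat track in T-Pain's style for a Short.
request = DreamTrackRequest(artist="T-Pain", prompt="an upbeat song about a sunny commute")
request.validate()
```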

Another is called Music AI Tools, and is aimed at helping musicians with their creative process. It came out of YouTube’s Music AI Incubator, a working group of artists, songwriters and producers who are experimenting with the use of artificial intelligence in music.

“It was clear early on that this initial group of participants were intensely curious about AI tools that could push the limits of what they thought possible. They also sought out tools that could bolster their creative process,” YouTube said in an announcement.

“As a result, those early sessions led us to iterate on a set of music AI tools that experiment with those concepts. Imagine being able to more seamlessly turn one’s thoughts and ideas into music; like creating a new guitar riff just by humming it or taking a pop track you are working on and giving it a reggaeton feel.

“We’re developing prospective tools that could bring these possibilities to life and Music AI Incubator participants will be able to test them out later this year.”

The company gave an example of one of those tools, where a producer was able to hum a tune and then have it turned into a track that sounded as if it had been professionally recorded.

The tools are built on Google DeepMind’s Lyria system. The company said it was built specifically for music, overcoming problems such as AI’s difficulty in producing long sequences of sound that maintain their continuity and do not break apart.

At the same time, DeepMind said it had been working on a technology called SynthID that will be combined with Lyria. It places an audio watermark into the sound, which humans cannot hear but which detection tools can recognise, so that songs can be identified as automatically generated.

“This novel method is unlike anything that exists today, especially in the context of audio,” DeepMind said.

“The watermark is designed to maintain detectability even when the audio content undergoes many common modifications such as noise additions, MP3 compression, or speeding up and slowing down the track. SynthID can also detect the presence of a watermark throughout a track to help determine if parts of a song were generated by Lyria.”
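DeepMind has not published how SynthID’s audio watermark works, and the quote above stresses that it is unlike existing methods. As a hedged illustration of the general idea of an inaudible, machine-detectable watermark, the sketch below uses a classic spread-spectrum approach (a low-amplitude pseudorandom pattern detected by correlation). It is a stand-in, not SynthID, and a toy detector like this would not survive MP3 compression or tempo changes the way the article says SynthID does.

```python
# Illustrative stand-in only: a classic spread-spectrum audio watermark, NOT SynthID.
# It shows what "inaudible but machine-detectable" means; DeepMind's method is unpublished.
import numpy as np

RATE = 16_000   # sample rate in Hz (assumed for this toy example)
KEY = 42        # secret key seeding the watermark pattern

def watermark_pattern(n_samples: int, key: int = KEY) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray, strength: float = 0.002) -> np.ndarray:
    """Add the pattern at very low amplitude so listeners cannot hear it."""
    return audio + strength * watermark_pattern(len(audio))

def detect(audio: np.ndarray, threshold: float = 0.001) -> bool:
    """Correlate the audio with the key's pattern; a high score implies the watermark."""
    score = float(np.dot(audio, watermark_pattern(len(audio)))) / len(audio)
    return score > threshold

# Five seconds of stand-in "music", then a robustness check against added noise,
# one of the common modifications the article says SynthID is designed to survive.
music = np.random.default_rng(0).normal(0.0, 0.1, RATE * 5)
marked = embed(music)
noisy = marked + np.random.default_rng(1).normal(0.0, 0.02, len(marked))

print(detect(music), detect(marked), detect(noisy))   # expected: False True True
```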

The announcement comes just days after YouTube announced restrictions on unauthorised AI clones of musicians. Earlier this week it said that users would have to tag realistic-looking AI-generated content, and that music which “mimics an artist’s unique singing or rapping voice” would be banned entirely.

Those videos have proven popular in recent months, largely thanks to online tools that allow people to easily combine a voice with an existing song and create something entirely new, such as Homer Simpson singing popular hits. Those videos will not be affected straight away, with the new requirements rolling out next year.
