Musician vs. Machine: Is AI-Generated Music the Sound of the Future?
Artificial Intelligence (AI) is transforming the way we produce and consume music, but can it replace human artists?
Artificial intelligence is a term that, for some, evokes gleaming chrome visions of a futuristic utopia enhanced by autonomous machines. For others… Well, if you’ve seen any of the Terminator movies, you probably have a pretty good idea.
(And if you haven’t – just picture an army of self-aware robots systematically wiping humanity from the face of the earth.)
Regardless of your stance on the future of self-learning technology, the reality is that AI is already a significant part of our daily lives. From voice assistants to facial recognition, to targeted ads and automated business processes, AI is firmly integrated into just about every conceivable application – and the music industry is no exception.
Technology has always had its place in the development of music and the industry as a whole, but as AI-generated music begins to gain more traction, it raises the question:
If AI technology can autonomously write and produce music, are human musicians at risk of becoming obsolete?
The Role of AI in Music
Before we get lost in a rant about the end of creativity and art as we know it, I should reiterate that the use of AI in the music industry is nothing new.
Today, artificial intelligence (more specifically machine learning, or ML) is the driving force behind nearly all of our interactions with digital music. With AI, music streaming platforms like Spotify, Apple Music, and YouTube are able to analyze a user’s location, playlist data, keyword searches, and song preferences to create an optimized listening experience.
But what may be more surprising is that computer-generated music has been around for decades. In fact, the first AI-generated musical work was created in 1957 by composer Lejaren Hiller and mathematician Leonard Isaacson.
Since then, AI has been used to augment melodic and lyrical compositions in numerous creative projects by everyone from scholars to composers to pop icons. Here are just a few of the highlights:
- In 1995, musician David Bowie and programmer Ty Roberts developed an app called the “Verbasizer.” Using a form of ML, the app analyzed literary source material, then randomly reordered the words to create new combinations. Many of the resulting “lyrics” later appeared on several Bowie albums.
- In 2017, former American Idol contestant Taryn Southern released I AM AI, an album composed entirely with an AI composition program that used algorithms to produce melodies in line with a particular mood and genre.
- In 2019, AI software was used to complete Austrian composer Franz Schubert’s famously unfinished Eighth Symphony. After analyzing 90 of Schubert’s previous compositions and evaluating the key components of the first two movements, the software composed the final two movements, completing the piece nearly 200 years after it was first written.
How Easy is it to Create AI-Generated Music?
Over the last decade, an entire industry has emerged around AI-generated music with apps like Jukedeck, Melodrive, and Amper Music quietly eliminating the learning curve for those with little to no knowledge of music production.
That’s right, kids! With your very own personal pocket AI, you too can create the next chart-topping bop of the year!
(At least in theory.)
While it’s true that the Average Joe can easily produce music using these applications, what’s less certain is how palatable the results will be to discerning human ears. But more on that in a sec – for now, let’s get into how AI-generated music actually works.
Most systems work by using deep learning neural networks, which rely on analyzing large amounts of data. Essentially, the user defines the type of music they want to produce – genre, emotion, style, etc. The software then gets to work analyzing copious amounts of relevant music to identify patterns within those parameters. From there, the app can begin piecing an original work together, note by note. Finally, the AI converts the music into audio, in some cases stitching together thousands of existing audio files to create the final song.
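The pipeline above (learn patterns from example music, then generate note by note) can be illustrated with a drastically simplified sketch. Real systems use deep neural networks trained on huge datasets; the toy version below uses a simple Markov chain over MIDI note numbers instead, and the "training melodies" are made up for the example – but the learn-then-generate shape is the same.

```python
import random

# Toy "training data": a few melodies as MIDI note numbers.
# A real system would learn from thousands of pieces in the chosen genre.
training_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 67, 60],
]

def build_transitions(melodies):
    """The 'analyze for patterns' step: count which note follows which."""
    transitions = {}
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start=60, length=8, seed=0):
    """The 'piece it together note by note' step."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: fall back to the opening note
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

transitions = build_transitions(training_melodies)
print(generate(transitions))  # an 8-note melody in the style of the examples
```

Every note the generator emits is one it saw in the training data, following a transition it saw in the training data – which is also a miniature of why the copyright questions discussed later get murky.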
Each platform has its differences – some produce MIDI files (symbolic note data rather than sound) while others produce audio directly. Some learn solely from data, while others require hard-coded rules based on music theory to generate results. However, they all share one commonality…
At first listen, the music is passable. But the longer you listen, the more it crosses over into uncanny valley territory. Close enough to sound almost human-made, but just different enough to give you a serious case of the heebie-jeebies.
The Trouble with AI-Generated Music – a Logistical (and Ethical) Grey Area
AI tools have had far-reaching effects within the music industry, some worse than others. The problem is that the technology is developing at a blistering pace and copyright law simply hasn’t caught up.
Under current US copyright law, there is nothing stopping AI from copying an artist’s style, inflection, and instincts exactly. The general consensus among legal experts is that unless AI-generated music uses direct samples, is marketed as sounding like a particular artist, or creates derivative works, opposing parties have no legal ground to stand on. (Just ask Jay-Z.)
Aside from copyright complications, AI-generated music brings up another important (and equally murky) concern: ownership.
If an AI creates music, who owns the rights to the track? Who gets the royalties? Unfortunately, there are no clear-cut answers at this point. For now at least, it seems that users will have to define ownership on a case-by-case basis.
AI Isn’t Going Anywhere Any Time Soon – But Neither Are Musicians
The rate at which technology develops can be staggering. And yes, it can even be a bit scary. But, historically speaking, we humans tend to have a flair for the dramatic where tech innovations are concerned.
When the gramophone was first invented, people thought it would be the end of music as we know it. (Spoiler alert: it wasn’t.) When vocoders made an appearance, they were vilified for destroying the integrity of music, and so it goes. This pattern pops up with every new innovation that disrupts the status quo – and it probably always will.
So do I think machines will one day replace human musicians?
Honestly, no. And the reason why is simple – art, by nature, is an expression of humanity. It is the medium through which we relate our experiences, connect with each other, and endeavor to create something bigger than ourselves.
The reason why we love music, why we gravitate to it, is because it is a reflection of who we are in that moment. And perhaps because all we really want is to know that we are not alone – that someone, somewhere feels the same way that we feel.
For now, AI is like a runaway train – we don’t know exactly where it’s going or how things will shake out in the aftermath. But at the moment, AI is simply a beneficial tool that can augment the creative process (just as long as there’s a human there to guide the way).
Suffice it to say that while self-learning technology may mimic the way we express ourselves and the stories we tell, it will never be able to replicate the emotion behind the words, the meaning of the melody, or the soul within the song.
Alright, that’s enough from me. Now I want to hear from you! What’s your view on AI-generated music and its growing popularity? And how do you think the copyright complications should be resolved?
Sound off in the comments!
To get all our stories and learn more about crypto, music, and projectNEWM, make sure you register for our NEWMag newsletter!