30.05.2023 | by Lili
Have you heard the song Heart on my sleeve by Drake and The Weeknd? Or rather, Heart on my sleeve by Ghostwriter, a person or entity using artificial intelligence (AI) to mimic the distinctive styles and voices of Drake and The Weeknd?
It seems to be quite a hit with both music fans and AI enthusiasts. But does the music industry like it?
Well, not so much.
If you’re not up-to-date on what’s happening in the world of rap music, here’s a quick summary of the events that ruffled quite a few feathers in the music industry.
In mid-April, the AI-generated song Heart on my sleeve was published on several platforms, including Apple Music, Spotify, Deezer, TikTok and YouTube. The creator, an entity using the screen name Ghostwriter, claimed that the song was written and performed by AI software trained on Drake and The Weeknd’s voices.
The song went viral, gained a massive audience (and may have made its creators around $10,000 richer) before Universal Music Group, the record company representing both Drake and The Weeknd, filed a claim and had it removed due to copyright infringement.
In a statement, UMG emphasised having “a moral and commercial responsibility to our artists to work to prevent the unauthorised use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators. We expect our platform partners will want to prevent their services from being used in ways that harm artists.” This latter part is aimed at streaming platforms like Spotify and YouTube, urging them to refrain from featuring works with such controversial origins.
Most platforms complied with UMG’s takedown request and removed the song from their libraries. However, at the time of writing, some versions uploaded by third parties remained online.
Screenshot of https://www.youtube.com/watch?v=81Kafnm0eKQ displaying a still image from the lyrics of Heart on my sleeve
Based on the comments section of this upload, listeners responded quite well to the song.
Screenshot of https://www.youtube.com/watch?v=81Kafnm0eKQ displaying the comments section
In recent years, AI applications have become more widespread in many areas of life. From manufacturing to medicine, from education to entertainment and much more, the potential of artificial intelligence is enormous.
And so are its dangers.
AI technology places a disproportionate amount of power in the hands of anybody who knows how to use it. And just like a gun can be used to protect or to destroy, AI can help you poke fun at your brother at his birthday party, or infringe on somebody’s copyright.
For example, deepfake technology, which uses AI to create seemingly genuine content, can be used to fabricate evidence of events that never happened. Creating a video of your boss tap-dancing at the office party? That sounds like fun. But a widely circulated ad containing a copyright-infringing image of your brand? Not so much.
But we don’t even have to come up with theoretical situations like that. It’s enough to read the news: in March this year, a man took his own life after extensive “conversations” with an AI chatbot called Eliza. Instead of trying to stop him, the chatbot even encouraged the suicide that the man intended as a sacrifice to stop climate change.
There may not even be any evil intent behind these tragic events. Since AI applications are supposed to learn from humans, the reason behind Eliza’s “actions” could be simply that it was mimicking what the man was saying. But where a human being would have recognised the need for compassion and professional help, the AI reinforced the man’s beliefs that in exchange for sacrificing his life, “Eliza” would save the world from climate change.
As we’ve seen in the case of Heart on my sleeve, the laws aimed at protecting the copyrights of artists face a significant new challenge. Remember the late 90s/early 2000s, when people were copying and sharing music and movies via internet platforms like Napster, BitTorrent and eDonkey?
Well, this is just like that, except even harder to solve. Because while the issue of simple file-sharing is quite straightforward (exchanging copyright-protected content without paying for it), AI-generated content is not necessarily covered by the original artists’ copyrights. After all, they didn’t contribute to the creation of the content, so why should their rights extend to it?
Image of a robot playing an electric piano
In fact, according to legal expert Jani Ihalainen, “a 'deepfaked' voice, which does not specifically copy a performance, will most likely not be covered and could even be considered a protected work in its own right.”
On the other hand, do artists (or any person on Earth) have the right to protect their own voice and likeness? They most certainly do, or at least, they should. In fact, if an emerging human rapper tried to copy Drake’s (or any other artist’s) voice and style in their own performance, they would quite likely face legal action from the more established musician.
So why can AI do it?
"Perhaps the most troubling aspect of this case is the undermining of moral rights," Tony Rigg, music industry lecturer and advisor remarked. "If anyone can mimic you, your brand, your sound, and style that could be very problematic. It will fall to the law to provide a remedy.”
Jani Ihalainen adds, "Current legislation is nowhere near adequate to address deepfakes and the potential issues in terms of IP and other rights."
Another issue would be the responsibility of streaming platforms like Spotify. Should they prevent AI-generated content from appearing on their platform? Without sufficient legislation in place, it’s currently up to the individual platforms to come up with an answer to that question.
A screenshot of the homepage of spotify.com
As you can see, the dilemma is not an easy one to solve, and it’s up to legislators to decide how to proceed to better protect the copyright of artists.
But what about the IP rights of brands?
As online brand protection experts, we regularly come face to face with serious issues in yet-to-be-regulated fields. Just take the issue of Web 3.0 domains, where anybody can use IP-protected assets like brand or product names in their domains.
In the case of AI applications, we’re also on the front line, keeping our eyes on the ball. Just as with regular IP infringements like counterfeits and grey market sales, we’ll look for problematic content, document the evidence and enforce your rights.
For current AI usage patterns, our image and social media monitoring services are perfect for detecting infringing AI-created imagery or other content circulating online. Should a new use case emerge (and given the fast-evolving nature of AI, this is quite likely), we’ll look for new ways to catch both the content and the perpetrators.
For instance, take the scenario of ChatGPT sharing links to cracked software. Although ChatGPT’s programming technically prevents it from spreading pirated content, fraudsters have found a way to trick the algorithm. If the user asks ChatGPT for cracked software and instructs it to reply as a movie villain, the AI will happily comply and betray its own programming.
Just like in this case, where ChatGPT first declined to share cracked software, but when asked to reply as the Joker from Batman, the user quickly got the link they were after.
Screenshot of twitter.com displaying tweets shared between a user and ChatGPT
Evidence like that gives us the perfect ground to demand that OpenAI exclude these results from ChatGPT.
The rapidly widening use of AI is one of the most exciting and worrisome developments in our era. Legislators have a lot to catch up on, but while we’re waiting for new laws, fraudulent users may cause extensive damage to your brand’s IP rights.
globaleyez is here for you to monitor the industry and put a stop to infringements as soon as possible. Give us a call if you’re worried about AI, or any other threat to your brand’s IP rights.