Sir Lucian Grainge Just Drew the Brightest Line Yet on AI

by Chris Castle

Universal Music Group’s CEO Sir Lucian Grainge has put the industry on notice in an internal memo to Universal employees: UMG will not license any AI model that uses an artist’s voice—or generates new songs incorporating an artist’s existing songs—without that artist’s consent. This isn’t just a slogan; it’s a licensing policy, an advocacy position, and a deal-making leverage all rolled into one. After the Sora 2 disaster, I have to believe that OpenAI is at the top of the list.

Here’s the memo:

Dear Colleagues,

I am writing today to update you on the progress that we are making on our efforts to take advantage of the developing commercial opportunities presented by Gen AI technology for the benefit of all our artists and songwriters.

I want to address three specific topics:

1. Responsible Gen AI company and product agreements;
2. How our artists can participate; and
3. What we are doing to encourage responsible AI public policies.

UMG is playing a pioneering role in fostering AI’s enormous potential. While our progress is significant, the speed at which this technology is developing makes it important that you are all continually updated on our efforts and well-versed on the strategy and approach.

The foundation of what we’re doing is the belief that together, we can foster a healthy commercial AI ecosystem in which artists, songwriters, music companies and technology companies can all flourish together.

NEW AGREEMENTS

To explore the varied opportunities and determine the best approaches, we have been working with AI developers to put their ideas to the test. In fact, we were the first company to enter into AI-related agreements with companies ranging from major platforms such as YouTube, TikTok and Meta to emerging entrepreneurs such as BandLab, Soundlabs, and more. Both creatively and commercially our portfolio of AI partnerships continues to expand.

Very recently, Universal Music Japan announced an agreement with KDDI, a leading Japanese telecommunications company, to develop new music experiences for fans and artists using Gen AI. And we are very actively engaged with nearly a dozen different companies on significant new products and service plans that hold promise for a dramatic expansion of the AI music landscape. Further, we’re seeing other related advancements. While just scratching the surface of AI’s enormous potential, Spotify’s recent integration with ChatGPT offers a pathway to move fluidly from query and discovery to enjoyment of music—and all within a monetized ecosystem.

HOW OUR ARTISTS CAN PARTICIPATE

Based on what we’ve done with our AI partners to date, and the new discussions that are underway, we can unequivocally say that AI has the potential to deliver creative tools that will enable us to connect our artists with their fans in new ways—and with advanced capability on a scale we’ve never encountered.

Further, I believe that Agentic AI, which dynamically employs complex reasoning and adaptation, has the potential to revolutionize how fans interact with and discover music.

I know that we will successfully navigate as well as seize these opportunities and that these new products could constitute a significant source of new future revenue for artists and songwriters.

We will be actively engaged in discussing all of these developments with the entire creative community.

While some of the biggest opportunities will require further exploration, we are excited by the compelling AI models we’re seeing emerge.

We will only consider advancing AI products based on models that are trained responsibly. That is why we have entered into agreements with AI developers such as ProRata and KLAY, among others, and are in discussions with numerous additional like-minded companies whose products provide accurate attribution and tools which empower and compensate artists—products that both protect music and enhance its monetization.

And to be clear—and this is very important—we will NOT license any model that uses an artist’s voice or generates new songs which incorporate an artist’s existing songs without their consent.

New AI products will be joined by many other similar ones that will soon be coming to market, and we have established teams throughout UMG that will be working with artists and their representatives to bring these opportunities directly to them.

RESPONSIBLE PUBLIC POLICIES COVERING AI

We remain acutely aware of the fact that large and powerful AI companies are pressuring governments around the world to legitimize the training of AI technology on copyrighted material without owner consent or compensation, among other proposals.

To be clear: all these misguided proposals amount to nothing more than the unauthorized (and, we believe, illegal) exploitation of the rights and property of creative artists.

In addition, we are acting in the marketplace to see our partners embrace responsible and ethical AI policies and we’re proud of the progress being made there. For example, having accurately predicted the rapid rise of AI “slop” on streaming platforms, in 2023 we introduced Artist-Centric principles to combat what is essentially platform pollution. Since then, many of our platform partners have made significant progress in putting in place measures to address the diversion of royalties, infringement and fraud—all to the benefit of the entire music ecosystem.

We commend our partners for taking action to address this urgent issue, consistent with our Artist-Centric approach. Further, we recently announced an agreement with SoundPatrol, a new company led by Stanford scientists that employs patented technology to protect artists’ work from unauthorized use in AI music generators.

We are confident that by displaying our willingness as a community to embrace those commercial AI models which value and enhance human artistry, we are demonstrating that market-based solutions promoting innovation are the answer.

LEADING THE WAY FORWARD

So, as we work to assure safeguards for artists, we will help lead the way forward, which is why we are exploring and finding innovative ways to use this revolutionary technology to create new commercial opportunities for artists and songwriters while simultaneously aiding and protecting human creativity.

I’m very excited about the products we’re seeing and what the future holds. I will update you all further on our progress.

Lucian

Mr. Grainge’s position reframes the conversation from “Can we scrape?” to “How do we get consent and compensate?” That shift matters because AI that clones voices or reconstitutes catalog works is not a neutral utility—it’s a market participant competing with human creators and the rights they rely on.

If everything is “transformative” then nothing is protected—and that guts not just copyright, but artists’ name–image–likeness (NIL) rights, the right of publicity and, in some jurisdictions, moral rights. A scrape-first, justify-later posture erases ownership, antagonizes creators living and dead, and makes catalogs unpriceable. Why would Universal—or any other rightsholder—partner with a company that treats works and identity as free training fuel? What’s great about Lucian’s statement is that he’s planting a flag in the ground: the industry leader will not do business with bad actors, regardless of the consequences.

What This Means in Practice

  1. Consent as the gate. Voice clones and “new songs” derived from existing songs require affirmative artist approval—full stop.
  2. Provenance as the standard. AI firms that want first-party deals must prove lawful ingestion, audited datasets, and enforceable guardrails against impersonation.
  3. Aligned incentives. Where consent exists, there’s room for discovery tools, creator utilities, and new revenue streams; where it doesn’t, there’s no deal.

Watermarks and “AI-generated” labels don’t cure false endorsement, right-of-publicity violations, or market substitution. Platforms that design, market, or profit from celebrity emulation without consent aren’t innovating—they’re externalizing legal and ethical risk onto artists.

Moral Rights: Why This Resonates Globally

Universal’s consent-first stance will resonate in moral-rights jurisdictions where authors and performers hold inalienable rights of attribution and integrity (e.g., France’s droit moral, Germany’s Urheberpersönlichkeitsrecht). AI voice clones and “sound-alike” outputs can misattribute authorship, distort a creator’s artistic identity, or subject their work to derogatory treatment—classic moral-rights harms. Because many countries recognize post-mortem moral rights and performers’ neighboring rights, the “no consent, no license” rule is not just good governance—it’s internationally compatible rights stewardship.

Industry Leadership vs. the “Opt-Out” Mirage

It is absolutely critical that the industry leader actively opposes the absurd “opt-out” gambit and other sleights of hand Big Technocrats are pushing to drive a Mack truck through so-called text-and-data-mining loopholes. Their playbook is simple: legitimize mass training on copyrighted works first, then dare creators to find buried settings or after-the-fact exclusions. That flips property rights on their head and is essentially a retroactive safe harbor.

As Mr. Grainge notes, large AI companies are pressuring governments to bless training on copyrighted material without owner consent or compensation. Those proposals amount to the unauthorized—and unlawful—exploitation of artists’ rights and property. By refusing to play along, Universal isn’t just protecting its catalog; it’s defending the baseline principle that creative labor isn’t scrapable.

Consent or Nothing

Let’s be honest: if AI labs were serious about licensing, we wouldn’t have come one narrow miss away from a U.S. state-law AI moratorium triggered by their own overreach. That wasn’t just a safe harbor for copyright infringement; it was a safe harbor for everything from privacy, to consumer protection, to child exploitation. That’s why it died 99-1 in the Senate, but it was a close-run thing.

And realize, that’s exactly what they want when they are left to their own devices, so to speak. The “opt-out” mirage, the scraping euphemisms, and the rush to codify TDM loopholes all point the same direction—avoid consent and avoid compensation. Universal’s position is the necessary counterweight: consent-first, provenance-audited, revenue-sharing with artists and songwriters (and I would add nonfeatured artists and vocalists) or no deal. Anything less invites regulatory whiplash, a race-to-the-bottom for human creativity, and a permanent breach of trust with artists and their estates.

Reading between the lines, Mr. Grainge has identified AI as both a compelling opportunity and an existential crisis. Let’s see if the others come with him and stare down the bad guys.

And YouTube is monetizing Sora videos.

[This post first appeared on Artist Rights Watch]

Hey Budweiser, You Give Beer a Bad Name

In a world where zero royalties becomes a brag, and one second of music is one second too far.

Let me set the stage: Cannes Lions is the annual eurotrash…to coin a phrase…circular self-congratulatory hype fest at which the biggest brands and ad agencies in the world if not the Solar System spend unreal amounts of money telling each other how wonderful they are. Kind of like HITS Magazine goes to Cannes but with a real budget. And of course the world’s biggest ad platform–guess who–has a major presence there among the bling and yachts of the elites tied up in Yachtville by the Sea. And of course they give each other prizes, and long-time readers know how much we love a good prize, Nyan Cat wise.

Enter the King of Swill, the mind-numbingly stupid Budweiser marketing department. Or as they say in Cannes, Le roi de la bibine.

Credit where it’s due: British Bud-hater and our friend Chris Cooke at CMU flagged this jaw-dropper from Cannes Lions, where Budweiser took home the Grand Prix for its “One‑Second Ad” campaign—a series of ultra-short TikTok clips featuring one-second hooks from iconic songs. The gimmick? Tease the audience just long enough to trigger nostalgia, then let the internet do the rest. The beer is offensive enough to any right-thinking Englishman, but the theft? Ooh la la.

Cannes Clown

Budweiser’s award-winning brag? “Zero ads were skipped. $0 spent on music right$.” Yes, that’s correct–“right$”.

That quote should hang in a museum of creative disinformation.

There’s an old copyright myth known as the “7‑second rule”—the idea that using a short snippet of a song (usually under 7 seconds) doesn’t require a license. It’s pure urban legend. No court has ever upheld such a rule, but it sticks around because music users desperately want it to be true. Budweiser didn’t just flirt with the myth—it took the myth on a date to Short Attention Span Theater, built an ad campaign around it, and walked away with the biggest prize in advertising to the cheers of Googlers everywhere.

When Theft From Artists Becomes a Business Model–Again

But maybe this kind of stunt shouldn’t come as a surprise. When the richest corporations in commercial history are openly scraping, mimicking, and monetizing millions of copyrighted works to train AI models—without permission and without payment—and so far getting away with it, it sends a signal. A signal that says: “This isn’t theft, it’s innovation.” Yeah, that’s the ticket. Give them a prize.

So of course Budweiser’s corporate brethren start thinking: “Me too.”

As Austin songwriter Guy Forsyth wrote in “Long Long Time”: “Americans are freedom-loving people, and nothing says freedom like getting away with it.” That lyric, in this context, resonates like a manifesto for scumbags.

The Immorality of Virality

For the artists, musicians and vocalists who created the value that Budweiser is extracting, the campaign’s success is a masterclass in bad precedent. It’s one thing to misunderstand copyright; it’s another to market that misunderstanding as a feature. When global brands publicly celebrate not paying for music (in Cannes, of all places) while that very music supplies their ad’s emotional resonance, they send a corrosive signal to the entire creative economy. And, frankly, to fans.

Oops!… I Did It Again, bragged Budweiser, proudly skipping royalties like it’s Free Fallin’, hoping no one notices they’re just Smooth Criminals playing Cheap Thrills with other people’s work. It’s not Without Me—it’s without paying anyone—because apparently Money for Nothing is still the vibe, and The Sound of Silence is what they expect from artists they’ve ghosted.

Because make no mistake: even one second of a recording can be legally actionable, particularly when the intentional infringing conspiracy gets a freaking award for doing it. That’s not just law—it’s basic respect, which is kind of the same thing. Which makes Budweiser’s campaign less of a legal grey area and more of a cultural red flag with a bunch of zeros. Meaning the ultimate jury award from a real jury, not a Cannes jury.

This is the immorality of virality: weaponizing cultural shorthand to score branding points, while erasing the very artists who make those moments recognizable. When the applause dies down in Yachtville, what’s left is a case study in how to win by stealing — not creating.

French court orders search firms to block pirate sites | BBC

A court in France has ordered Google, Microsoft and Yahoo to block 16 video-streaming sites from their search results.

The court said the sites broke French intellectual property laws and were “almost entirely dedicated” to streaming content without the owners’ permission.

Google, Microsoft and Yahoo must now take measures to ensure the blocked pages cannot be found in a list of search results.

ISPs, including Orange and Bouygues Telecom, will also have to prevent users from being able to access the sites.

READ THE FULL STORY AT THE BBC:
http://www.bbc.co.uk/news/technology-25185819