2026 Music Predictions: The Legal and Policy Fault Lines Ahead

By Chris Castle

I was grateful to Hypebot for publishing my 2026 music‑industry predictions, which focused on the legal and structural pressures already reshaping the business. For regular readers, I’m reposting those predictions here—and adding a few more that follow directly from the policy work, regulatory engagement, and royalty‑system scrutiny we’ve been immersed in over the past year with the Artist Rights Institute. These additional observations are less about trend‑spotting and more about where the underlying legal and institutional logic appears to be heading next.

1. AI Copyright Litigation Will Move From Abstract Theory to Operational Discovery

In 2026, the center of gravity in AI‑copyright cases will shift toward discovery that exposes how models are trained, weighted, filtered, and monetized. Courts will increasingly treat AI systems as commercial products rather than research experiments run for the good of humanity…ahem…and the cases will turn on discovery fights rather than summary‑judgment rhetoric. The result will be pressure on platforms to settle, license, or restructure before full disclosure occurs, particularly since it’s becoming increasingly likely that every frontier AI lab has ripped off the world’s culture the old‑fashioned way—they stole it off the Internet.

The next round of AI copyright litigation will come from fans: As more deals are done with AI companies like the Disney/Sora deal, fans who use Sora or other AI tools to create separable rights (like new characters or new story lines) or even new universes with old story lines (like maybe new versions of the Luke/Darth/Han/Leia arc set in the Old West) will start to get the idea that their IP is…well…their IP. If it’s used without compensating them or getting their permission, that whole copyright thing is going to start to get real for them.

2. Streaming Platforms Will Face Structural Payola Scrutiny, Not Just Royalty Complaints

Minimum‑payment thresholds, bundled offerings, and “greater‑of” formulas will no longer be treated as isolated business choices. Regulators and courts will begin to examine how these mechanisms function together to shift risk onto artists while preserving platform margins. Antitrust, consumer‑protection, and unfair‑competition theories will increasingly converge around the same conduct. Given Spotify’s market dominance and its intimidation factor for the majors and big‑to‑medium‑sized independent labels, these cases will have to come from independent artists.

3. The Copyright Office Will Approve a Conditional Redesignation of the MLC

Rather than granting an unconditional redesignation of the Mechanical Licensing Collective, the Copyright Office is likely to impose conditions tied to governance, transparency, and financial stewardship. This approach allows continuity for licensees while asserting supervisory authority grounded in the statute. The message will be clear: designation is provisional, not permanent.

Download: Digital-Licensing-Coordinator-to-USCO-2-Sept-22-2025

4. The MLC’s Gundecked Investment Policy Will Be Unwound or Materially Rewritten

The practice of investing unmatched royalties as a pooled asset is becoming legally and politically indefensible. In 2026, expect the investment policy to be unwound or rewritten by new regulations requiring pass‑through of gains or strict capital‑preservation limits. Once framed as a fiduciary issue rather than a finance strategy, the current model cannot survive intact.

It’s also worth noting that the MLC’s investment portfolio has grown so large ($1.212 billion) that the investment income reported on its 2023 tax return now exceeds its operating costs as measured by the administrative assessment paid by licensees.

5. An MLC Independent Royalty‑Accounting and Systems Review Will Become Inevitable

As part of a conditional redesignation, the Copyright Office may require an end‑to‑end operational review of the MLC by a top‑tier royalty‑accounting firm. Unlike a SOC report, such a review would examine whether matching, data logic, and distributions actually produce correct outcomes. Once completed, that analysis would shape litigation, policy reform, and future oversight.

6. Foreign CMOs Will Push Toward Licensee‑Pays Models

Outside the U.S., collective management organizations face rising technology costs and political scrutiny over compensation. In response, many will explore shifting more costs to licensees rather than members, reframing CMOs as infrastructure providers. Ironically, the U.S. MLC experiment may accelerate this trend abroad given the MLC’s rich salaries and vast resources for developing poorly implemented tech.

These developments are not speculative in the abstract. They follow from incentives already in motion, records already being built, and institutions increasingly unable to rely on deference alone.

7. Environmental Harms of AI Become a Core Climate Issue

We will start to see the AI labs normalize the concept of private energy generation on a massive scale to support data centers built in current green spaces. If they build or buy electric plants, they do not intend to share them. This whole idea that they will build small nuclear reactors and sell the excess back to the local grid is crazy—there won’t be any excess, and what about their behavior over the last 25 years makes you think they’ll share a thing?

So sometime after Los Angeles rezones Griffith Park commercial and sells the Greek Theater to Google for a new data center and private nuclear reactor, and Facebook buys the Diablo Canyon reactor, the Music Industry Climate Collective will formally integrate AI’s ecological footprint into its national and international policy agendas. After mounting evidence of data‑center water depletion, aquifer stress, and grid destabilization — particularly in drought‑prone regions — climate coalitions will conceptually reclassify AI infrastructure as a high‑impact industrial activity.

This will become acute after people realize they cannot expect the state or federal government to require new permitting regimes because of the overwhelming political influence of Big Tech in the form of AI Viceroy-for-Life David Sacks. (He’s not going anywhere in a post-Trump era.) This will lead to environmental‑justice litigation over siting decisions and pressure to require reporting of AI‑related energy, water, and land use.

8. Criminal RICO Case Against StubHub and Affiliated Resale Networks

By late 2026, the Department of Justice will bring a landmark criminal RICO indictment targeting StubHub‑linked reseller networks and individual reseller financiers for systemic ticketing fraud and money laundering. The enterprise theory will allege that major resellers, platform intermediaries, lenders, and bot‑operators coordinated to engage in wire fraud, market manipulation, speculative ticketing, and deceptive consumer practices at international scale. Prosecutors will present evidence of an organized structure that used bots, fabricated scarcity, misrepresentation of seat availability, and price‑fixing algorithms to inflate profits.

This will become the first major criminal RICO prosecution in the secondary‑ticketing economy and will trigger parallel state‑level investigations and civil RICO suits. Public resellers like StubHub will face shareholder lawsuits and securities‑fraud allegations.

Just another bright sunshiny day.

[A version of this post first appeared on MusicTechPolicy]




NYT: Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends


The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.

But for all its detail, the Times buried the lede.

The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.

Of all the corrupt anecdotes in the New York Times must-read article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest that are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out. Here are a couple of examples:


There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.


If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.

Without state enforcement, A.I. companies will face no meaningful accountability for:

  • child suicide induced by unregulated synthetic content
  • mass copyright theft embedded into permanent model weights
  • biometric and voiceprint extraction without consent
  • data-center sprawl that overwhelms local water, energy, and zoning systems
  • surveillance architectures exported globally
  • algorithmic harms that cannot be litigated under preempted state laws

These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes impose duties that can be stricter than federal law. The A.I. moratorium’s preemption would vaporize these claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.

The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.

The public will be living inside the system he designed.

There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. Swap the party labels and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.

[Image caption: “Of all the CEOs Google interviewed, Eric Schmidt was the only one that had been to Burning Man, which was a major plus.”]

So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — that the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.

The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.

Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”

The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.

It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Regardless of whether it is ten years or five, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—Remember the AI moratorium that was defeated 99–1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 Presidential hopeful Ted Cruz (fundraising off of the Moratorium, no doubt, for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and the Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times the first time around, the concept is probably unconstitutional for a host of reasons and will no doubt be challenged immediately.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

| Lobby Shop / Organization | Supporters / Funders | Role in Pushing Moratorium | Notes |
| --- | --- | --- | --- |
| INCOMPAS / AI Competition Center (AICC) | Amazon, Google, Meta, Microsoft, telecom/cloud companies | Leads push for 10-year state-law preemption; argues moratorium prevents ‘patchwork’ laws | Identified as central industry driver |
| Consumer Technology Association (CTA) | Big Tech, electronics & platform economy firms | Lobbying for federal preemption; opposed aggressive state AI laws | High influence with Commerce/Appropriations staff |
| American Edge Project | Meta-backed advocacy org | Frames preemption as necessary for U.S. competitiveness vs. China; backed moratorium | Used as indirect political vehicle for Meta |
| Abundance Institute | Tech investors, deregulatory donors | Argues moratorium necessary for innovation; publicly predicts return of moratorium | Messaging aligns with Silicon Valley VCs |
| R Street Institute | Market-oriented donors; tech-aligned funders | Originated ‘learning period’ moratorium concept in 2024 papers by Adam Thierer | Not a lobby shop but provides intellectual framework |
| Corporate Lobbyists (Amazon/Google/Microsoft/Meta/OpenAI/etc.) | Internal lobbying shops + outside firms | Promote ‘uniform national standards’ in Congressional meetings | Operate through and alongside trade groups |

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
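A minimal sketch of that pro-rata dynamic, with entirely hypothetical loads and dollar figures (the function, customer classes, and numbers below are invented for illustration; real cost-of-service studies use far more granular allocation methods):

```python
# Hypothetical illustration of simple pro-rata cost-of-service allocation.
# All numbers are invented; actual ratemaking separates demand, energy,
# and customer costs with detailed allocation studies.

def allocate_upgrade_cost(upgrade_cost: float, loads_mw: dict[str, float]) -> dict[str, float]:
    """Split a shared upgrade cost across customer classes pro rata by load.

    Cost-of-service rules generally cap each class at its allocated share,
    so the utility cannot simply bill the new large load for the whole upgrade.
    """
    total_load = sum(loads_mw.values())
    return {cls: upgrade_cost * mw / total_load for cls, mw in loads_mw.items()}

# Before the data center: a $100M upgrade split between two classes.
before = allocate_upgrade_cost(100e6, {"residential": 400, "commercial": 200})

# After: a 600 MW AI data center triggers a $300M transmission upgrade.
after = allocate_upgrade_cost(300e6, {"residential": 400, "commercial": 200, "ai_datacenter": 600})

print(f"residential before: ${before['residential'] / 1e6:.1f}M")  # ~$66.7M
print(f"residential after:  ${after['residential'] / 1e6:.1f}M")   # $100.0M
```

On these made-up numbers, the residential class’s percentage share of load falls from two-thirds to one-third, yet its dollar burden rises from roughly $67 million to $100 million, because the new load grows the total cost to be recovered far faster than it dilutes anyone’s allocated share.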

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

What We Know—and Don’t Know—About Spotify and NMPA’s “Opt-In” Audiovisual Deal

When Spotify and the National Music Publishers’ Association (NMPA) announced an “opt-in” audiovisual licensing portal this month, the headlines made it sound like a breakthrough for independent songwriters. In reality, what we have is a bare-bones description of a direct-license program whose key financial and legal terms remain hidden from view.

Here’s what we do know. The portal (likely an HFA extravaganza) opened on November 11, 2025 and will accept opt-ins through December 19. Participation is limited to NMPA member publishers, and the license covers U.S. audiovisual uses—that is, music videos and other visual elements Spotify is beginning to integrate into its platform. It smacks of the side deal on pending and unmatched royalties tied to frozen mechanicals that the CRB rejected in Phonorecords IV.

Indeed, one explanation for the gundecked opt-in period is in The Desk:

Spotify is preparing to launch music videos in the United States, expanding a feature that has been in beta in nearly 100 international markets since January, the company quietly confirmed this week.

The new feature, rolling out to Spotify subscribers in the next few weeks, will allow streaming audio fans to watch official music videos directly within the Spotify app, setting the streaming platform in more direct competition with YouTube.

The company calls it a way for indies to share in “higher royalties,” but no rates, formulas, or minimum guarantees have been disclosed, so it’s hard to know: “higher” compared to what? Yes, it’s true that if you even made another 1¢ that would be “higher”—and in streaming-speak, 1¢ is big progress, but remember that it’s still a positive number to the right of the decimal place preceded by a zero.

The deal sits alongside Spotify’s major-publisher audiovisual agreements, which are widely believed to include large advances and broader protections—none of which apply here. There’s also an open question of whether the majors granted public performance rights as an end run around PROs, which I fully expect. There’s no MFN clause, no public schedule, and no audit details. I would be surprised if Spotify agreed to be audited by an independent publisher, and even more surprised if the announced publishers with direct deals did not have an audit right. So there’s one way we can be pretty confident this is not anything like MFN terms, aside from the scrupulous avoidance of mentioning the dirty word: MONEY.

But it would be a good guess that Spotify is interested in this arrangement because it signs up some of the most likely plaintiffs, protecting Spotify when it launches its product with unlicensed songs or user-generated videos and no Content ID clone (which is kind of Schrödinger’s UGC—not expressly included in the deal but not expressly excluded either—and would be competitive with TikTok or Spotify nemesis YouTube).

But here’s what else we don’t know: how much these rights are worth, how royalties will be calculated, whether they include public performances to block PRO licensing of Spotify A/V (which could trigger MFN problems with YouTube or other UGC services), and whether the December 19 date marks the end of onboarding—or the eve of a US product launch. And perhaps most importantly, how is it that NMPA is involved, the same NMPA that has trashed Spotify far and wide over finally taking advantage of the bundling rates negotiated at the CRB (in some version since 2009). Shocked, shocked that there’s bundling going on.

It’s one thing to talk about an audiovisual license covering “official” music videos while expressly stating that the same license will not be used to cover UGC, no way, no how. But given Spotify’s repeated hints that full-length music videos are coming to the U.S. and the test marketing reported by The Desk and disclosed by Spotify itself, the absolute silence of the public statements on royalty rates and UGC, as well as the rush to get publishers to opt in before year-end, all suggest that rollout is imminent. Until Spotify and the NMPA release the actual deal terms, though, we’re all flying blind—sheep being herded toward an agreement cliff we can’t fully see.

[A version of this post first appeared on MusicTechPolicy]

There Is No ‘Right to Train’: How AI Labs Are Trying to Manufacture a Safe Harbor for Theft

Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines declare a new doctrine of spin: the right to train AI on copyrighted works. But let’s be clear — no such right exists and probably never will.  That doesn’t mean they won’t keep trying.

A “right to train” is not found anywhere in the Copyright Act or any other law. It’s also not found in the fair-use cases that the AI lobby leans on. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity. Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” were there all along.

How the “Right to Train” Narrative Took Shape

The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.

That analogy collapses under scrutiny. First of all, humans typically bought the book they read or checked it out from a library.  Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.

But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because they lose on training on stolen goods, as in the Anthropic and Meta cases), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t, no court has ever ruled that it is, and likely none ever will.

Fair Use Is Not a Safe Harbor

Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand new safe harbor: a default assumption that all training is fair use unless proven otherwise. They love a safe harbor in Silicon Valley, and they routinely abuse the ones they have, like Section 230, the DMCA, and Title I of the Music Modernization Act.

That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material (like Anthropic, Meta, and probably all of them to one degree or another) can’t launder infringement through the word “training.”

Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”

Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement like Spotify got in the Music Modernization Act and preempt accountability for the sources they used.  

And not only do they want a safe harbor — they want it for free.  No licenses, no royalties, no dataset audits, no compensation. What do they want?  FREE STUFF.  When do they want it?  NOW!  Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.

The Real Motive Behind the Push

The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.

This is particularly important for books.  Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style. 

Without books, AI labs have no business.  That’s why they steal books.  Very simple, really.

Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially-manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and would never agree to.

But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased.  The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.

The Myth of the Inevitable Machine

AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.

If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.

Sir Lucian Grainge Just Drew the Brightest Line Yet on AI

by Chris Castle

Universal Music Group’s CEO Sir Lucian Grainge has put the industry on notice in an internal memo to Universal employees: UMG will not license any AI model that uses an artist’s voice—or generates new songs incorporating an artist’s existing songs—without that artist’s consent. This isn’t just a slogan; it’s a licensing policy, an advocacy position, and deal-making leverage all rolled into one. After the Sora 2 disaster, I have to believe that OpenAI is at the top of the list.

Here’s the memo:

Dear Colleagues,

I am writing today to update you on the progress that we are making on our efforts to take advantage of the developing commercial opportunities presented by Gen AI technology for the benefit of all our artists and songwriters.

I want to address three specific topics:

  • Responsible Gen AI company and product agreements;
  • How our artists can participate; and
  • What we are doing to encourage responsible AI public policies.

UMG is playing a pioneering role in fostering AI’s enormous potential. While our progress is significant, the speed at which this technology is developing makes it important that you are all continually updated on our efforts and well-versed on the strategy and approach.

The foundation of what we’re doing is the belief that together, we can foster a healthy commercial AI ecosystem in which artists, songwriters, music companies and technology companies can all flourish together.

NEW AGREEMENTS

To explore the varied opportunities and determine the best approaches, we have been working with AI developers to put their ideas to the test. In fact, we were the first company to enter into AI-related agreements with companies ranging from major platforms such as YouTube, TikTok and Meta to emerging entrepreneurs such as BandLab, Soundlabs, and more. Both creatively and commercially our portfolio of AI partnerships continues to expand.

Very recently, Universal Music Japan announced an agreement with KDDI, a leading Japanese telecommunications company, to develop new music experiences for fans and artists using Gen AI. And we are very actively engaged with nearly a dozen different companies on significant new products and service plans that hold promise for a dramatic expansion of the AI music landscape. Further, we’re seeing other related advancements. While just scratching the surface of AI’s enormous potential, Spotify’s recent integration with ChatGPT offers a pathway to move fluidly from query and discovery to enjoyment of music—and all within a monetized ecosystem.

HOW OUR ARTISTS CAN PARTICIPATE

Based on what we’ve done with our AI partners to date, and the new discussions that are underway, we can unequivocally say that AI has the potential to deliver creative tools that will enable us to connect our artists with their fans in new ways—and with advanced capability on a scale we’ve never encountered.

Further, I believe that Agentic AI, which dynamically employs complex reasoning and adaptation, has the potential to revolutionize how fans interact with and discover music.

I know that we will successfully navigate as well as seize these opportunities and that these new products could constitute a significant source of new future revenue for artists and songwriters.

We will be actively engaged in discussing all of these developments with the entire creative community.

While some of the biggest opportunities will require further exploration, we are excited by the compelling AI models we’re seeing emerge.

We will only consider advancing AI products based on models that are trained responsibly. That is why we have entered into agreements with AI developers such as ProRata and KLAY, among others, and are in discussions with numerous additional like-minded companies whose products provide accurate attribution and tools which empower and compensate artists—products that both protect music and enhance its monetization.

And to be clear—and this is very important—we will NOT license any model that uses an artist’s voice or generates new songs which incorporate an artist’s existing songs without their consent.

New AI products will be joined by many other similar ones that will soon be coming to market, and we have established teams throughout UMG that will be working with artists and their representatives to bring these opportunities directly to them.

RESPONSIBLE PUBLIC POLICIES COVERING AI

We remain acutely aware of the fact that large and powerful AI companies are pressuring governments around the world to legitimize the training of AI technology on copyrighted material without owner consent or compensation, among other proposals.

To be clear: all these misguided proposals amount to nothing more than the unauthorized (and, we believe, illegal) exploitation of the rights and property of creative artists.

In addition, we are acting in the marketplace to see our partners embrace responsible and ethical AI policies and we’re proud of the progress being made there. For example, having accurately predicted the rapid rise of AI “slop” on streaming platforms, in 2023 we introduced Artist-Centric principles to combat what is essentially platform pollution. Since then, many of our platform partners have made significant progress in putting in place measures to address the diversion of royalties, infringement and fraud—all to the benefit of the entire music ecosystem.

We commend our partners for taking action to address this urgent issue, consistent with our Artist-Centric approach. Further, we recently announced an agreement with SoundPatrol, a new company led by Stanford scientists that employs patented technology to protect artists’ work from unauthorized use in AI music generators.

We are confident that by displaying our willingness as a community to embrace those commercial AI models which value and enhance human artistry, we are demonstrating that market-based solutions promoting innovation are the answer.

LEADING THE WAY FORWARD

So, as we work to assure safeguards for artists, we will help lead the way forward, which is why we are exploring and finding innovative ways to use this revolutionary technology to create new commercial opportunities for artists and songwriters while simultaneously aiding and protecting human creativity.

I’m very excited about the products we’re seeing and what the future holds. I will update you all further on our progress.

Lucian

Mr. Grainge’s position reframes the conversation from “Can we scrape?” to “How do we get consent and compensate?” That shift matters because AI that clones voices or reconstitutes catalog works is not a neutral utility—it’s a market participant competing with human creators and the rights they rely on.

If everything is “transformative,” then nothing is protected—and that guts not just copyright, but artists’ name–image–likeness (NIL) rights, the right of publicity, and, in some jurisdictions, moral rights. A scrape-first, justify-later posture erases ownership, antagonizes creators living and dead, and makes catalogs unpriceable. Why would Universal—or any other rightsholder—partner with a company that treats works and identity as free training fuel? What’s great about Lucian’s statement is he’s putting a flag in the ground: the industry leader will not do business with bad actors, regardless of the consequences.

What This Means in Practice

  1. Consent as the gate. Voice clones and “new songs” derived from existing songs require affirmative artist approval—full stop.
  2. Provenance as the standard. AI firms that want first-party deals must prove lawful ingestion, audited datasets, and enforceable guardrails against impersonation.
  3. Aligned incentives. Where consent exists, there’s room for discovery tools, creator utilities, and new revenue streams; where it doesn’t, there’s no deal.

Watermarks and “AI-generated” labels don’t cure false endorsement, right-of-publicity violations, or market substitution. Platforms that design, market, or profit from celebrity emulation without consent aren’t innovating—they’re externalizing legal and ethical risk onto artists.

Moral Rights: Why This Resonates Globally

Universal’s consent-first stance will resonate in moral-rights jurisdictions where authors and performers hold inalienable rights of attribution and integrity (e.g., France’s droit moral, Germany’s Urheberpersönlichkeitsrecht). AI voice clones and “sound-alike” outputs can misattribute authorship, distort a creator’s artistic identity, or subject their work to derogatory treatment—classic moral-rights harms. Because many countries recognize post-mortem moral rights and performers’ neighboring rights, the “no consent, no license” rule is not just good governance—it’s internationally compatible rights stewardship.

Industry Leadership vs. the “Opt-Out” Mirage

It is absolutely critical that the industry leader actively opposes the absurd “opt-out” gambit and other sleights of hand Big Technocrats are pushing to drive a Mack truck through so-called text-and-data-mining loopholes. Their playbook is simple: legitimize mass training on copyrighted works first, then dare creators to find buried settings or after-the-fact exclusions. That flips property rights on their head and is essentially a retroactive safe harbor.

As Mr. Grainge notes, large AI companies are pressuring governments to bless training on copyrighted material without owner consent or compensation. Those proposals amount to the unauthorized—and unlawful—exploitation of artists’ rights and property. By refusing to play along, Universal isn’t just protecting its catalog; it’s defending the baseline principle that creative labor isn’t scrapable.

Consent or Nothing

Let’s be honest: if AI labs were serious about licensing, we wouldn’t have come within a narrow miss of a U.S. moratorium on state AI law triggered by their own overreach. That wasn’t just a safe harbor for copyright infringement; it was a safe harbor for everything from privacy, to consumer protection, to child exploitation. That’s why it died 99–1 in the Senate, but it was a close-run thing.

And realize, that’s exactly what they want when they are left to their own devices, so to speak. The “opt-out” mirage, the scraping euphemisms, and the rush to codify TDM loopholes all point the same direction—avoid consent and avoid compensation. Universal’s position is the necessary counterweight: consent-first, provenance-audited, revenue-sharing with artists and songwriters (and I would add nonfeatured musicians and vocalists), or no deal. Anything less invites regulatory whiplash, a race-to-the-bottom for human creativity, and a permanent breach of trust with artists and their estates.

Reading between the lines, Mr. Grainge has identified AI as both a compelling opportunity and an existential crisis. Let’s see if the others come with him and stare down the bad guys.

And YouTube is monetizing Sora videos

[This post first appeared on Artist Rights Watch]

Senator Cruz Joins the States on AI Safe Harbor Collapse—And the Moratorium Quietly Slinks Away

Silicon Valley Loses Bigly

In a symbolic vote that spoke volumes, the U.S. Senate decisively voted 99–1 to strike the toxic AI safe harbor moratorium from the vote-a-rama for the One Big Beautiful Bill Act (HR 1), according to the AP. Senator Ted Cruz, who had previously actively supported the measure, joined the bipartisan chorus in stripping it — an acknowledgment that the proposal had become politically radioactive.

To recap, the AI moratorium would have barred states from regulating artificial intelligence for up to 10 years, tying access to broadband and infrastructure funds to compliance. It triggered an immediate backlash: Republican governors, state attorneys general, parents’ groups, civil liberties organizations, and even independent artists condemned it as a blatant handout to Big Tech with yet another rent-seeking safe harbor.

Marsha Blackburn and Maria Cantwell to the Rescue

Credit where it’s due: Senator Marsha Blackburn (R–TN) was the linchpin in the Senate, working across the aisle with Sen. Maria Cantwell to introduce the amendment that finally killed the provision. Blackburn’s credibility with conservative and tech-wary voters gave other Republicans room to move — and once the tide turned, it became a rout. Her leadership was key to sending the signal to her Republican colleagues–including Senator Cruz–that this wasn’t a hill to die on.

Top Cover from President Trump?

But stripping the moratorium wasn’t just a Senate rebellion. This kind of reversal in must-pass, triple-whip legislation doesn’t happen without top cover from the White House, and in all likelihood, Donald Trump himself. The provision was never a “last stand” issue in the art of the deal. Trump can plausibly say he gave industry players like Masayoshi Son, Meta, and Google a shot, but the resistance from the states made it politically untenable. It was frankly a poorly handled provision from the start, and there’s little evidence Trump was ever personally invested in it. He certainly made no public statements about it at all, which is why I always felt it was such an improbable deal point: it was always intended as a bargaining chip, whether the staff knew it or not.

One thing is for damn sure–it ain’t coming back in the House, which is another way you know you can stick a fork in it, despite the churlish shillery types who are sulking off the pitch.

One final note on the process: it’s unfortunate that the Senate Parliamentarian made such a questionable call when she let the AI moratorium survive the Byrd Bath, despite it being so obviously not germane to reconciliation. The provision never should have made it this far in the first place — but oh well. Fortunately, the Senate stepped in and did what the process should have done from the outset.

Now what?

It ain’t over til it’s over. The battle with Silicon Valley may be over on this issue today, but that’s not to say the war is over. The AI moratorium may reappear, reshaped and rebranded, in future bills. But its defeat in the Senate is important. It proves that state-level resistance can still shape federal tech policy, even when it’s buried in omnibus legislation and wrapped in national security rhetoric.

Cruz’s shift wasn’t a betrayal of party leadership — it was a recognition that even in Washington, federalism still matters. And this time, the states — and our champion Marsha — held the line. 

Brava, madam. Well played.

This post first appeared on MusicTechPolicy

Hey Budweiser, You Give Beer a Bad Name

In a world where zero royalties becomes a brag, and one second of music is one second too far.

Let me set the stage: Cannes Lions is the annual eurotrash…to coin a phrase…circular self-congratulatory hype fest at which the biggest brands and ad agencies in the world, if not the Solar System, spend unreal amounts of money telling each other how wonderful they are. Kind of like HITS Magazine goes to Cannes, but with a real budget. And of course the world’s biggest ad platform–guess who–has a major presence there among the bling and yachts of the elites tied up in Yachtville by the Sea. And of course they give each other prizes, and long-time readers know how much we love a good prize, Nyan Cat-wise.

Enter the King of Swill, the mind-numbingly stupid Budweiser marketing department. Or as they say in Cannes, Le roi de la bibine.

Credit where it’s due: British Bud-hater and our friend Chris Cooke at CMU flagged this jaw-dropper from Cannes Lions, where Budweiser took home the Grand Prix for its “One‑Second Ad” campaign—a series of ultra-short TikTok clips that featured one second of the hooks from iconic songs. The gimmick? Tease the audience just long enough to trigger nostalgia, then let the internet do the rest. The beer is offensive enough to any right-thinking Englishman, but the theft? Ooh la la.

Cannes Clown

Budweiser’s award-winning brag? “Zero ads were skipped. $0 spent on music right$.” Yes, that’s correct–“right$”.

That quote should hang in a museum of creative disinformation.

There’s an old copyright myth known as the “7‑second rule”—the idea that using a short snippet of a song (usually under 7 seconds) doesn’t require a license. It’s pure urban legend. No court has ever upheld such a rule, but it sticks around because music users desperately want it to be true. Budweiser didn’t just flirt with the myth—it took the myth on a date to Short Attention Span Theater, built an ad campaign around it, and walked away with the biggest prize in advertising to the cheers of Googlers everywhere.

When Theft From Artists Becomes a Business Model—Again

But maybe this kind of stunt shouldn’t come as a surprise. When the richest corporations in commercial history are openly scraping, mimicking, and monetizing millions of copyrighted works to train AI models—without permission and without payment—and so far getting away with it, it sends a signal. A signal that says: “This isn’t theft, it’s innovation.” Yeah, that’s the ticket. Give them a prize.

So of course Budweiser’s corporate brethren start thinking: “Me too.”

As Austin songwriter Guy Forsyth wrote in “Long Long Time”: “Americans are freedom-loving people, and nothing says freedom like getting away with it.” That lyric, in this context, resonates like a manifesto for scumbags.

The Immorality of Virality

For artists and the musicians and vocalists who created the value that Budweiser is extracting, the campaign’s success is a masterclass in bad precedent. It’s one thing to misunderstand copyright; it’s another to market that misunderstanding as a feature. When global brands publicly celebrate not paying for music–in Cannes, of all places—while building their ad’s emotional resonance on that very music, they send a corrosive signal to the entire creative economy. And, frankly, to fans.

Oops!… I Did It Again, bragged Budweiser, proudly skipping royalties like it’s Free Fallin’, hoping no one notices they’re just Smooth Criminals playing Cheap Thrills with other people’s work. It’s not Without Me—it’s without paying anyone—because apparently Money for Nothing is still the vibe, and The Sound of Silence is what they expect from artists they’ve ghosted.

Because make no mistake: even one second of a recording can be legally actionable, particularly when the intentional infringing conspiracy gets a freaking award for doing it. That’s not just law—it’s basic respect, which is kind of the same thing. Which makes Budweiser’s campaign less of a legal grey area and more of a cultural red flag with a bunch of zeros. Meaning the ultimate jury award from a real jury, not a Cannes jury.

This is the immorality of virality: weaponizing cultural shorthand to score branding points, while erasing the very artists who make those moments recognizable. When the applause dies down in Yachtville, what’s left is a case study in how to win by stealing — not creating.