There’s a new dance in Washington—it’s called the KowTow
Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests led by an unelected bureaucrat, Silicon Valley Viceroy and billionaire investor David Sacks, whom the New York Times recently called out as a walking conflict of interest.
You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West
Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood
None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws, and threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption,” which is when federal law overrides conflicting state laws. When Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. There is no federal legislation here (EOs don’t count), so there can be no “preemption.”
Who Really Wrote This? The Sacks–Thierer Pipeline
This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI. It sounds that way because it was—Trump himself gave all the credit to David Sacks in his signing ceremony.
– Adam Thierer works at Google’s R Street Institute and pushes “permissionless innovation,” meaning companies should be allowed to harm the public before regulation is allowed.
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, documented by The New York Times, and stands to profit from deregulation.
Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.
The Trump AI Czar—VICEROY Without Senate Confirmation
Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.
He is not Senate‑confirmed. He is not subject to conflict‑of‑interest vetting. He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation.
Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.
Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong
Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.
Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.
And Then There’s Ted Cruz
On top of everything else, the one elected official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.
Why It Matters for Musicians
AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:
– AI systems train on our recordings without consent or compensation.
– Major tech companies use federal power to avoid accountability.
– The EO protects Silicon Valley elites, not artists, fans or consumers.
This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.
It’s Not a National Standard — It’s a Power Grab
What’s happening isn’t leadership. It’s *regulatory capture dressed as patriotism*. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.
What David Sacks and his band of thieves are teaching the world is the lesson he learned from Dot Bomb 1.0—the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.
The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.
But for all its detail, the Times buried the lede.
The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.
Of all the corrupt anecdotes in the New York Times’ must-read article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest that are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out. Here are a couple of examples:
There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.
If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.
Without state enforcement, A.I. companies will face no meaningful accountability for:
child suicide induced by unregulated synthetic content
mass copyright theft embedded into permanent model weights
biometric and voiceprint extraction without consent
data-center sprawl that overwhelms local water, energy, and zoning systems
surveillance architectures exported globally
algorithmic harms that cannot be litigated under preempted state laws
These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes impose duties that can be stricter than federal law. The A.I. moratorium’s preemption would vaporize these claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.
The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.
The public will be living inside the system he designed.
There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. The party labels swap and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.
So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — that the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.
The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.
Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”
The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.
The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Whether it is ten years or five, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.
Ted Turns Texas Glowing
99-1/2 just won’t do—Remember the AI moratorium that was defeated 99-1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 Presidential hopeful Ted Cruz (fundraising off of the Moratorium no doubt for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and the Tech Policy Press:
It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.
Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.
Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.
The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”
Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.
In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).
This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.
The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times during the first go-round, the concept is probably unconstitutional for a host of reasons and will no doubt be challenged immediately.
AI Moratorium Lobbying Explainer for Your Electric Bill
Here are the key shilleries pushing the federal AI moratorium and their backers:
– Promote ‘uniform national standards’ in Congressional meetings
– Operate through and alongside trade groups
PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS
The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.
An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.
THE TIMING COULD NOT BE MORE INSULTING
Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.
And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
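To make that math concrete, here is a minimal back-of-the-envelope sketch using entirely hypothetical numbers (the upgrade cost, load share, and customer count are illustrative assumptions, not figures from any actual rate case) of how a capped cost-of-service allocation pushes the remainder of an AI-driven upgrade onto everyone else’s bills:

```python
# Hypothetical illustration only: how cost-of-service ratemaking can spread
# AI-driven infrastructure costs across all ratepayers once the data center's
# allocated share is capped. None of these numbers come from a real rate case.

annual_upgrade_cost = 120_000_000    # assumed yearly cost of new transmission/substation capacity
datacenter_share_of_load = 0.40      # assumed share of system load allocated to the AI campus
other_customers = 500_000            # assumed number of other meters on the system

# Cost-of-service rules cap the data center at its allocated share of load...
datacenter_allocation = annual_upgrade_cost * datacenter_share_of_load

# ...so the remainder is recovered from everyone else in the rate base.
spread_to_other_ratepayers = annual_upgrade_cost - datacenter_allocation
per_customer_per_year = spread_to_other_ratepayers / other_customers

print(f"Data center's allocated share:   ${datacenter_allocation:,.0f}")
print(f"Recovered from other ratepayers: ${spread_to_other_ratepayers:,.0f}")
print(f"Added to each other bill:        ${per_customer_per_year:,.2f} per year")
```

However the hypothetical inputs are tuned, the structural point is the same: whatever falls outside the data center’s allocated share lands in the general rate base.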
States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.
A CALL TO ACTION
Tell your Members of Congress: No AI moratorium in the NDAA. No backroom preemption. No Big Tech giveaways in the defense budget.
When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.
Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.
The numbers say otherwise. The diagram circulating in the X-thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. So Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.
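As a rough illustration only, with made-up dollar amounts rather than any company’s actual figures, a toy sketch of that loop shows how a small amount of real outside cash can generate a much larger pile of paper valuation:

```python
# Toy model with invented numbers: the "closed liquidity loop" in which an
# investor's cash comes back as its own booked revenue and both sides' paper
# valuations inflate, even though little money entered from outside the loop.

investment_in_lab = 10.0       # assumed: vendor invests $10B in an AI lab
spent_back_on_vendor = 8.0     # assumed: the lab spends $8B of it on the vendor's chips/cloud
revenue_multiple = 20.0        # assumed price-to-revenue multiple the market applies

# The vendor books the lab's spending as new revenue...
vendor_new_revenue = spent_back_on_vendor
# ...and the market capitalizes that revenue into a far larger valuation bump.
vendor_valuation_bump = vendor_new_revenue * revenue_multiple

# The lab's headline valuation also jumps because a marquee investor bought in,
# which inflates the vendor's equity stake on paper as well.
lab_valuation_bump = investment_in_lab * 3.0    # assumed markup on the next funding round
vendor_stake_gain = lab_valuation_bump * 0.10   # assumed 10% stake held by the vendor

cash_that_left_the_loop = investment_in_lab - spent_back_on_vendor
paper_value_created = vendor_valuation_bump + vendor_stake_gain

print(f"Cash that actually left the loop: ${cash_that_left_the_loop:.1f}B")
print(f"Paper 'value' created:            ${paper_value_created:.1f}B")
```

The numbers are invented, but the mechanism matches the pattern described above: the same dollars get counted as investment, as revenue, and as collateral on their way around the circle.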
And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.
Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.
In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.
The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.
Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines spin it into a new doctrine: the right to train AI on copyrighted works. But let’s be clear — no such right exists and probably never will. That doesn’t mean they won’t keep trying.
A “right to train” is not found anywhere in the Copyright Act or any other law. It’s also not found in the fair-use cases the AI lobby leans on. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity. Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” had been there all along.
How the “Right to Train” Narrative Took Shape
The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.
That analogy collapses under scrutiny. First of all, humans typically bought the book they read or checked it out from a library. Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.
But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because they lose when the training was on stolen goods, as in the Anthropic and Meta cases), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t, no court has ever ruled that it is, and none likely ever will.
Fair Use Is Not a Safe Harbor
Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand new safe harbor: a default assumption that all training is fair use unless proven otherwise. They love safe harbors in Silicon Valley and routinely abuse them: Section 230, the DMCA and Title I of the Music Modernization Act.
That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material (as Anthropic, Meta, and probably all of them did to one degree or another) can’t launder infringement through the word “training.”
Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”
Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement like Spotify got in the Music Modernization Act and preempt accountability for the sources they used.
And not only do they want a safe harbor — they want it for free. No licenses, no royalties, no dataset audits, no compensation. What do they want? FREE STUFF. When do they want it? NOW! Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.
The Real Motive Behind the Push
The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.
This is particularly important for books. Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style.
Without books, AI labs have no business. That’s why they steal books. Very simple, really.
Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially-manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and would never agree to.
But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased. The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.
The Myth of the Inevitable Machine
AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.
If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.
You Don’t Need to Steal Art to Cure Cancer: Why Ed Newton-Rex Is Right About AI and Copyright
Ed Newton-Rex said the quiet truth out loud: you don’t need to scrape the world’s creative works to build AI that saves lives. Or even beat the Chinese Communist Party.
It’s a myth that AI “has to” ingest novels and pop lyrics to learn language. Models acquire syntax, semantics, and pragmatics from any large, diverse corpus of natural language. That includes transcribed speech, forums, technical manuals, government documents, Wikipedia, scientific papers, and licensed conversational data. Speech systems learn from audio–text pairs, not necessarily fiction; text models learn distributional patterns wherever language appears. Of course, literary works can enrich style, but they’re not necessary for competence: instruction tuning, dialogue data, and domain corpora yield fluent models without raiding copyrighted art. In short, creative literature is optional seasoning, not the core ingredient for teaching machines to “speak.”
Google’s new cancer-therapy paper proves the point. Their model wasn’t trained on novels, lyrics, or paintings. It was trained responsibly on scientific data. And yet it achieved real, measurable progress in biomedical research. That simple fact dismantles one of Silicon Valley’s most persistent myths: that copyright is somehow an obstacle to innovation.
You don’t need to train on Joni Mitchell to discover a new gene pathway. You don’t need to ingest John Coltrane to find a drug target. AI used for science can thrive within the guardrails of copyright because science itself already has its own open-data ecosystems—peer-reviewed, licensed, and transparent.
The companies like Anthropic and Meta insisting that “fair use” covers mass ingestion of stolen creative works aren’t curing diseases; they’re training entertainment engines. They’re ripping off artists’ livelihoods to make commercial chatbots, story generators, and synthetic-voice platforms designed to compete against the very creators whose works they exploited. That’s not innovation—it’s market capture through appropriation.
They do it for reasons old as time—they do it for the money.
The ethical divide is clear:
AI for discovery builds on licensed scientific data.
AI for mimicry plunders culture to sell imitation.
We should celebrate the first and regulate the second. Upholding copyright and requiring provenance disclosures doesn’t hinder progress—it restores integrity. The same society that applauds AI in medical breakthroughs can also insist that creative industries remain human-centered and law-abiding. Civil-military fusion doesn’t imply that there are only two ingredients in the gumbo of life.
If Google can advance cancer research without stealing art, so can everyone else, and Google cannot keep different rules for the entertainment side of its business or its investment portfolio. The choice isn’t between curing cancer and protecting artists—it’s between honesty and opportunism. The repeated whinging of AI labs about “because China” would be a lot more believable if they used their political influence to get the CCP to release Hong Kong activist Jimmy Lai from stir. We can join Jimmy and his amazingly brave son Sebastian and say “because China”, too. #FreeJimmyLai
Universal Music Group’s CEO Sir Lucian Grainge has put the industry on notice in an internal memo to Universal employees: UMG will not license any AI model that uses an artist’s voice—or generates new songs incorporating an artist’s existing songs—without that artist’s consent. This isn’t just a slogan; it’s a licensing policy, an advocacy position, and deal-making leverage all rolled into one. After the Sora 2 disaster, I have to believe that OpenAI is at the top of the list.
Here’s the memo:
Dear Colleagues,
I am writing today to update you on the progress that we are making on our efforts to take advantage of the developing commercial opportunities presented by Gen AI technology for the benefit of all our artists and songwriters.
I want to address three specific topics:
Responsible Gen AI company and product agreements;
How our artists can participate; and
What we are doing to encourage responsible AI public policies.
UMG is playing a pioneering role in fostering AI’s enormous potential. While our progress is significant, the speed at which this technology is developing makes it important that you are all continually updated on our efforts and well-versed on the strategy and approach.
The foundation of what we’re doing is the belief that together, we can foster a healthy commercial AI ecosystem in which artists, songwriters, music companies and technology companies can all flourish together.
NEW AGREEMENTS
To explore the varied opportunities and determine the best approaches, we have been working with AI developers to put their ideas to the test. In fact, we were the first company to enter into AI-related agreements with companies ranging from major platforms such as YouTube, TikTok and Meta to emerging entrepreneurs such as BandLab, Soundlabs, and more. Both creatively and commercially our portfolio of AI partnerships continues to expand.
Very recently, Universal Music Japan announced an agreement with KDDI, a leading Japanese telecommunications company, to develop new music experiences for fans and artists using Gen AI. And we are very actively engaged with nearly a dozen different companies on significant new products and service plans that hold promise for a dramatic expansion of the AI music landscape. Further, we’re seeing other related advancements. While just scratching the surface of AI’s enormous potential, Spotify’s recent integration with ChatGPT offers a pathway to move fluidly from query and discovery to enjoyment of music—and all within a monetized ecosystem.
HOW OUR ARTISTS CAN PARTICIPATE
Based on what we’ve done with our AI partners to date, and the new discussions that are underway, we can unequivocally say that AI has the potential to deliver creative tools that will enable us to connect our artists with their fans in new ways—and with advanced capability on a scale we’ve never encountered.
Further, I believe that Agentic AI, which dynamically employs complex reasoning and adaptation, has the potential to revolutionize how fans interact with and discover music.
I know that we will successfully navigate as well as seize these opportunities and that these new products could constitute a significant source of new future revenue for artists and songwriters.
We will be actively engaged in discussing all of these developments with the entire creative community.
While some of the biggest opportunities will require further exploration, we are excited by the compelling AI models we’re seeing emerge.
We will only consider advancing AI products based on models that are trained responsibly. That is why we have entered into agreements with AI developers such as ProRata and KLAY, among others, and are in discussions with numerous additional like-minded companies whose products provide accurate attribution and tools which empower and compensate artists—products that both protect music and enhance its monetization.
And to be clear—and this is very important—we will NOT license any model that uses an artist’s voice or generates new songs which incorporate an artist’s existing songs without their consent.
New AI products will be joined by many other similar ones that will soon be coming to market, and we have established teams throughout UMG that will be working with artists and their representatives to bring these opportunities directly to them.
RESPONSIBLE PUBLIC POLICIES COVERING AI
We remain acutely aware of the fact that large and powerful AI companies are pressuring governments around the world to legitimize the training of AI technology on copyrighted material without owner consent or compensation, among other proposals.
To be clear: all these misguided proposals amount to nothing more than the unauthorized (and, we believe, illegal) exploitation of the rights and property of creative artists.
In addition, we are acting in the marketplace to see our partners embrace responsible and ethical AI policies and we’re proud of the progress being made there. For example, having accurately predicted the rapid rise of AI “slop” on streaming platforms, in 2023 we introduced Artist-Centric principles to combat what is essentially platform pollution. Since then, many of our platform partners have made significant progress in putting in place measures to address the diversion of royalties, infringement and fraud—all to the benefit of the entire music ecosystem.
We commend our partners for taking action to address this urgent issue, consistent with our Artist-Centric approach. Further, we recently announced an agreement with SoundPatrol, a new company led by Stanford scientists that employs patented technology to protect artists’ work from unauthorized use in AI music generators.
We are confident that by displaying our willingness as a community to embrace those commercial AI models which value and enhance human artistry, we are demonstrating that market-based solutions promoting innovation are the answer.
LEADING THE WAY FORWARD
So, as we work to assure safeguards for artists, we will help lead the way forward, which is why we are exploring and finding innovative ways to use this revolutionary technology to create new commercial opportunities for artists and songwriters while simultaneously aiding and protecting human creativity.
I’m very excited about the products we’re seeing and what the future holds. I will update you all further on our progress.
Lucian
Mr. Grainge’s position reframes the conversation from “Can we scrape?” to “How do we get consent and compensate?” That shift matters because AI that clones voices or reconstitutes catalog works is not a neutral utility—it’s a market participant competing with human creators and the rights they rely on.
If everything is “transformative” then nothing is protected—and that guts not just copyright, but artists’ name–image–likeness (NIL), right of publicity and, in some jurisdictions, moral rights. A scrape-first, justify-later posture erases ownership, antagonizes creators living and dead, and makes catalogs unpriceable. Why would Universal—or any other rightsholder—partner with a company that treats works and identity as free training fuel? What’s great about Lucian’s statement is he’s putting a flag in the ground: the industry leader will not do business with bad actors, regardless of the consequences.
What This Means in Practice
Consent as the gate. Voice clones and “new songs” derived from existing songs require affirmative artist approval—full stop.
Provenance as the standard. AI firms that want first-party deals must prove lawful ingestion, audited datasets, and enforceable guardrails against impersonation.
Aligned incentives. Where consent exists, there’s room for discovery tools, creator utilities, and new revenue streams; where it doesn’t, there’s no deal.
Watermarks and “AI-generated” labels don’t cure false endorsement, right-of-publicity violations, or market substitution. Platforms that design, market, or profit from celebrity emulation without consent aren’t innovating—they’re externalizing legal and ethical risk onto artists.
Moral Rights: Why This Resonates Globally
Universal’s consent-first stance will resonate in moral-rights jurisdictions where authors and performers hold inalienable rights of attribution and integrity (e.g., France’s droit moral, Germany’s Urheberpersönlichkeitsrecht). AI voice clones and “sound-alike” outputs can misattribute authorship, distort a creator’s artistic identity, or subject their work to derogatory treatment—classic moral-rights harms. Because many countries recognize post-mortem moral rights and performers’ neighboring rights, the “no consent, no license” rule is not just good governance—it’s internationally compatible rights stewardship.
Industry Leadership vs. the “Opt-Out” Mirage
It is absolutely critical that the industry leader actively opposes the absurd “opt-out” gambit and other sleights of hand Big Technocrats are pushing to drive a Mack truck through so-called text-and-data-mining loopholes. Their playbook is simple: legitimize mass training on copyrighted works first, then dare creators to find buried settings or after-the-fact exclusions. That flips property rights on their head and is essentially a retroactive safe harbor.
As Mr. Grainge notes, large AI companies are pressuring governments to bless training on copyrighted material without owner consent or compensation. Those proposals amount to the unauthorized—and unlawful—exploitation of artists’ rights and property. By refusing to play along, Universal isn’t just protecting its catalog; it’s defending the baseline principle that creative labor isn’t scrapable.
Consent or Nothing
Let’s be honest: if AI labs were serious about licensing, we wouldn’t have come within a whisker of a U.S. state-law AI moratorium triggered by their own overreach. That wasn’t just a safe harbor for copyright infringement; that was a safe harbor for everything from privacy, to consumer protection, to child exploitation, to everything. That’s why it died 99-1 in the Senate, but it was a close-run thing.
And realize, that’s exactly what they want when they are left to their own devices, so to speak. The “opt-out” mirage, the scraping euphemisms, and the rush to codify TDM loopholes all point the same direction—avoid consent and avoid compensation. Universal’s position is the necessary counterweight: consent-first, provenance-audited, revenue-sharing with artists and songwriters (and I would add nonfeatured artists and vocalists) or no deal. Anything less invites regulatory whiplash, a race-to-the-bottom for human creativity, and a permanent breach of trust with artists and their estates.
Reading between the lines, Mr. Grainge has identified AI as both a compelling opportunity and an existential crisis. Let’s see if the others come with him and stare down the bad guys.
UK artists including Paul McCartney, Kate Bush and Elton John urged Prime Minister Keir Starmer to protect creators before a UK-US tech pact tied to President Donald Trump’s visit. In a letter, they accuse Labour of blocking transparency rules that would force AI firms to disclose training data, and warn that proposals enabling training on copyrighted works without permission could let an artist’s life’s work be stolen. Citing human rights instruments like the International Covenant on Economic, Social and Cultural Rights, the Berne Convention and the European Convention on Human Rights, they frame the issue as a human-rights breach. Peer Beeban Kidron criticised US-heavy working groups. The Government says no decision has been made yet and promises a report by March.