Say No to Suno

Late last year, thieves disguised as construction workers broke into the Louvre during broad daylight, grabbed more than $100 million worth of crown jewels, and roared off on their motorbikes into the busy streets of Paris. While some of those thieves were later arrested, the jewelry they stole has yet to be recovered, and many fear those historic works of artistry have already been recut, reset, and resold.

Closer to home, but no less nefarious, is the brazen rip-off of artists enabled by irresponsible AI, whose profiteers are recutting, remixing, and reselling original works of artistry as something new.  The hijacking of the world’s entire treasure-trove of music floods platforms with AI slop and dilutes the royalty pools of legitimate artists from whose music this slop is derived. 

Meanwhile, those who are promoting this new business model are operating in broad daylight, too – minus the yellow safety vests.  Chief among them is AI music company Suno, the brazen “smash and grab” platform whose “Make it Music” ad campaign suggests that the most personal and meaningful forms of music can now be fabricated by machinery trained, without authorization, on human artists’ work.

How significant is this activity?  Publicly revealed data says Suno is used to generate 7 million tracks a day, a massive quantity that suggests a dominant market share of AI tracks.  According to recent reports, Deezer “deems 85% of streams of fully AI-generated tracks [on its service] to be fraudulent,” and that such tracks include outputs from major generative models.  As JP Morgan’s analysts said, Deezer’s data “should be indicative of the broader market.”  Suno has yet to demonstrate persuasively that its platform does not, in practice, serve as a scalable input into streaming-fraud schemes — raising a serious concern that Suno has, in effect, become a fraud-fodder factory on an industrial scale.

In a February 2 LinkedIn post, Paul Sinclair, Suno’s Chief Music Officer, claims that his company’s platform is about “empowerment” that enables “billions of fans to create and play with music.”  He argues that closed systems are “walled gardens” that deny people access to the full joy of music.

Ironically, Sinclair’s choice of analogy undermines his own argument.  Ask yourself: just why are most gardens surrounded by fences or walls?  To keep out rabbits, deer, raccoons and wild pigs seeking a free lunch.  We cultivate, nurture and protect our gardens precisely because that makes them much more productive over the long run.

While Sinclair may be loath to admit it, AI is fundamentally different from past disruptive innovations in the music industry.  The phonograph, cassettes, CDs, MP3s, downloads, streaming – all these technologies were about the reproduction and distribution of creative work.  By contrast, irresponsible AI like Suno appropriates and plunders such creative work while undermining the commercial ecosystem for artists.

Think back to the days of Napster.  What brought the music industry back from the ruinous abyss of unfettered digital piracy?  It was the very “closed systems” that Sinclair derides as exclusionary.  At least streaming platforms maintain access controls and content management systems that enable creator compensation, even if the economic outcomes for many creators remain inadequate.  Should we be against Apple Music, Spotify, Deezer, YouTube Music, and Amazon Music?  What about Netflix, Disney+ and HBO, too, while we’re at it?

At its core, Sinclair’s argument is just a tired remix of the old trope that “information wants to be free.”  What that really means is: “We want your music for free.”

Artists need to understand Suno’s game.  They are not putting technology in the service of artists; they are putting artists in the service of their technology.  Every time the platform uses artists’ creations, those creations are unwittingly contributed to endless derivatives of the artists’ own work, not to mention AI slop, with limited or no remuneration back to the human creators.  Suno built its business on our backs, scraping the world’s cultural output without permission, then competing against the very works it exploited.

It’s also important to keep in mind that using Suno to generate audio output calls into question the copyrightability of whatever Suno creates.  Copyright authorities around the world, including the US Copyright Office, have been clear that generative AI outputs are largely ineligible for copyright – meaning the economic value of the Suno creation lies solely with Suno, not with the artist using it.  The only ones gaining empowerment from Suno are Suno themselves.

Many in our community are embracing responsible AI as a tool for creation, and as a means for fans to explore and interact with our artistry.  That’s wonderful.  But it’s not the same as creating an environment where AI-generated works sourced from our music are mass distributed to dilute our royalties or, worse yet, reward those actively seeking to commit fraud.  Artists need to know the difference – not all AI platforms are the same, and Suno, which is being sued for copyright infringement, is not a platform artists should trust.

Responsible AI-generated music must evolve within a framework that respects and remunerates artists, enhances human creativity rather than supplants it, and empowers fans to engage with the music they love.  At the same time, AI services must preclude mass distribution of slop and prevent fraudsters from destroying the very ecosystem that has been built to reward and sustain artists and audiences alike.

All of us, including billions of music fans, share an urgent, deep and abiding interest in protecting and rewarding human genius, even as AI continues to change our industry and the world in unimaginable ways.  So in 2026, even as the Louvre continues to revamp its own approach to security, we in the arts must rise to confront those who would “smash-and-grab” our creativity for their own benefit.

Together, while embracing innovation, we must work to establish more effective safeguards – both legal and technological – that better promote and protect all creative artists, our intellectual property, and the spark of human genius.

Say no to Suno. Say yes to the beauty and bounty of the gardens that feed us all.

Signed: 

Ron Gubitz, Executive Director, Music Artist Coalition

Helienne Lindvall, Songwriter and President, European Composer and Songwriter Alliance

David C. Lowery, Artist and Editor, The Trichordist

Tift Merritt, Artist, Practitioner in Residence, Duke University, and Artist Rights Alliance Board Member

Blake Morgan, Artist, Producer, and President, ECR Music Group

Abby North, President, North Music Group

Chris Castle, Artist Rights Institute

Synthetic Emotion from The Music Department: Suno’s Unsettling Ad Campaign and the Return of Orwell’s Machine-Made Culture from 1984

In George Orwell’s 1984, the “versificator” was a machine designed to produce poetry, songs, and sentimental verse synthetically, without human thought or feeling. Its purpose was not artistic expression but industrial-scale cultural production—filling the air with endless, disposable content to occupy attention and shape perception. Three-quarters of a century later, the comparison to modern generative music systems such as Suno is difficult to ignore. While the technologies differ dramatically, the underlying question is strikingly similar: what happens when music is produced by machines at scale rather than by human experience?

Orwell’s versificator was built for scale, not meaning (reminding you of anyone?). It generated formulaic songs for the masses, optimized for emotional familiarity rather than originality. Suno, by contrast, uses sophisticated machine learning trained on vast corpora of human-created music to generate complete recordings on demand that would be the envy of Big Brother’s Music Department. Suno can reportedly generate millions of tracks per day, a level of output impossible in any human-centered musical economy. When music becomes infinitely reproducible, the limiting factor shifts from creation to distribution and attention—precisely the dynamic Orwell imagined.

Nothing captures the versificator analogy more vividly than Suno’s own dystopian-style “first kiss” advertising campaign. In one widely circulated spot, the product is promoted through a stylized, synthetic emotional narrative that emphasizes instant, machine-generated creation of musical cliché, untethered from human musicians, vocalists, or composers. The message is not about artistic struggle, collaboration, or lived expression—it is about frictionless, mediocre production. The ad unintentionally echoes Orwell’s warning: when culture can be manufactured instantly, expression becomes simulation. And on top of it, those ads are just downright creepy.

The versificator also blurred authorship. In 1984, no individual poet existed behind the machine’s output; creativity was subsumed into a system. Suno raises a comparable question. If a system trained on thousands or millions of human performances produces a new track, where does authorship reside? With the user who typed a prompt? With the engineers who built the model? With the countless musicians whose expressive choices shaped the training data? Or nowhere at all? This diffusion of authorship challenges long-standing cultural and legal assumptions about what it means to “create” music.

Another parallel lies in standardization. The versificator produced content that was emotionally predictable—pleasant, familiar, subservient and safe. Generative music systems often display a similar gravitational pull toward the stylistic averages embedded in their training data: human expression averaged into pablum. The result can be competent, even polished output that nevertheless lacks the unpredictability, risk, and individual voice associated with human artistry. Orwell’s concern was not that machine-generated culture would be bad, but that it would be flattened—replacing lived expression with algorithmic imitation. Substitutional, not substantial.

There is also a structural similarity in scale and economics. The versificator’s value to The Party lay in its ability to replace human labor in cultural production and to force the creation of projects that humans would find too creepy. Suno and similar systems raise analogous questions for modern musicians, particularly session players and composers whose work historically formed the backbone of recorded music. When a single system can generate instrumental tracks, arrangements, and stylistic variations instantly, the economic pressure on human contributors becomes obvious. Orwell imagined machines replacing poets; today the substitution pressure may fall first on instrumental performance, arrangement, sound design, and production roles.

Yet the comparison has limits, and those limits matter. The versificator was a tool of centralized control in a dystopian state, designed to narrow human thought. Suno operates in a pluralistic technological environment where many artists themselves experiment with AI as a creative instrument. Unlike Orwell’s machine, generative music systems can be used collaboratively, interactively, and sometimes in ways that expand rather than suppress creative exploration. The technology is not inherently dystopian; its impact depends on how institutions, markets, and creators choose to shape it.

A deeper difference lies in intention. Orwell’s versificator was never meant to create art; it was meant to simulate it. Modern generative music systems are often framed as tools that can assist, augment, or inspire human creativity. Some artists use AI to prototype ideas, explore unfamiliar styles, or generate textures that would be difficult to produce otherwise. In these contexts, the machine functions less like a replacement and more like a new instrument—one whose cultural role is still evolving.

Still, Orwell’s versificator is highly relevant to understanding Suno’s corporate direction. When cultural production becomes industrialized, quantity can overwhelm meaning. The risk is not merely that machine-generated music exists, but that its scale reshapes attention, value, and recognition. If millions of synthetic tracks flood listening environments, as is already happening on some large DSPs, the signal of individual human expression may become harder to perceive—even if human creativity continues to exist beneath the surface.

The comparison between Suno and the versificator symbolizes the moment when technology challenges the boundaries of authorship, creativity, and cultural labor. Orwell warned of a world where machines produced endless culture without human voice. Today’s question is subtler: can society integrate generative systems in ways that preserve the distinctiveness of human expression rather than dissolving it into algorithmic slop?

The answer will not come from technology alone. It will depend on choices—legal, cultural, and economic—about how machine-generated music is labeled, valued, and integrated into the broader creative ecosystem. Orwell imagined a future where the machine replaced the poet. The task now is to ensure that, even in an age of generative AI, the human voice remains audible.

Stealing Isn’t Innovation!

Don’t let the so-called “AI czar” sell you the idea that changing the law to legalize taking artists’ work without consent is innovation. It isn’t.

Innovation creates new value. The AI boondoggle takes existing value from creators and communities and hands it to a small number of tech companies—without permission, without payment, and without accountability but with a nuclear reactor next to your house.

Artists aren’t raw material. They’re rights-holders under U.S. law. Rewriting those rights to subsidize AI business models isn’t progress—it’s a policy choice to reward theft at scale.

AI can thrive without gutting creative rights. But that requires consent, licensing, and fair compensation—not retroactive immunity dressed up as innovation.

Stealing isn’t innovation. It’s just stealing, with a press strategy.

Find out more at Stealing Isn’t Innovation and @human_artistry

NYT: Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends


The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.

But for all its detail, the Times buried the lede.

The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.

Of all the corrupt anecdotes in the New York Times’ must-read article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest that are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out.


There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.


If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.

Without state enforcement, A.I. companies will face no meaningful accountability for:

  • child suicide induced by unregulated synthetic content
  • mass copyright theft embedded into permanent model weights
  • biometric and voiceprint extraction without consent
  • data-center sprawl that overwhelms local water, energy, and zoning systems
  • surveillance architectures exported globally
  • algorithmic harms that cannot be litigated under preempted state laws

These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes impose duties possibly stricter than federal law. The A.I. moratorium’s preemption would vaporize these claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.

The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.

The public will be living inside the system he designed.

There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. The party labels and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.

[Image caption: “Of all the CEOs Google interviewed, Eric Schmidt was the only one that had been to Burning Man, which was a major plus.”]

So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.

The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.

Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”

The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.

@ArtistRights Institute Newsletter 11/17/25: Highlights from a fast-moving week in music policy, AI oversight, and artist advocacy.

American Music Fairness Act

Don’t Let Congress Reward the Stations That Don’t Pay Artists (Editor Charlie/Artist Rights Watch)

Trump AI Executive Order

White House drafts order directing Justice Department to sue states that pass AI regulations (Gerrit De Vynck and Nitasha Tiku/Washington Post)

DOJ Authority and the “Because China” Trump AI Executive Order (Chris Castle/MusicTech.Solutions)

THE @DAVIDSACKS/ADAM THIERER EXECUTIVE ORDER CRUSHING PROTECTIVE STATE LAWS ON AI—AND WHY NO ONE SHOULD BE SURPRISED THAT TRUMP TOOK THE BAIT

Bartz Settlement

WHAT $1.5 BILLION GETS YOU:  AN OBJECTOR’S GUIDE TO THE BARTZ SETTLEMENT (Chris Castle/MusicTechPolicy)

Ticketing

StubHub’s First Earnings Faceplant: Why the Ticket Reseller Probably Should Have Stayed Private (Chris Castle/ArtistRightsWatch)

The UK Finally Moves to Ban Above-Face-Value Ticket Resale (Chris Castle/MusicTech.Solutions)

Ashley King: Oasis Praises Victoria’s Strict Anti-Scalping Laws While on Tour in Oz — “We Can Stop Large-Scale Scalping In Its Tracks” (Artist Rights Watch/Digital Music News)

NMPA/Spotify Video Deal

GUEST POST: SHOW US THE TERMS: IMPLICATIONS OF THE SPOTIFY/NMPA DIRECT AUDIOVISUAL LICENSE FOR INDEPENDENT SONGWRITERS (Gwen Seale/MusicTechPolicy)

WHAT WE KNOW—AND DON’T KNOW—ABOUT SPOTIFY AND NMPA’S “OPT-IN” AUDIOVISUAL DEAL (Chris Castle/MusicTechPolicy)

It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Whether it is ten years or five, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—Remember the AI moratorium that was defeated 99-1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 Presidential hopeful Ted Cruz (no doubt fundraising off the Moratorium for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and the Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times the first time around, the concept is probably unconstitutional for a host of reasons and will no doubt be immediately challenged.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

  • INCOMPAS / AI Competition Center (AICC). Supporters/funders: Amazon, Google, Meta, Microsoft, and telecom/cloud companies. Role: leads the push for a 10-year state-law preemption, arguing the moratorium prevents a “patchwork” of laws. Notes: identified as the central industry driver.
  • Consumer Technology Association (CTA). Supporters/funders: Big Tech, electronics, and platform-economy firms. Role: lobbying for federal preemption; opposed aggressive state AI laws. Notes: high influence with Commerce/Appropriations staff.
  • American Edge Project. Supporters/funders: Meta-backed advocacy org. Role: frames preemption as necessary for U.S. competitiveness vs. China; backed the moratorium. Notes: used as an indirect political vehicle for Meta.
  • Abundance Institute. Supporters/funders: tech investors and deregulatory donors. Role: argues the moratorium is necessary for innovation; publicly predicts its return. Notes: messaging aligns with Silicon Valley VCs.
  • R Street Institute. Supporters/funders: market-oriented donors and tech-aligned funders. Role: originated the “learning period” moratorium concept in 2024 papers by Adam Thierer. Notes: not a lobby shop, but provides the intellectual framework.
  • Corporate lobbyists (Amazon, Google, Microsoft, Meta, OpenAI, etc.). Supporters/funders: internal lobbying shops plus outside firms. Role: promote “uniform national standards” in Congressional meetings. Notes: operate through and alongside trade groups.

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
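
To make the rate arithmetic concrete, here is a minimal, purely illustrative sketch. Every figure in it is a hypothetical assumption (the upgrade cost, load numbers, and customer classes are invented, not drawn from any utility filing); the point is only the mechanics of load-share allocation:

```python
# Hypothetical cost-of-service allocation. All figures are invented
# for illustration; no actual utility data is used.

upgrade_cost = 120_000_000  # assumed cost of new lines to serve the AI load ($)

# Assumed annual energy by customer class (MWh) after the data center connects.
load_mwh = {
    "residential": 900_000,
    "commercial": 600_000,
    "ai_data_center": 1_500_000,  # the new load that triggered the upgrade
}

total_mwh = sum(load_mwh.values())

# Cost-of-service rules allocate shared infrastructure by each class's
# share of load; the AI lab cannot be charged above its allocated share.
for customer_class, mwh in load_mwh.items():
    share = mwh / total_mwh
    print(f"{customer_class}: {share:.0%} of load, ${upgrade_cost * share:,.0f} of the upgrade")
```

On these invented numbers, existing residential and commercial customers are allocated half of a $120 million upgrade triggered entirely by the new load.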

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X-thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue, and its stake in OpenAI inflates. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
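
To see why the word “circular” fits, it helps to treat the flows described above as edges in a directed graph and simply walk them. This is an illustrative sketch only: the edge labels echo the deals named in this post (the $300 billion Oracle figure is the reported one; the other labels carry no dollar amounts), and the traversal is generic, not a model of any actual financial analysis:

```python
# Treat the deals described above as edges in a directed graph and walk
# them until a path returns to its starting company.
flows = {
    "Microsoft": [("OpenAI", "equity investment")],
    "OpenAI": [("Microsoft", "cloud spend"),
               ("Nvidia", "chip purchases"),
               ("Oracle", "$300B cloud deal")],
    "Nvidia": [("OpenAI", "equity investment")],
    "Oracle": [("Nvidia", "chip purchases")],
}

def find_cycle(start, node, path, visited):
    """Depth-first walk that stops when a path returns to its starting company."""
    for nxt, label in flows.get(node, []):
        if nxt == start:
            return path + [(nxt, label)]
        if nxt not in visited:
            found = find_cycle(start, nxt, path + [(nxt, label)], visited | {nxt})
            if found:
                return found
    return None

cycle = find_cycle("Microsoft", "Microsoft", [("Microsoft", "start")], {"Microsoft"})
print(" -> ".join(f"{company} ({label})" for company, label in cycle))
```

The walk comes straight back to where it started, which is the definition of a closed loop.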

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

There Is No ‘Right to Train’: How AI Labs Are Trying to Manufacture a Safe Harbor for Theft

Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines declare a new doctrine: a “right to train” AI on copyrighted works. But let’s be clear — no such right exists, and probably never will.  That doesn’t mean they won’t keep trying.

A “right to train” is not found anywhere in the Copyright Act or any other law.  Nor is it found in the fair-use cases that the AI lobby leans on. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity.  Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” had been there all along.

How the “Right to Train” Narrative Took Shape

The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.

That analogy collapses under scrutiny. First of all, humans typically buy the book they read or check it out from a library.  Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.

But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because they lose on training on stolen goods, as in the Anthropic and Meta cases), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t; no court has ever ruled that it is, and likely none ever will.

Fair Use Is Not a Safe Harbor

Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand-new safe harbor: a default assumption that all training is fair use unless proven otherwise.  They love safe harbors in Silicon Valley and routinely abuse them, as with Section 230, the DMCA, and Title I of the Music Modernization Act.

That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it.  A developer who trains on pirated or paywalled material, as Anthropic, Meta, and probably all of them have to one degree or another, can’t launder infringement through the word “training.”

Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”

Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement, like the pass Spotify got in the Music Modernization Act, and preempt accountability for the sources they used.

And not only do they want a safe harbor — they want it for free.  No licenses, no royalties, no dataset audits, no compensation. What do they want?  FREE STUFF.  When do they want it?  NOW!  Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.

The Real Motive Behind the Push

The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.

This is particularly important for books.  Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style. 

Without books, AI labs have no business.  That’s why they steal books.  Very simple, really.

Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and never would.

But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased.  The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.

The Myth of the Inevitable Machine

AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.

If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.

“You don’t need to train on novels and pop songs to get the benefits of AI in science” @ednewtonrex

You Don’t Need to Steal Art to Cure Cancer: Why Ed Newton-Rex Is Right About AI and Copyright

Ed Newton-Rex said the quiet truth out loud: you don’t need to scrape the world’s creative works to build AI that saves lives. Or even beat the Chinese Communist Party.

It’s a myth that AI “has to” ingest novels and pop lyrics to learn language. Models acquire syntax, semantics, and pragmatics from any large, diverse corpus of natural language. That includes transcribed speech, forums, technical manuals, government documents, Wikipedia, scientific papers, and licensed conversational data. Speech systems learn from audio–text pairs, not necessarily fiction; text models learn distributional patterns wherever language appears. Of course, literary works can enrich style, but they’re not necessary for competence: instruction tuning, dialogue data, and domain corpora yield fluent models without raiding copyrighted art. In short, creative literature is optional seasoning, not the core ingredient for teaching machines to “speak.”
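
As a purely hypothetical sketch of what “licensed and open data only” can look like in a pipeline, consider a simple provenance filter. The source names and license tags below are invented for illustration; this is not a description of any lab’s actual data pipeline:

```python
# Hypothetical provenance filter for a training corpus. The source names
# and license tags are illustrative assumptions, not any lab's real pipeline.
ALLOWED_LICENSES = {"public_domain", "cc_by", "government_work", "directly_licensed"}

documents = [
    {"source": "gov_hearing_transcripts", "license": "government_work"},
    {"source": "open_access_papers", "license": "cc_by"},
    {"source": "licensed_dialogue_data", "license": "directly_licensed"},
    {"source": "scraped_fiction", "license": "unknown"},  # provenance not cleared
]

def provenance_filter(docs):
    """Keep only documents whose license is affirmatively cleared for training."""
    return [d for d in docs if d["license"] in ALLOWED_LICENSES]

corpus = provenance_filter(documents)
print(f"Kept {len(corpus)} of {len(documents)} sources; dropped:",
      [d["source"] for d in documents if d not in corpus])
```

The design point is the default it encodes: material of unknown provenance stays out of the corpus unless affirmatively cleared, rather than going in unless someone objects.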

Google’s new cancer-therapy paper proves the point. Their model wasn’t trained on novels, lyrics, or paintings. It was trained responsibly on scientific data. And yet it achieved real, measurable progress in biomedical research. That simple fact dismantles one of Silicon Valley’s most persistent myths: that copyright is somehow an obstacle to innovation.

You don’t need to train on Joni Mitchell to discover a new gene pathway. You don’t need to ingest John Coltrane to find a drug target. AI used for science can thrive within the guardrails of copyright because science itself already has its own open-data ecosystems—peer-reviewed, licensed, and transparent.

Companies like Anthropic and Meta that insist “fair use” covers mass ingestion of stolen creative works aren’t curing diseases; they’re training entertainment engines. They’re ripping off artists’ livelihoods to make commercial chatbots, story generators, and synthetic-voice platforms designed to compete against the very creators whose works they exploited. That’s not innovation—it’s market capture through appropriation.

They do it for reasons old as time—they do it for the money.

The ethical divide is clear:

  • AI for discovery builds on licensed scientific data.
  • AI for mimicry plunders culture to sell imitation.

We should celebrate the first and regulate the second. Upholding copyright and requiring provenance disclosures doesn’t hinder progress—it restores integrity. The same society that applauds AI in medical breakthroughs can also insist that creative industries remain human-centered and law-abiding. Civil-military fusion doesn’t imply that there are only two ingredients in the gumbo of life.

If Google can advance cancer research without stealing art, so can everyone else, and Google can stop keeping different rules for the entertainment side of its business and investment portfolio. The choice isn’t between curing cancer and protecting artists—it’s between honesty and opportunism. The repeated whinging of AI labs about “because China” would be a lot more believable if they used their political influence to get the CCP to release Hong Kong activist Jimmy Lai from stir. We can join Jimmy and his amazingly brave son Sebastian and say “because China”, too. #FreeJimmyLai