@ArtistRights Institute Newsletter 11/17/25: Highlights from a fast-moving week in music policy, AI oversight, and artist advocacy.

American Music Fairness Act

Don’t Let Congress Reward the Stations That Don’t Pay Artists (Editor Charlie/Artist Rights Watch)

Trump AI Executive Order

White House drafts order directing Justice Department to sue states that pass AI regulations (Gerrit De Vynck and Nitasha Tiku/Washington Post)

DOJ Authority and the “Because China” Trump AI Executive Order (Chris Castle/MusicTech.Solutions)

THE @DAVIDSACKS/ADAM THIERER EXECUTIVE ORDER CRUSHING PROTECTIVE STATE LAWS ON AI—AND WHY NO ONE SHOULD BE SURPRISED THAT TRUMP TOOK THE BAIT

Bartz Settlement

WHAT $1.5 BILLION GETS YOU:  AN OBJECTOR’S GUIDE TO THE BARTZ SETTLEMENT (Chris Castle/MusicTechPolicy)

Ticketing

StubHub’s First Earnings Faceplant: Why the Ticket Reseller Probably Should Have Stayed Private (Chris Castle/ArtistRightsWatch)

The UK Finally Moves to Ban Above-Face-Value Ticket Resale (Chris Castle/MusicTech.Solutions)

Ashley King: Oasis Praises Victoria’s Strict Anti-Scalping Laws While on Tour in Oz — “We Can Stop Large-Scale Scalping In Its Tracks” (Artist Rights Watch/Digital Music News)

NMPA/Spotify Video Deal

GUEST POST: SHOW US THE TERMS: IMPLICATIONS OF THE SPOTIFY/NMPA DIRECT AUDIOVISUAL LICENSE FOR INDEPENDENT SONGWRITERS (Gwen Seale/MusicTechPolicy)

WHAT WE KNOW—AND DON’T KNOW—ABOUT SPOTIFY AND NMPA’S “OPT-IN” AUDIOVISUAL DEAL (Chris Castle/MusicTechPolicy)

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates the value of its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions of dollars in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
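If it helps to see why this is a closed loop, here is a minimal, illustrative sketch (ours, not from the X thread) that models the flows named in the paragraph above as a directed graph and runs a standard depth-first search for cycles. The companies and edges come straight from the paragraph; dollar amounts, terms, and timing are deliberately left out, and the helper names are hypothetical.

```python
# Toy model of the money flows described above, treated as a directed
# graph. Edges come straight from the paragraph; the cycle search is a
# standard depth-first walk. An illustration, not a financial model.

flows = [
    ("Microsoft", "OpenAI", "equity investment"),
    ("OpenAI", "Microsoft", "cloud-compute spending"),
    ("Nvidia", "OpenAI", "equity investment"),
    ("OpenAI", "Nvidia", "chip purchases"),
    ("Oracle", "Nvidia", "chip purchases"),
    ("OpenAI", "Oracle", "cloud deal"),
]

graph = {}
for payer, recipient, _ in flows:
    graph.setdefault(payer, []).append(recipient)

def find_cycles(graph):
    """Return the simple cycles in the flow graph, one per set of firms."""
    seen, cycles = set(), []
    def walk(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:  # the money has come back around: a closed loop
                cycle = path[path.index(nxt):]
                if frozenset(cycle) not in seen:
                    seen.add(frozenset(cycle))
                    cycles.append(cycle + [nxt])
            else:
                walk(nxt, path + [nxt])
    for start in graph:
        walk(start, [start])
    return cycles

for cycle in find_cycles(graph):
    print(" -> ".join(cycle))
# Microsoft -> OpenAI -> Microsoft
# OpenAI -> Nvidia -> OpenAI
# OpenAI -> Oracle -> Nvidia -> OpenAI
```

Every path of “growth” circles back to where it started, which is the point.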

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

There Is No ‘Right to Train’: How AI Labs Are Trying to Manufacture a Safe Harbor for Theft

Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines declare a new doctrine born of spin: a “right to train” AI on copyrighted works. But let’s be clear — no such right exists, and it probably never will. That doesn’t mean they won’t keep trying.

A “right to train” is not found anywhere in the Copyright Act or any other law. It is not found in the fair-use cases the AI lobby leans on, either. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity. Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” had been there all along.

How the “Right to Train” Narrative Took Shape

The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.

That analogy collapses under scrutiny. First of all, humans typically buy the books they read or check them out from a library. Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.

But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because, as in the Anthropic and Meta cases, they lose when the training copies were stolen goods), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t, no court has ever ruled that it is, and none likely ever will.

Fair Use Is Not a Safe Harbor

Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand-new safe harbor: a default assumption that all training is fair use unless proven otherwise. Silicon Valley loves safe harbors and routinely abuses them, as it has Section 230, the DMCA, and Title I of the Music Modernization Act.

That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material (as Anthropic, Meta, and probably all of them have, to one degree or another) can’t launder infringement through the word “training.”

Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”

Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement, like the one Spotify got in the Music Modernization Act, and preempt accountability for the sources they used.

And not only do they want a safe harbor — they want it for free.  No licenses, no royalties, no dataset audits, no compensation. What do they want?  FREE STUFF.  When do they want it?  NOW!  Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.

The Real Motive Behind the Push

The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.

This is particularly important for books.  Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style. 

Without books, AI labs have no business.  That’s why they steal books.  Very simple, really.

Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and never would.

But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased.  The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.

The Myth of the Inevitable Machine

AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.

If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.

@DanMilmo: Top UK artists urge Starmer to protect their work on eve of Trump visit

UK artists including Paul McCartney, Kate Bush, and Elton John urged Prime Minister Keir Starmer to protect creators ahead of a UK-US tech pact tied to President Donald Trump’s visit. In a letter, they accuse Labour of blocking transparency rules that would force AI firms to disclose training data and warn that proposals enabling training on copyrighted works without permission could let an artist’s life’s work be stolen. Citing human rights instruments like the International Covenant on Economic, Social and Cultural Rights, the Berne Convention, and the European Convention on Human Rights, they frame the issue as a human-rights breach. Peer Beeban Kidron criticised US-heavy working groups. The government says no decision has been made yet and promises a report by March.

Read the post on The Guardian

Martina McBride’s Plea for Artist Protection from AI Met with a Congressional Sleight of Hand

This week, country music icon Martina McBride poured her heart out before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Her testimony in support of the bipartisan NO FAKES Act was raw, earnest, and courageous. Speaking as an artist, a mother, and a citizen, she described the emotional weight of having her voice—one that has offered solace and strength to survivors of domestic violence—exploited by AI systems to peddle messages she would never endorse. Her words echoed through the chamber with moral clarity: “Give me the tools to stop that kind of betrayal.”

The NO FAKES Act aims to create a federal property right over an individual’s name, image, and likeness (NIL), offering victims of AI-generated deepfakes a meaningful path to justice. The bill has drawn bipartisan support and commendation from artists’ rights advocates, child protection organizations, and even some technology companies. It represents a sincere attempt to preserve human dignity in the age of machine mimicry.

And yet, while McBride testified in defense of authenticity and integrity, Congress was quietly advancing legislation that would do the opposite.

At the same time her testimony was being heard, lawmakers were moving forward with a massive federal budget package, ironically called the “Big Beautiful Bill,” that includes an AI safe harbor moratorium—a sweeping provision that would strip states of their ability to enforce NIL protections against AI through existing state laws. The so-called “AI Safe Harbor” effectively immunizes AI developers from accountability under most current state-level right-of-publicity and privacy laws, not to mention wire fraud, wrongful death, and RICO claims. It does so in the name of “innovation,” but at the cost of silencing local democratic safeguards and creators of all categories.

Worse yet, the economic scoring of the “Big Beautiful Bill” rests on assumptions of productivity gains from AI ripping off all creators, from grandma’s baby pictures to rock stars.

The irony is devastating. Martina McBride’s call for justice was sincere and impassioned. But the AI moratorium hanging over the very same legislative session would make it harder—perhaps impossible—for states like Florida, Tennessee, Texas, or California to shield their citizens from the very abuses McBride described. The same Congress that applauded her courage is in the process of handing Silicon Valley a blank check to continue its vulpine, voracious scraping and synthetic exploitation of human expression.

This is not just hypocrisy; it’s the personification of Washington’s two-faced AI policy. On one hand, ceremonial hearings and soaring rhetoric. On the other, buried provisions that serve the interests of the most powerful AI platforms in the world. Oh, and the AI platforms also wrote themselves into the pork fest for $500,000,000 of taxpayers’ money (more likely debt) for “AI modernization,” whatever that is. At a time when the bond market is about to dump all over the U.S. economy. Just another day in the Imperial City.

Let’s be honest: the AI safe harbor moratorium isn’t about protecting innovation. It’s about protecting industrialized theft. It codifies a grotesque and morbid fascination with digital kleptomania—a fetish for the unearned, the repackaged, the replicated.

In that sense, the AI Safe Harbor doesn’t just threaten artists. It perfectly embodies the twisted ethos of modern Silicon Valley, a worldview most grotesquely illustrated by the image of a drooling Sam Altman—the would-be godfather of generative AI—salivating over the limitless data he believes he has a divine right to mine.

Martina McBride called for justice. Congress listened politely. And then threw her to the wolves.

They have a chance to make it right—starting with stripping the radical and extreme safe harbor from the “Big Beautiful Bill.”

[This post first appeared on MusicTechPolicy]

Massive State Opposition to AI Regulation Safe Harbor Moratorium in the ‘One Big Beautiful Bill Act’ (H.R. 1)

As of this morning, the loathsome AI Safe Harbor is still in the ‘One Big Beautiful Bill Act’ (H.R. 1) as far as we can tell. A “manager’s amendment” is supposed to be released this morning that will include any changes made in last night’s late-night session. Watch this space at House Rules for when that manager’s amendment is released. You will be looking for Section 43201, around page 292. As far as we can tell, the safe harbor is still in there, which is not surprising, as it is coming from David Sacks, the Silicon Valley Viceroy and White House Crypto Czar.

But the safe harbor—a 10-year moratorium on state and local regulation of artificial intelligence (AI)—has ignited significant opposition from a broad coalition of state officials, lawmakers, and organizations. As you would expect, opponents argue that the measure would undermine existing protections and hinder the ability of states to address AI-related harms. Even though it was snuck through in the middle of the night, opposition is increasing all the time, but we cannot relent for a moment: Silicon Valley is at it again and wants to hang an AI safe harbor in the lobbyists’ hunting lodge, right next to the DMCA, Section 230, and Title I of the Music Modernization Act.

State-Level Opposition

A bipartisan group of 40 state attorneys general has voiced strong opposition to the AI regulation moratorium through NAAG. In a letter to Congress, they emphasized that the moratorium would disrupt hundreds of measures, both those being considered by state legislatures and those already passed in states led by Republicans and Democrats. They argue that, in the absence of comprehensive federal AI legislation, states have been at the forefront of protecting consumers.

Organizational Opposition

Beyond state officials, a coalition of 141 organizations—including unions, advocacy groups, non-profits, and academic institutions—has expressed alarm over the proposed safe harbor moratorium. In a letter to Congressional leaders, they warned that the provision could lead to unfettered abuse of AI technologies, undermining critical safeguards such as civil rights protections, privacy standards, and accountability for harmful AI applications.

Notable organizations opposing the moratorium include:

  • Alphabet Workers Union
  • Amazon Employees for Climate Justice
  • Mozilla
  • American Federation of Teachers
  • Center for Democracy and Technology 

We don’t ask you to pick up the phone and call your representative in Congress very often, but this is one of those times. If you’re not sure who your representatives are, you can go to the House of Representatives website here and use the representative lookup box in the upper right-hand corner.

You can also go to the 5calls webpage opposing the safe harbor moratorium, which is here. They have developed some collateral and talking points for you to draw on if you like.

This is a big damn deal. Let’s get it done. We’ve all done it before, let’s do it again.