It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example of the state laws at risk. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Ten years or five, either may as well be an eternity in Tech World, particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—Remember the AI moratorium that was defeated 99–1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 presidential hopeful Ted Cruz (fundraising off the moratorium, no doubt, for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times the first time around, the concept is probably unconstitutional for a host of reasons and will no doubt be challenged immediately.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

| Lobby Shop / Organization | Supporters / Funders | Role in Pushing Moratorium | Notes |
| --- | --- | --- | --- |
| INCOMPAS / AI Competition Center (AICC) | Amazon, Google, Meta, Microsoft, telecom/cloud companies | Leads push for 10-year state-law preemption; argues moratorium prevents ‘patchwork’ laws | Identified as central industry driver |
| Consumer Technology Association (CTA) | Big Tech, electronics & platform economy firms | Lobbying for federal preemption; opposed aggressive state AI laws | High influence with Commerce/Appropriations staff |
| American Edge Project | Meta-backed advocacy org | Frames preemption as necessary for U.S. competitiveness vs. China; backed moratorium | Used as indirect political vehicle for Meta |
| Abundance Institute | Tech investors, deregulatory donors | Argues moratorium necessary for innovation; publicly predicts return of moratorium | Messaging aligns with Silicon Valley VCs |
| R Street Institute | Market-oriented donors; tech-aligned funders | Originated ‘learning period’ moratorium concept in 2024 papers by Adam Thierer | Not a lobby shop but provides intellectual framework |
| Corporate Lobbyists (Amazon/Google/Microsoft/Meta/OpenAI/etc.) | Internal lobbying shops + outside firms | Promote ‘uniform national standards’ in Congressional meetings | Operate through and alongside trade groups |

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
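To see how that cost-of-service arithmetic plays out, here is a minimal sketch with purely invented numbers (the customer classes, loads, and upgrade cost are all hypothetical, chosen only to illustrate proportional allocation):

```python
# Hypothetical cost-of-service allocation. All numbers are invented for
# illustration; real ratemaking uses detailed cost-allocation studies.

def allocate_upgrade_cost(upgrade_cost, loads_mw):
    """Spread a grid-upgrade revenue requirement across customer classes in
    proportion to load share: the simplified cost-of-service rule that no
    class can be charged above its allocated share."""
    total = sum(loads_mw.values())
    return {cls: upgrade_cost * mw / total for cls, mw in loads_mw.items()}

# A new AI data center triggers a $500M transmission upgrade. The
# proportional rule caps its charge at its fraction of total load.
shares = allocate_upgrade_cost(
    upgrade_cost=500_000_000,
    loads_mw={"residential": 800, "commercial": 400, "ai_datacenter": 1_000},
)
for cls, cost in shares.items():
    print(f"{cls}: ${cost:,.0f}")
# residential:   $181,818,182
# commercial:    $90,909,091
# ai_datacenter: $227,272,727
```

In this invented example the data center alone made the upgrade necessary, yet the allocation rule leaves existing residential and commercial customers absorbing more than half the cost ($272.7 million of the $500 million).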

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. So Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates the value of its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
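For readers who prefer to check the structure rather than take our word for it, here is a minimal sketch that treats the deals described above as edges in a directed graph and finds the payment cycles. The participants come from the public reporting; the cycle-finding code is only our illustration, and no dollar amounts are modeled:

```python
# Toy model of the flows described above, treated as payer -> payee edges
# in a directed graph.

money_flows = {
    ("Microsoft", "OpenAI"),   # equity investment
    ("OpenAI", "Microsoft"),   # cloud-compute spend
    ("Nvidia", "OpenAI"),      # equity investment
    ("OpenAI", "Nvidia"),      # GPU purchases
    ("Oracle", "Nvidia"),      # GPU purchases
    ("OpenAI", "Oracle"),      # cloud deal
}

def find_loops(edges):
    """Return 2- and 3-party payment cycles: closed loops in which every
    participant's booked 'revenue' is another participant's spending."""
    loops = []
    for p, q in sorted(edges):
        if (q, p) in edges and p < q:
            loops.append((p, q, p))                 # two-party loop
        for r in sorted(x for a, x in edges if a == q):
            if (r, p) in edges and p < q and p < r:
                loops.append((p, q, r, p))          # three-party loop
    return loops

for loop in find_loops(money_flows):
    print(" -> ".join(loop))
# Microsoft -> OpenAI -> Microsoft
# Nvidia -> OpenAI -> Nvidia
# Nvidia -> OpenAI -> Oracle -> Nvidia
```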

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember—property rights are innovation, too; just ask Hernando de Soto (The Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.
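What might such a provenance record look like in practice? Here is a minimal sketch using an entirely hypothetical record format we made up for illustration; no existing standard or regulation is implied:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrainingProvenanceRecord:
    """Hypothetical per-work record an AI developer could be required to
    keep and disclose: what was used, when, from where, and under what
    claimed authority. Field names are illustrative, not any standard."""
    work_title: str
    rights_holder: str      # person or entity claiming rights, if known
    source_url: str         # where the copy was obtained
    acquired_at: datetime   # when it was ingested
    license_basis: str      # e.g. "direct license", "claimed fair use"
    content_hash: str       # fingerprint so the exact copy is identifiable
    datasets: list[str] = field(default_factory=list)  # training sets it feeds

record = TrainingProvenanceRecord(
    work_title="Example Song",
    rights_holder="Example Songwriter",
    source_url="https://example.com/scraped-page",
    acquired_at=datetime(2024, 3, 1, tzinfo=timezone.utc),
    license_basis="none documented",
    content_hash="sha256:...",
    datasets=["pretraining-v3"],
)
print(record.license_basis)  # an auditor could flag "none documented"
```

Even a record this simple would answer the questions artists currently cannot: was my work taken, when, from where, and on what claimed authority.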


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As Artist Rights Institute and many others have demonstrated, “Robots.txt” disclaimers buried in obscure places are not enough. Artists need a legally enforceable system that platforms must respect and that regulators can audit—not another worthless DMCA-style notice and notice and notice and notice and notice and maybe takedown regime.
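To see why robots.txt falls short, recall that it is an honor system: the crawler itself decides whether to consult it at all. A minimal sketch using Python’s standard urllib.robotparser (the bot name and directives below are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical opt-out directives a site might publish:
#   User-agent: ExampleAIBot
#   Disallow: /
rp = RobotFileParser()
rp.parse(["User-agent: ExampleAIBot", "Disallow: /"])

url = "https://example.com/my-songs/lyrics.html"
print(rp.can_fetch("ExampleAIBot", url))  # False: the site said no

# But nothing enforces that answer. A scraper that simply never calls
# can_fetch() ingests the page anyway: no audit trail, no penalty. That
# is the gap between a disclaimer and an enforceable, auditable opt-out.
```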

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing—pirate libraries are front and center in AI litigation, much of which has turned into piracy cases because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. On top of it all, the truth of the matter is that Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

Senator Josh @HawleyMO Throws Down on Big Tech’s Copyright Theft

 I believe Americans should have the ability to defend their human data, and their rights to that data, against the largest copyright theft in the history of the world. 

Millions of Americans have spent the past two decades speaking and engaging online. Many of you here today have online profiles and writings and creative productions that you care deeply about. And rightly so. It’s your work. It’s you.

What if I told you that AI models have already been trained on enough copyrighted works to fill the Library of Congress 22 times over? For me, that makes it very simple: We need a legal mechanism that allows Americans to freely defend those creations. I say let’s empower human beings by protecting the very human data they create. Assign property rights to specific forms of data, create legal liability for the companies who use that data and, finally, fully repeal Section 230. Open the courtroom doors. Let the people sue those who take their rights, including those who do it using AI.

Third, we must add sensible guardrails to the emergent AI economy and hold concentrated economic power to account. These giant companies have made no secret of their ambitions to radically reshape our economic life. So, we ought to require transparency and reporting each time they replace a working man with a machine.

And the government should inspect all of these frontier AI systems, so we can better understand what the tech titans plan to build and deploy. 

Ultimately, when it comes to guardrails, protecting our children should be our lodestar. You may have seen recently how Meta green-lit its own chatbots to have sensual conversations with children—yes, you heard me right. Meta’s own internal documents permitted lurid conversations that no parent would ever contemplate. And most tragically, ChatGPT recently encouraged a troubled teenager to commit suicide—even providing detailed instructions on how to do it.

We absolutely must require and enforce rigorous technical standards to bar inappropriate or harmful interactions with minors. And we should think seriously about age verification for chatbots and agents. We don’t let kids drive or drink or do a thousand other harmful things. The same standards should apply to AI.

Fourth and finally, while Congress gets its act together to do all of this, we can’t kneecap our state governments from moving first. Some of you may have seen that there was a major effort in Congress to ban states from regulating AI for 10 years—and a whole decade is an eternity when it comes to AI development and deployment. This terrible policy was nearly adopted in the reconciliation bill this summer, and it could have thrown out strong anti-porn and child online safety laws, to name a few. Think about that: conservatives out to destroy the very concept of federalism that they cherish … all in the name of Big Tech. Well, we killed it on the Senate floor. And we ought to make sure that bad idea stays dead.

We’ve faced technological disruption before—and we’ve acted to make technology serve us, the people. Powered flight changed travel forever, but you can’t land a plane on your driveway. Splitting the atom fundamentally changed our view of physics, but nobody expects to run a personal reactor in their basement. The internet completely recast communication and media, but YouTube will still take down your video if you violate a copyright. By the same token, we can—and we should—demand that AI empower Americans, not destroy their rights . . . or their jobs . . . or their lives.

Don’t forget tomorrow—Artist Rights Roundtable on AI and Copyright at American University in Washington DC

Artist Rights Roundtable on AI and Copyright: 
Coffee with Humans and the Machines     

Join the Artist Rights Institute (ARI) and the Entertainment Business Program at American University’s Kogod School of Business for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

This roundtable is particularly timely because both the Bartz and Kadrey rulings expose gaps in author consent, provenance, and fair licensing, underscoring an urgent need for policy, identifiers, and enforceable frameworks to protect creators.

 🗓️ Date: September 18, 2025
🕗 Time: 8:00 a.m. – 12:00 noon
📍 Location: Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016

🎟️ Admission: Free and open to the public. Registration required at Eventbrite. Seating is limited.

🅿️ Parking map is available here. Pay-As-You-Go parking is available in hourly or daily increments ($2/hour, or $16/day) using the pay stations in the elevator lobbies of Katzen Arts Center, East Campus Surface Lot, the Spring Valley Building, Washington College of Law, and the School of International Service.

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by KOGOD Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing:

Speakers:

  • Dr. Moiya McTier, Senior Advisor, Human Artistry Campaign
  • Ryan Lehnning, Assistant General Counsel, International at SoundExchange
  • The Chatbot

Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation

  • Speaker: Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:

  • Erin McAnally, Executive Director, Songwriters of North America
  • Jen Jacobsen, Executive Director, Artist Rights Alliance
  • Josh Hurvitz, Partner, NVG and Head of Advocacy for A2IM
  • Kevin Amer, Chief Legal Officer, The Authors Guild

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, KOGOD School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

  • Speaker: George York, SVP, International Policy, Recording Industry Association of America

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch this space and visit Eventbrite for updates and speaker announcements.

@RickBeato on AI Artists

Is it a thing or is it disco? Our fave Rick Beato has a cautionary tale in this must-watch video: AI can mimic but not truly create art. As generative tools get more prevalent, he urges thoughtful curation, artist-centered policies, and an emphasis on emotionally rich, human-driven creativity–also known as creativity. h/t Your Morning Coffee, our favorite podcast.

@Unite4Copyright: Say “No” to Unlicensed AI Training

The biggest of Big Tech are scraping everything they can snarf down to train their AI–that means your Facebook, Instagram, YouTube, websites, Reddit, the works. Congress has to stop this–if you are as freaked out about this as we are, join the Copyright Alliance letter campaign here. It just takes a minute to send a personalized letter to Congress and the White House urging policymakers to protect creators’ rights and ensure fair compensation in the AI era.

@human_artistry Campaign Letter Opposing AI Safe Harbor Moratorium in Big Beautiful Bill HR 1

Artist Rights Institute is pleased to support the Human Artistry Campaign’s letter to Senators Thune and Schumer opposing the AI safe harbor in the One Big Beautiful Bill Act. ARI joins with the letter’s many other signatories.

Opposition is rooted in the most justifiable of reasons:

By wiping dozens of state laws off the books, the bill would undermine public safety, creators’ rights, and the ability of local communities to protect themselves from a fast-moving technology that is being rushed to the market by tech giants. State laws protecting people from invasive AI deepfakes would be at risk, along with a range of proposals designed to eliminate discrimination and bias in AI. For artists and creators, preempting state laws requiring Big tech to disclose the material they used to train their models, often to create new products that compete with the human creators’ originals, would make it difficult or impossible to prove this theft has occurred. As the Copyright Office’s Fair Use Report recently reaffirmed, many forms of this conduct are illegal under longstanding federal law. 

The moratorium is so vague that it is unclear whether it would actually prohibit states from addressing the construction of data centers or the vast drain that AI deployment places on the power grid. This is a safe harbor on steroids and terrible for all creators.

Martina McBride’s Plea for Artist Protection from AI Met with a Congressional Sleight of Hand

This week, country music icon Martina McBride poured her heart out before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Her testimony in support of the bipartisan NO FAKES Act was raw, earnest, and courageous. Speaking as an artist, a mother, and a citizen, she described the emotional weight of having her voice—one that has offered solace and strength to survivors of domestic violence—exploited by AI systems to peddle messages she would never endorse. Her words echoed through the chamber with moral clarity: “Give me the tools to stop that kind of betrayal.”

The NO FAKES Act aims to create a federal property right over an individual’s name, image, and likeness (NIL), offering victims of AI-generated deepfakes a meaningful path to justice. The bill has drawn bipartisan support and commendation from artists’ rights advocates, child protection organizations, and even some technology companies. It represents a sincere attempt to preserve human dignity in the age of machine mimicry.

And yet, while McBride testified in defense of authenticity and integrity, Congress was quietly advancing legislation that was the opposite.

At the same time her testimony was being heard, lawmakers were moving forward with a massive federal budget package ironically called the “Big Beautiful Bill” that includes an AI safe harbor moratorium—a sweeping provision that would strip states of their ability to enforce NIL protections against AI through existing state laws. The so-called “AI Safe Harbor” effectively immunizes AI developers from accountability under most current state-level right-of-publicity and privacy laws, not to mention wire fraud, wrongful death and RICO. It does so in the name of “innovation,” but at the cost of silencing local democratic safeguards and creators of all categories.

Worse yet, the economic scoring of the “Big Beautiful Bill” rests on assumptions of productivity gains from AI ripping off all creators, from grandma’s baby pictures to rock stars.

The irony is devastating. Martina McBride’s call for justice was sincere and impassioned. But the AI moratorium hanging over the very same legislative session would make it harder—perhaps impossible—for states like Florida, Tennessee, Texas, or California to shield their citizens from the very abuses McBride described. The same Congress that applauded her courage is in the process of handing Silicon Valley a blank check to continue the vulpine lust of its voracious scraping and synthetic exploitation of human expression.

This is not just hypocrisy; it’s the personification of Washington’s two-faced AI policy. On one hand, ceremonial hearings and soaring rhetoric. On the other, buried provisions that serve the interests of the most powerful AI platforms in the world. Oh, and the AI platforms also wrote themselves into the pork fest for $500,000,000 of taxpayers’ money (more likely debt) for “AI modernization,” whatever that is, at a time when the bond market is about to dump all over the U.S. economy. Just another day in the Imperial City.

Let’s be honest: the AI safe harbor moratorium isn’t about protecting innovation. It’s about protecting industrialized theft. It codifies a grotesque and morbid fascination with digital kleptomania—a fetish for the unearned, the repackaged, the replicated.

In that sense, the AI Safe Harbor doesn’t just threaten artists. It perfectly embodies the twisted ethos of modern Silicon Valley, a worldview most grotesquely illustrated by the image of a drooling Sam Altman—the would-be godfather of generative AI—salivating over the limitless data he believes he has a divine right to mine.

Martina McBride called for justice. Congress listened politely. And then gave her to the wolves.

They have a chance to make it right—starting with stripping the radical and extreme safe harbor from the “Big Beautiful Bill.”

[This post first appeared on MusicTechPolicy]