Big Beautiful AI Safe Harbor: Does David Sacks Want to Make America Screwed Again?

In a dramatic turn of events, Congress is quietly advancing a 10-year federal safe harbor for Big Tech that would block any state and local regulation of artificial intelligence (AI). That safe harbor would give Big Tech another free ride on the backs of artists, authors, consumers, all of us and our children. It would stop cold the enforcement of state consumer protection laws like the one behind the $1.375 billion settlement Google reached with the State of Texas last week for grotesque violations of user privacy. The bill would go up on Big Tech’s trophy wall right next to the DMCA, Section 230, and Title I of the Music Modernization Act.

Introduced through the House Energy and Commerce Committee as part of a broader legislative package branded with President Trump’s economic agenda, this safe harbor would prevent states from enforcing or enacting any laws that address the development, deployment, or oversight of AI systems. While couched as a measure to ensure national uniformity and spur innovation, the proposal carries serious consequences for consumer protection, data privacy, and state sovereignty. It threatens to erase hard-fought state-level protections that shield Americans from exploitative child snooping, data scraping, biometric surveillance, and the unauthorized use of personal data and creative works of every kind. This post unpacks how we got here, why it matters, and what can still be done to stop it.

The Origins of the New Safe Harbor

The roots of the latest AI safe harbor lie in a growing push from Silicon Valley-aligned political operatives and venture capital influencers, many of whom fear a patchwork of state-level consumer protection laws that would stop AI data scraping. Among the most vocal proponents is tech entrepreneur turned White House crypto czar David Sacks, who has advocated for federal preemption of state AI rules in order to protect startup innovation from what he and others call regulatory overreach, otherwise known as the states’ “police powers” to protect their residents.

If my name were “Sacks,” I’d probably be a bit careful about doing things that could get me fired. His influence reportedly played a role in shaping the safe harbor’s timing and language, leveraging connections on Capitol Hill to attach it to a larger pro-business package of legislation. That package—marketed as a pillar of President Trump’s economic plan—was seen as a convenient vehicle to slip controversial provisions through with minimal scrutiny. You know, let’s sneak one past the boss.

Why This Is Dangerous for Consumers and Creators

The most immediate danger of the AI safe harbor is its preemption of state protections at a time when AI technologies are accelerating unchecked. States like California, Illinois, and Virginia have enacted—or are considering—laws to limit how companies use AI to analyze facial features, scan emails, extract audio, or mine creative works from social media. The AI mantra is that they can snarf down “publicly available data,” which essentially means everything that’s not behind a paywall. Because there is no federal AI regulation yet, state laws are crucial for protecting vulnerable populations, including children whose photos and personal information are shared online by their parents. Under the proposed AI safe harbor, such protections would be nullified for 10 years, and don’t think it won’t be renewed.

Without the ability to regulate AI at the state level, we could see our biometric data harvested without consent. Social media posts—including photos of babies, families, and school events—could be scraped and used to train commercial AI systems without transparency or recourse. Creators across all copyright categories could find their works ingested into large language models and generative tools without license or attribution. Emails and other personal communications could be fed into AI systems for profiling, advertising, or predictive decision-making without oversight.

While federal regulation of AI is certainly coming, this AI safe harbor includes no immediate substitute. Instead, it freezes state-level regulatory development entirely for a decade—an eternity in the technology world—during which the richest companies in the history of commerce can entrench themselves further with little fear of accountability. And it will likely provide a blueprint for federal legislation when it comes.

A Strategic Misstep for Trump’s Economic Agenda: Populism or Make America Screwed Again?

Ironically, attaching the moratorium to a legislative package meant to symbolize national renewal may ultimately undermine the very populist and sovereignty-based themes that President Trump has championed. By insulating Silicon Valley firms from state scrutiny, the legislation effectively prioritizes the interests of data-rich corporations over the privacy and rights of ordinary Americans. It hands a victory to unelected tech executives and undercuts the authority of governors, state legislators, and attorneys general who have stepped in where federal law has lagged behind. So much for that “laboratories of democracy” jazz.

Moreover, the manner in which the safe harbor was advanced—slipped into what is supposed to be a reconciliation bill without extensive hearings or stakeholder input—is classic pork and classic Beltway maneuvering in smoke-filled rooms. Critics from across the political spectrum have noted that such tactics cheapen the integrity of any legislation they touch and reflect the worst of Washington horse-trading.

What Can Be Done to Stop It

The AI safe harbor is not a done deal. There are several procedural and political tools available to block or remove it from the broader legislative package.

1. Committee Intervention – Lawmakers on the House Energy and Commerce Committee or the Rules Committee can offer amendments to strip or revise the moratorium before it proceeds to the full House.
2. House Floor Action – Opponents of the moratorium can offer floor amendments during debate to strike the provision. This requires coordination and support from members across both parties.
3. Senate “Byrd Rule” Challenge and Holds – Because reconciliation bills must be budget-related, the Senate Parliamentarian can strike the safe harbor if it’s deemed “extraneous” to the budget, which it certainly seems to be. Senators can formally raise this challenge.
4. Conference Committee Negotiation – If different versions of the legislation pass the House and Senate, the final language will be hashed out in conference. There is still time to remove the moratorium here.
5. Public Advocacy – Artists, parents, consumer advocates, and especially state officials can apply pressure through media, petitions, and direct outreach to lawmakers, highlighting the harms and democratic risks of federal preemption. States may be able to sue to block the safe harbor as unconstitutional (see Chris’s discussion of constitutionality), but let’s not wait to get to that point. It must be said that any such litigation poses a threat to Trump’s “Big Beautiful Bill,” courtesy of David Sacks.

Conclusion

The AI safe harbor may have been introduced quietly, but there’s a growing backlash from all corners. Its consequences would be anything but subtle. If enacted, it would freeze innovation in AI accountability, strip states of their ability to protect residents, and expose Americans to widespread digital exploitation. While marketed as pro-innovation, the safe harbor looks more like a gift to data-hungry monopolies at the expense of federalist principles and individual rights.

It’s not too late to act, but doing so requires vigilance, transparency, and an insistence that even the most powerful Big Tech oligarchs remain subject to democratic oversight.

@ArtistRights Newsletter 4/14/25

The Artist Rights Watch podcast returns for another season! This week’s episode, “AI Legislation: A View from Europe,” features Helienne Lindvall, President of the European Composer and Songwriter Alliance (ECSA), and ARI Director Chris Castle in conversation about current issues for creators under the EU AI Act and the UK text and data mining legislation. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short, confidential SurveyMonkey survey here! Thanks!

AI Litigation: Kadrey v. Meta

Law Professors Reject Meta’s Fair Use Defense in Friend of the Court Brief

Ticketing
Viagogo failing to prevent potentially unlawful practices; listings on resale site suggest scalpers are speculatively selling tickets they do not yet have (Rob Davies/The Guardian)

ALEC Astroturf Ticketing Bill Surfaces in North Carolina Legislation

ALEC Ticketing Bill Surfaces in Texas to Rip Off Texas Artists (Chris Castle/MusicTechPolicy)

International AI Legislation

Brazil’s AI Act: A New Era of AI Regulation (Daniela Atanasovska and Lejla Robeli/GDPR Local)

Why robots.txt won’t get it done for AI Opt Outs (Chris Castle/MusicTechPolicy)

Feature Translation: How has the West’s misjudgment of China’s AI ecosystem distorted the global technology competition landscape? (Jeffrey Ding/ChinAI)

Unethical AI Training Harms Creators and Society, Argues AI Pioneer (Ed Nawotka/Publishers Weekly) 

AI Ethics

Céline Dion Calls Out AI-Generated Music Claiming to Feature the Iconic Singer Without Her Permission (Marina Watts/People)

Splice CEO Discusses Ethical Boundaries of AI in Music​ (Nilay Patel/The Verge)

Spotify’s Bold AI Gamble Could Disrupt The Entire Music Industry (Bernard Marr/Forbes)

Books

Apple in China: The Capture of the World’s Greatest Company by Patrick McGee (Coming May 13)

PRESS RELEASE: @Human_Artistry Campaign Endorses NO FAKES Act to Protect Personhood from AI

For Immediate Release

HUMAN ARTISTRY CAMPAIGN ENDORSES NO FAKES ACT

Bipartisan Bill Reintroduced by Senators Blackburn, Coons, Tillis, & Klobuchar and Representatives Salazar, Dean, Moran, Balint and Colleagues

Creates New Federal Right for Use of Voice and Visual Likeness
in Digital Replicas

Empowers Artists, Voice Actors, and Individual Victims to Fight Back Against
AI Deepfakes and Voice Clones

WASHINGTON, DC (April 9, 2025) – Amid global debate over guardrails needed for AI, the Human Artistry Campaign today announced its support for the reintroduced “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2025” (“NO FAKES Act”) – landmark legislation giving every person an enforceable new federal intellectual property right in their image and voice. 

Building off the original NO FAKES legislation introduced last Congress, the updated bill was reintroduced today by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC), and Amy Klobuchar (D-MN), alongside Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1), and Becca Balint (D-VT-At Large) and bipartisan colleagues.

The legislation sets a strong federal baseline protecting all Americans from invasive AI-generated deepfakes flooding digital platforms today. From young students bullied by non-consensual sexually explicit deepfakes to families scammed by voice clones to recording artists and performers replicated to sing or perform in ways they never did, the NO FAKES Act provides powerful remedies requiring platforms to quickly take down unconsented deepfakes and voice clones and allowing rightsholders to seek damages from creators and distributors of AI models designed specifically to create harmful digital replicas.

The legislation’s thoughtful, measured approach preserves existing state causes of action and rights of publicity, including Tennessee’s groundbreaking ELVIS Act. It also contains carefully calibrated exceptions to protect free speech, open discourse and creative storytelling – without trampling the underlying need for real, enforceable protection against the vast range of invasive and harmful deepfakes and voice clones.

Human Artistry Campaign Senior Advisor Dr. Moiya McTier released the following statement in support of the legislation:

“The Human Artistry Campaign stands for preserving essential qualities of all individuals – beginning with a right to their own voice and image. The NO FAKES Act is an important step towards necessary protections that also support free speech and AI development. The Human Artistry Campaign commends Senators Blackburn, Coons, Tillis, and Klobuchar and Representatives Salazar, Dean, Moran, Balint, and their colleagues for shepherding bipartisan support for this landmark legislation, a necessity for every American to have a right to their own identity as highly realistic voice clones and deepfakes become more pervasive.”

– Dr. Moiya McTier, Human Artistry Campaign Senior Advisor

By establishing clear rules for the new federal voice and image right, the NO FAKES Act will power innovation and responsible, pro-human uses of powerful AI technologies while providing strong protections for artists, minors and others. This important bill has cross-sector support from Human Artistry Campaign members and companies such as OpenAI, Google, Amazon, Adobe and IBM. The NO FAKES Act is a strong step forward for American leadership that erects clear guardrails for AI and real accountability for those who reject the path of responsibility and consent.

Learn more & let your representatives know Congress should pass the NO FAKES Act here.

# # #

ABOUT THE HUMAN ARTISTRY CAMPAIGN: The Human Artistry Campaign is the global initiative for the advancement of responsible AI – working to ensure it develops in ways that strengthen the creative ecosystem, while also respecting and furthering the indispensable value of human artistry to culture. Across 34 countries, more than 180 organizations have united to protect every form of human expression and creative endeavor they represent – journalists, recording artists, photographers, actors, songwriters, composers, publishers, independent record labels, athletes and more. The growing coalition champions seven core principles for keeping human creativity at the center of technological innovation. For further information, please visit humanartistrycampaign.com

@human_artistry Calls Out AI Voice Cloning

Here’s just one reason why we can’t trust Big Tech with opt-outs (or really any other safeguard that stops them from doing what they want to do)

@ArtistRights Institute’s UK Government Comment on AI and Copyright: Why Can’t Creators Call 911?

We will be posting excerpts from the Artist Rights Institute’s comment in the UK’s Intellectual Property Office proceeding on AI and copyright. That proceeding is called a “consultation” where the Office solicits comments from the public (wherever located) about a proposed policy.

In this case it was the UK government’s proposal to require creators to “opt out” of AI data scraping by expanding the UK law governing “text and data mining,” which is what Silicon Valley wants in a big way. The idea produced an enormous backlash from the creative community that we’ll also be covering in coming weeks, as it’s very important that Trichordist readers be up to speed on the latest skulduggery by Big Tech in snarfing down all the world’s culture to train their AI (which has already happened and now has to be undone). For a backgrounder on the “text and data mining” controversy, watch this video by George York of the Digital Creators Coalition speaking at the Artist Rights Institute in DC.

In this section of the comment we offer a simple rule of thumb or policy guideline by which to measure the Government’s rules (which could equally apply in America): Can an artist file a criminal complaint against someone like Sam Altman?

If an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.

Why Can’t Creators Call 999 [or 911]?

We suggest a very simple policy guideline—if an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail. Alternatively, if an artist can call the police and file a criminal complaint against a Sam Altman or a Sergey Brin for criminal copyright infringement, now we are getting somewhere.

This requires that there be a clear “red light/green light” instruction that can easily be understood and applied by a beat copper. This may seem harsh, but in our experience with the trillion-dollar market cap club, the only thing that gets their attention is a legal remedy that changes behavior rather than one that merely awards damages. Our experience suggests that what gets their attention most quickly is either an injunction to stop the madness or prison to punish the wrongdoing.

As a threshold matter, it is clear that AI platforms intend to continue scraping all the world’s culture for their own purposes without obtaining consent or notifying rightsholders. It is likely that the bigger platforms already have. For example, we have found our own writings included in Copilot outputs. Not only did we not consent to that use, but we were also never asked. Moreover, Copilot’s use of these works clearly violates our terms of service. This level of content scraping is hardly what was contemplated by the “data mining” exceptions.

@human_artistry Press Release: Senators Introduce COPIED Act to Combat AI Deepfakes

Senators Cantwell, Blackburn, and Heinrich introduce the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), Giving Artists New Tools to Protect Against Deepfakes
“Deepfakes pose an existential threat to our culture and society, making it hard to believe what we see and hear and leaving individual creators vulnerable as tech companies use our art without consent while AI-generated content leads to confusion about what is real. Requiring transparency is a meaningful step that will help protect us all – ensuring that nonconsensual, harmful content can be removed quickly and providing a clear origin when our life’s work has been used.” 
– Dr. Moiya McTier, Human Artistry Campaign Senior Advisor
With widespread creative community support from organizations including the Artist Rights Alliance, SAG-AFTRA, the Recording Academy, RIAA, NMPA, NSAI, and more, the bill would set new federal transparency guidelines for marking, authenticating and detecting AI-generated content, protect journalists, actors and artists against AI-driven theft, and hold violators accountable for abuses.  

Creates Transparency Standards: Requires the National Institute of Standards and Technology (NIST) to develop guidelines and standards for content provenance information, watermarking and synthetic content detection. These standards will promote transparency to identify if content has been generated or manipulated by AI, as well as where AI content originated. The bill also directs NIST to develop cybersecurity measures to prevent tampering with provenance and watermarking on AI content. 

Puts Journalists, Artists and Musicians in Control of Their Content: Requires providers of AI tools used to generate creative or journalistic content to allow owners of that content to attach provenance information to it and prohibits its removal. The bill prohibits the unauthorized use of content with provenance information to train AI models or generate AI content. These measures give content owners—journalists, newspapers, artists, songwriters, and others—the ability to protect their work and set the terms of use for their content, including compensation. 

Gives Individuals a Right to Sue Violators: Authorizes the Federal Trade Commission (FTC) and state attorneys general to enforce the bill’s requirements.  It also gives newspapers, broadcasters, artists, and other content owners the right to bring suit in court against platforms or others who use their content without permission.

Prohibits Tampering with or Disabling AI Provenance Information: Currently, there is no law that prohibits removing, disabling, or tampering with content provenance information. The bill prohibits anyone, including internet platforms, search engines and social media companies, from interfering with content provenance information in these ways.  
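
To make the provenance and anti-tampering provisions above concrete, here is a minimal sketch of why cryptographically bound provenance is tamper-evident. This is purely illustrative: it assumes a shared signing key and invented record format, and is not the NIST standard (which the bill would direct NIST to develop) nor any existing platform’s implementation.

```python
# Hypothetical sketch: binding an origin claim to content with an HMAC so
# that altering either the content or the provenance record breaks
# verification. Illustration only -- not the (yet-to-be-written) NIST scheme.
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-signing-key"  # assumed; real schemes use PKI

def attach_provenance(content: bytes, origin: str) -> dict:
    """Create a provenance record tied to a hash of the content."""
    digest = hashlib.sha256(content).hexdigest()
    record = f"{origin}|{digest}"
    tag = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return {"origin": origin, "content_hash": digest, "tag": tag}

def verify_provenance(content: bytes, prov: dict) -> bool:
    """Recompute the record from the content; any tampering fails the check."""
    record = f"{prov['origin']}|{hashlib.sha256(content).hexdigest()}"
    expected = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, prov["tag"])

recording = b"...master recording bytes..."
prov = attach_provenance(recording, "artist:example")
print(verify_provenance(recording, prov))         # True: intact
print(verify_provenance(recording + b"x", prov))  # False: content was altered
```

The point of the bill’s anti-tampering provision is that stripping or editing a record like this one would itself be unlawful, not merely detectable.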

Search is Ripping Us Off Again, this time for AI

By Chris Castle

In case you think that infringement of copyrights in AI training happens only to rock stars, photographers, record companies or movie studios, let me give you a personal anecdote that might cause you to reconsider. I asked Bing’s Copilot AI tool to list the deal points in a record producer agreement, and this is what I got back:

Copilot

Certainly! When negotiating a producer agreement, several key points should be considered. Here are some common negotiation points from the artist’s perspective:

  1. Scope of Engagement:
    1. The producer is generally engaged for one album project at a time.
    2. Clarify the scope of work, including the number of songs or recordings to be produced.
  2. Budget and Compensation:
    1. Determine the budget for the project (e.g., $10,000 in your case).
    2. Agree on the producer’s compensation, which may include an upfront fee, points on sales, or a combination.
  3. Ownership and Rights:
    1. Address ownership of the master recordings:
      1. Will the artist own and control the masters?
      2. Does the producer retain any rights?
  4. Credits and Royalties:
    1. Specify how the producer will be credited on releases.
    2. Discuss any royalties or backend points for the producer.
  5. Recording Schedule:
    1. Set a timeline for recording sessions.
    2. Define the two-week period for your project.
  6. Logistics:
    1. Determine where the recordings will take place.
    2. Outline the artist’s approval process.
    3. Define the producer’s deliverables.
  7. Termination Clause:
    1. Include provisions for termination if either party breaches the agreement.
    2. Address payment for work completed up to that point.

Remember that each negotiation is unique, and it’s essential to tailor the agreement to your specific needs and circumstances. Legal advice is recommended to ensure compliance with local laws and industry standards [1][3][4].

Source(s)

1. An Artist’s Guide to Negotiating Record Producer Agreements – Medium

2. Negotiating a Music Producer Agreement: 7 Key Issues

3. 5 Things to Look for in a Music Producer Agreement – Lawyer Drummer

4. How to Negotiate: Steps, Phrases & Tips to Win the Deal – Selling Signals

Notice that Copilot lists “Source(s)” and that source #1 is an article from the Medium website.

My article.

I’ve written a number of articles about negotiating producer agreements, and I can categorically say I haven’t given permission, or been asked to give permission, for any of those articles to be scraped and used to train AI. So how did my work come to be in Copilot’s training materials?

My guess is that Medium (and probably a lot of other sites) was crawled by Bing’s spider and that copies of Medium pages were stored in Bing’s (or Microsoft’s) data centers. When Microsoft launched its Copilot AI tool, one can imagine the temptation was strong to use those copies as training material for Copilot. In fact, the Verge reports that “companies around the web are using your site and its data to build massive sets of training data, in order to build models and products that may not acknowledge your existence at all.” Can you say Section 230 or DMCA? Maybe there will be a question at the April 11 House Energy & Commerce hearing on Section 230.
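
For readers wondering what actually stands between a site like Medium and a crawler, the answer today is mostly robots.txt, which is purely advisory. Here is a minimal sketch using only Python’s standard library; the crawler tokens (GPTBot, Google-Extended) are ones the AI companies have published, while the site URL is hypothetical.

```python
# Sketch of why robots.txt "opt outs" are advisory: a well-behaved crawler
# checks these rules, but nothing technically stops one that ignores them,
# and an unlisted or renamed crawler is allowed by default.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "Google-Extended", "SomeBrandNewScraper"):
    allowed = rp.can_fetch(agent, "https://example-artist-site.com/essays")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
# GPTBot: blocked / Google-Extended: blocked / SomeBrandNewScraper: allowed
```

Which is exactly the point of “Why robots.txt won’t get it done for AI Opt Outs” above: the default is allow, and compliance is voluntary.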

So Google-style “innovation” has a self-destructive flavor consistent with their oikophobic capitalism. Greater Silicon Valley views culture in general and copyright in particular as sources of wealth to be extracted through destruction: you know, creative destruction, permissionless innovation, etc. (It’s no wonder Google was inexorably attracted to defense contracting despite all the “don’t be evil” hoorah. After all, what creates massive wealth faster than convincing governments to pay big money to blow things up that must then be replaced with ever more big money to blow even more things up?)

Are you surprised, then, that two of the biggest operators in the AI space are the search engine operators Google and Microsoft? This is another example of how Big Tech helps itself to your data and work product without you even knowing it’s happening. So now what? I now know I’m being ripped off, and I’m wondering if Medium is in on it.

The Verge tells us:

The ability to download, store, organize, and query the modern internet gives any company or developer something like the world’s accumulated knowledge to work with. In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, have made high-quality training data one of the internet’s most valuable commodities. That has caused internet providers of all sorts to reconsider the value of the data on their servers, and rethink who gets access to what. 

Ya think?

What would Lars say? Artificial Intelligence: Nobel or RICO?

The true promise of AI does not require violating the copyrights and rights of publicity of writers, artists, photographers, voice actors, and the rest. You know, stuff like reading MRIs and X-rays, developing pharmaceuticals, advanced compounds, new industrial processes, etc.

All the shitty aspects of AI DO require intentional mass copyright infringement (a RICO predicate, BTW). You know, stuff like bots, deepfakes, autogenerated “yoga mat” music, SEO manipulation, autogenerated sports coverage, commercial chatbots, fake student papers, graphic artist knockoffs, robot voice actors, etc. But that’s where the no-value-add-parasitic-free-rider-easy-money is to be made. That’s why the parasitic free-riding VCs and private equity want a “fair use” copyright exemption.

Policy makers should understand that if they want to reduce the potential harms of AI, they need to protect and reinforce the intellectual property rights of individuals. It is a natural (and already existing) brake on harmful AI. What we don’t need is legislative intervention that makes it easier to infringe IP rights and then tries to mitigate the easily predictable and obvious harms with additional regulation.

This is what happened with Napster and internet 1.0. The DMCA copyright infringement safe harbor for platforms unleashed all sorts of negative externalities that were never fairly mitigated by subsequent regulation.

Why do songwriters get $0.0009 a stream on streaming platforms? Because the platforms used the threat of DMCA copyright safe harbor abuse by “bad actors” (often connected to the “good actors” via shared board members and investors*) to create a market failure that destroyed the value of songs. To “fix” the problem, federal legislation tasks the Copyright Royalty Board at the Library of Congress with setting royalty rates and forces songwriters to license to the digital platforms (songwriters cannot opt out). The rate-setting process was inevitably captured by the tech companies, and that’s how you end up with $0.0009 per stream.
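
To put that $0.0009 in perspective, here is a back-of-the-envelope calculation; the stream count and the 50/50 co-write split are assumptions chosen purely for scale.

```python
# Back-of-the-envelope arithmetic on the $0.0009-per-stream figure cited
# above. The stream count and the two-writer split are illustrative
# assumptions, not reported data.
RATE = 0.0009          # dollars per stream
streams = 1_000_000    # a million streams -- a real achievement

gross = streams * RATE           # $900.00 before any splits
per_writer = gross / 2           # $450.00 each for two co-writers
print(f"Gross on {streams:,} streams: ${gross:,.2f}")
print(f"Each of two co-writers:      ${per_writer:,.2f}")
```

A million streams grosses nine hundred dollars at that rate, before publisher splits. That is the market failure in one line.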

TBF, the DMCA safe harbor requires the platforms to accommodate “standard technical measures” to prevent unlicensed use of copyrighted works, but this part of the DMCA was never implemented and the federal government never bothered to enforce it. This is the Napster playbook all over again.

1. Unleash a technology that you know will be exploited by bad actors**.

2. Ask for federal intervention that essentially legalizes the infringing behavior.

3. The federal legislation effectively creates a private monopoly or duopoly.

4. Trillions of dollars in wealth are transferred from creators to a tiny cabal of no-value-add-parasitic-free-rider-easy-money VCs in Silicon Valley.

5. Lots of handwringing about the plight of creators.

6. Bullshit legislation that claims to help creators but actually mandates a below-market rate for creators.

The funny thing is that Lars Ulrich was right about Napster. [See our 2012 post Lars Was First and Lars Was Right.] At the time he was vilified by what was in reality a coordinated campaign run by a DC communications firm (working for Silicon Valley VCs) that masqueraded as a grassroots operation.

But go back and watch the Charlie Rose debate between Lars Ulrich and Chuck D: everything Lars Ulrich said was going to happen, happened.

If Lars Ulrich hadn’t been cowed by a coordinated campaign by no-value-add-parasitic-free-rider-easy-money Silicon Valley VCs, he’d probably say the same thing about AI.

And he’d be right again.