United for Artists’ Rights: Amicus Briefs Filed in Vetter v. Resnik Support Global Copyright Termination for Songwriters and Authors: The Authors Guild, Inc., Dramatists Legal Defense Fund, Inc., Novelists, Inc., Romance Writers Of America, Inc., Society Of Composers & Lyricists, Inc. and Songwriters Guild Of America, Inc.

In Vetter v. Resnik, songwriter Cyril Vetter won at trial in Baton Rouge, allowing him to recover worldwide rights in his song “Double Shot of My Baby’s Love” after serving a 35-year termination notice on his former publisher, Resnik Music Group. The publisher appealed. The case is now before the Fifth Circuit Court of Appeals, which is weighing whether U.S. copyright termination rights include “foreign” territories, a question that strikes at the heart of artists’ ability to reclaim their work worldwide (whatever “foreign” means).

If you need an explainer, Cyril’s attorney Tim Kappel breaks down the case:

An astonishing number of friend-of-the-court briefs were filed by songwriter and author groups. We’re going to post them all; today’s brief is from The Authors Guild, Inc., Dramatists Legal Defense Fund, Inc., Novelists, Inc., Romance Writers Of America, Inc., Society Of Composers & Lyricists, Inc. and Songwriters Guild Of America, Inc.

We believe the answer must be yes. Congress gave creators and their heirs the right to regain control of their work after decades, and that promise means little if global rights are excluded. The outcome of this case could either reaffirm that promise—or open the door for multinational publishers to sidestep it entirely.

That’s why we’re sharing friend-of-the-court briefs from across the creative communities. Each one brings a different perspective, but all defend the principle that artists deserve a real, global right to take back what’s theirs because, as Chris said, Congress did not give authors a second bite at half the apple.

Read the latest amicus brief below.

Big Beautiful AI Safe Harbor: Does David Sacks Want to Make America Screwed Again?

In a dramatic turn of events, Congress is quietly advancing a 10-year federal safe harbor for Big Tech that would block any state and local regulation of artificial intelligence (AI). That safe harbor would give Big Tech another free ride on the backs of artists, authors, and consumers: all of us and our children. It would stop cold the enforcement of state laws that protect consumers, like the $1.370 billion settlement Google reached with the State of Texas last week for grotesque violations of user privacy. The bill would go up on Big Tech’s trophy wall right next to the DMCA, Section 230 and Title I of the Music Modernization Act.

Introduced through the House Energy and Commerce Committee as part of a broader legislative package branded with President Trump’s economic agenda, this safe harbor would prevent states from enforcing or enacting any laws that address the development, deployment, or oversight of AI systems. While couched as a measure to ensure national uniformity and spur innovation, this proposal carries serious consequences for consumer protection, data privacy, and state sovereignty. It threatens to erase hard-fought state-level protections that shield Americans from exploitative child snooping, data scraping, biometric surveillance, and the unauthorized use of personal and all creative works. This post unpacks how we got here, why it matters, and what can still be done to stop it.

The Origins of the New Safe Harbor

The roots of the latest AI safe harbor lie in a growing push from Silicon Valley-aligned political operatives and venture capital influencers, many of whom fear a patchwork of state-level consumer protection laws that would stop AI data scraping. Among the most vocal proponents is tech entrepreneur turned White House crypto czar David Sacks, who has advocated for federal preemption of state AI rules in order to protect startup innovation from what he and others call regulatory overreach, otherwise known as the states’ “police powers” to protect their residents.

If my name were “Sacks,” I’d probably be a bit careful about doing things that could get me fired. His influence reportedly played a role in shaping the safe harbor’s timing and language, leveraging connections on Capitol Hill to attach it to a larger pro-business package of legislation. That package, marketed as a pillar of President Trump’s economic plan, was seen as a convenient vehicle to slip controversial provisions through with minimal scrutiny. You know, let’s sneak one past the boss.

Why This Is Dangerous for Consumers and Creators

The most immediate danger of the AI safe harbor is its preemption of state protections at a time when AI technologies are accelerating unchecked. States like California, Illinois, and Virginia have enacted, or are considering, laws to limit how companies use AI to analyze facial features, scan emails, extract audio, or mine creative works from social media. The AI mantra is that they can snarf down “publicly available data,” which essentially means everything that’s not behind a paywall. Because there is no federal AI regulation yet, state laws are crucial for protecting vulnerable populations, including children whose photos and personal information are shared by parents online. Under the proposed AI safe harbor, such protections would be nullified for 10 years, and don’t think it won’t be renewed.

Without the ability to regulate AI at the state level, we could see our biometric data harvested without consent. Social media posts—including photos of babies, families, and school events—could be scraped and used to train commercial AI systems without transparency or recourse. Creators across all copyright categories could find their works ingested into large language models and generative tools without license or attribution. Emails and other personal communications could be fed into AI systems for profiling, advertising, or predictive decision-making without oversight.

While federal regulation of AI is certainly coming, this AI safe harbor includes no immediate substitute. Instead, it freezes state-level regulatory development entirely for a decade, an eternity in the technology world, during which time the richest companies in the history of commerce can entrench themselves further with little fear of accountability. And it will likely provide a blueprint for federal legislation when it comes.

A Strategic Misstep for Trump’s Economic Agenda: Populism or Make America Screwed Again?

Ironically, attaching the moratorium to a legislative package meant to symbolize national renewal may ultimately undermine the very populist and sovereignty-based themes that President Trump has championed. By insulating Silicon Valley firms from state scrutiny, the legislation effectively prioritizes the interests of data-rich corporations over the privacy and rights of ordinary Americans. It hands a victory to unelected tech executives and undercuts the authority of governors, state legislators, and attorneys general who have stepped in where federal law has lagged behind. So much for that “laboratories of democracy” jazz.

Moreover, the manner in which the safe harbor was advanced legislatively (slipped into what is supposed to be a reconciliation bill without extensive hearings or stakeholder input) is classic pork and classic Beltway maneuvering in smoke-filled rooms. Critics from across the political spectrum have noted that such tactics cheapen the integrity of any legislation they touch and reflect the worst of Washington horse-trading.

What Can Be Done to Stop It

The AI safe harbor is not a done deal. There are several procedural and political tools available to block or remove it from the broader legislative package.

1. Committee Intervention – Lawmakers on the House Energy and Commerce Committee or the Rules Committee can offer amendments to strip or revise the moratorium before it proceeds to the full House.
2. House Floor Action – Opponents of the moratorium can offer floor amendments during debate to strike the provision. This requires coordination and support from members across both parties.
3. Senate “Byrd Rule” Challenge and Holds – Because reconciliation bills must be budget-related, the Senate Parliamentarian can strike the safe harbor if it’s deemed “non-germane,” which it certainly seems to be. Senators can formally raise this challenge.
4. Conference Committee Negotiation – If different versions of the legislation pass the House and Senate, the final language will be hashed out in conference. There is still time to remove the moratorium here.
5. Public Advocacy – Artists, parents, consumer advocates, and especially state officials can apply pressure through media, petitions, and direct outreach to lawmakers, highlighting the harms and democratic risks of federal preemption. States may be able to sue to block the safe harbor as unconstitutional (see Chris’s discussion of constitutionality) but let’s not wait to get to that point. It must be said that any such litigation poses a threat to Trump’s “Big Beautiful Bill” courtesy of David Sacks.

Conclusion

The AI safe harbor may have been introduced quietly, but there’s a growing backlash from all corners. Its consequences would be anything but subtle. If enacted, it would freeze innovation in AI accountability, strip states of their ability to protect residents, and expose Americans to widespread digital exploitation. While marketed as pro-innovation, the safe harbor looks more like a gift to data-hungry monopolies at the expense of federalist principles and individual rights.

It’s not too late to act, but doing so requires vigilance, transparency, and an insistence that even the most powerful Big Tech oligarchs remain subject to democratic oversight.

@ArtistRights Newsletter 4/14/25

The Artist Rights Watch podcast returns for another season! This week’s episode, “AI Legislation: A View from Europe,” features Helienne Lindvall, President of the European Composer and Songwriter Alliance (ECSA), and ARI Director Chris Castle in conversation about current issues for creators under the EU AI Act and the UK text and data mining legislation. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

AI Litigation: Kadrey v. Meta

Law Professors Reject Meta’s Fair Use Defense in Friend of the Court Brief

Ticketing
Viagogo failing to prevent potentially unlawful practices, listings on resale site suggest that scalpers are speculatively selling tickets they do not yet have (Rob Davies/The Guardian)

ALEC Astroturf Ticketing Bill Surfaces in North Carolina Legislation

ALEC Ticketing Bill Surfaces in Texas to Rip Off Texas Artists (Chris Castle/MusicTechPolicy)

International AI Legislation

Brazil’s AI Act: A New Era of AI Regulation (Daniela Atanasovska and Lejla Robeli/GDPR Local)

Why robots.txt won’t get it done for AI Opt Outs (Chris Castle/MusicTechPolicy)

Feature Translation: How has the West’s misjudgment of China’s AI ecosystem distorted the global technology competition landscape? (Jeffrey Ding/ChinAI)

Unethical AI Training Harms Creators and Society, Argues AI Pioneer (Ed Nawotka/Publishers Weekly) 

AI Ethics

Céline Dion Calls Out AI-Generated Music Claiming to Feature the Iconic Singer Without Her Permission (Marina Watts/People)

Splice CEO Discusses Ethical Boundaries of AI in Music​ (Nilay Patel/The Verge)

Spotify’s Bold AI Gamble Could Disrupt The Entire Music Industry (Bernard Marr/Forbes)

Books

Apple in China: The Capture of the World’s Greatest Company by Patrick McGee (Coming May 13)

PRESS RELEASE: @Human_Artistry Campaign Endorses NO FAKES Act to Protect Personhood from AI

For Immediate Release

HUMAN ARTISTRY CAMPAIGN ENDORSES NO FAKES ACT

Bipartisan Bill Reintroduced by Senators Blackburn, Coons, Tillis, & Klobuchar and Representatives Salazar, Dean, Moran, Balint and Colleagues

Create New Federal Right for Use of Voice and Visual Likeness
in Digital Replicas

Empowers Artists, Voice Actors, and Individual Victims to Fight Back Against
AI Deepfakes and Voice Clones

WASHINGTON, DC (April 9, 2025) – Amid global debate over guardrails needed for AI, the Human Artistry Campaign today announced its support for the reintroduced “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2025” (“NO FAKES Act”) – landmark legislation giving every person an enforceable new federal intellectual property right in their image and voice. 

Building off the original NO FAKES legislation introduced last Congress, the updated bill was reintroduced today by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC), Amy Klobuchar (D-MN) alongside Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1), and Becca Balint (D-VT-At Large) and bipartisan colleagues.

The legislation sets a strong federal baseline protecting all Americans from invasive AI-generated deepfakes flooding digital platforms today. From young students bullied by non-consensual sexually explicit deepfakes to families scammed by voice clones to recording artists and performers replicated to sing or perform in ways they never did, the NO FAKES Act provides powerful remedies requiring platforms to quickly take down unconsented deepfakes and voice clones and allowing rightsholders to seek damages from creators and distributors of AI models designed specifically to create harmful digital replicas.

The legislation’s thoughtful, measured approach preserves existing state causes of action and rights of publicity, including Tennessee’s groundbreaking ELVIS Act. It also contains carefully calibrated exceptions to protect free speech, open discourse and creative storytelling – without trampling the underlying need for real, enforceable protection against the vast range of invasive and harmful deepfakes and voice clones.

Human Artistry Campaign Senior Advisor Dr. Moiya McTier released the following statement in support of the legislation:

“The Human Artistry Campaign stands for preserving essential qualities of all individuals – beginning with a right to their own voice and image. The NO FAKES Act is an important step towards necessary protections that also support free speech and AI development. The Human Artistry Campaign commends Senators Blackburn, Coons, Tillis, and Klobuchar and Representatives Salazar, Dean, Moran, Balint, and their colleagues for shepherding bipartisan support for this landmark legislation, a necessity for every American to have a right to their own identity as highly realistic voice clones and deepfakes become more pervasive.”

Dr. Moiya McTier, Human Artistry Campaign Senior Advisor

By establishing clear rules for the new federal voice and image right, the NO FAKES Act will power innovation and responsible, pro-human uses of powerful AI technologies while providing strong protections for artists, minors and others. This important bill has cross-sector support from Human Artistry Campaign members and companies such as OpenAI, Google, Amazon, Adobe and IBM. The NO FAKES Act is a strong step forward for American leadership that erects clear guardrails for AI and real accountability for those who reject the path of responsibility and consent.

Learn more & let your representatives know Congress should pass NO FAKES Act here.

​# # #

ABOUT THE HUMAN ARTISTRY CAMPAIGN: The Human Artistry Campaign is the global initiative for the advancement of responsible AI – working to ensure it develops in ways that strengthen the creative ecosystem, while also respecting and furthering the indispensable value of human artistry to culture. Across 34 countries, more than 180 organizations have united to protect every form of human expression and creative endeavor they represent – journalists, recording artists, photographers, actors, songwriters, composers, publishers, independent record labels, athletes and more. The growing coalition champions seven core principles for keeping human creativity at the center of technological innovation. For further information, please visit humanartistrycampaign.com

@Artist Rights Institute Newsletter 3/24/25

The Artist Rights Institute’s news digest newsletter

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

Songwriters and Union Organizing

RICO and Criminal Copyright Infringement

AI Piracy

@alexreisner: Search LibGen, the Pirated-Books Database That Meta Used to Train AI (Alex Reisner/The Atlantic)

OpenAI and Google’s Dark New Campaign to Dismantle Artists’ Protections (Brian Merchant/Blood in the Machine)

Alden newspapers slam OpenAI, Google’s AI proposals (Sara Fischer/Axios)

AI Litigation

French Publishers and Authors Sue Meta over Copyright Works Used in AI Training (Kelvin Chan/AP)

DC Circuit Affirms Human Authorship Required for Copyright (David Newhoff/The Illusion of More)

OpenAI Asks White House for Relief From State AI Rules (Jackie Davalos/Bloomberg)

Microsoft faces FTC antitrust probe over AI and licensing practices (Prasanth Aby Thomas/Computer World)

Google and its Confederate AI Platforms Want Retroactive Absolution for AI Training Wrapped in the American Flag (Chris Castle/MusicTechPolicy)

AI and Human Rights

Human Rights and AI Opt Out (Chris Castle/MusicTechPolicy)

Say Goodbye to Net Zero: The AI Data Center Corporate Welfare Scam

We’ve reported for years about how data centers are a good explanation for why Senators like Ron Wyden seem to always inject themselves into copyright legislation for the sole purpose of slowing it down, killing it, watering it down, or turning it on its head. Why would a senator from Oregon, a state that gave us Courtney Love, Esperanza Spalding, The Decemberists, Sleater-Kinney and the Dandy Warhols, be such an incredibly basic, no-vibe cosplayer?

Easy answer: he does the bidding of the Big Tech data center operators sucking down that good taxpayer-subsidized Oregon hydroelectric power, literally and figuratively. Big Tech loves them some weak copyright and expanded loopholes that let them get away with some hardcore damage to artists. Almost as much as they love flexing political muscle.

Senator Wyden with his hand in his own pocket.

This is coming up again in the various public comments on artificial intelligence, which is the data hog of data hogs. For example, the Artist Rights Institute made this point using Oregon as an example in the recent UK Intellectual Property Office call for public comments, which produced a huge pushback against the plans of UK Prime Minister Sir Keir Starmer to turn Britain into a Google lake for AI, especially the build-out of AI data centers.

Google Data Center at The Dalles, Oregon

The thrust of the Oregon discussion in the ARI comment is that Oregon’s experience with data centers should be food for thought elsewhere (like the UK): electricity prices for local ratepayers increase while data centers enjoy negotiated, taxpayer-subsidized discounts. Yes, that old corporate welfare strikes again.

Oregon Taxpayers’ Experience with Crowding Out by Data Centres is a Cautionary Tale for UK

We call the IPO’s attention to the real-world example of the U.S. State of Oregon, a state that is roughly the geographical size of the UK.  Google built the first Oregon data centre in The Dalles, Oregon in 2006.  Oregon now has 125 of the very data centres that Big Tech will necessarily need to build in the UK to implement AI.  In other words, Oregon was sold much the same story that Big Tech is selling you today.

The rapid growth of Oregon data centres driven by the same tech giants like Amazon, Apple, Google, Oracle, and Meta, has significantly increased Oregon’s demand for electricity. This surge in demand has led to higher power costs, which are often passed on to local rate payers while data centre owners receive tax benefits.  This increase in price foreshadows the market effect of crowding out local rate payers in the rush for electricity to run AI—demand will only increase and increase substantially as we enter what the International Energy Agency has called “the age of electricity”.[1]

Portland General Electric, a local power operator, has faced increasing criticism for raising rates to accommodate the encroaching electrical power needs of these data centers. Local residents argue that they unfairly bear the increased electrical costs while data centers benefit from tax incentives and other advantages granted by government.[2] 

This is particularly galling in that the hydroelectric power in Oregon is largely produced by massive taxpayer-funded hydroelectric and other power projects built long ago.[3]  The relatively recent 125 Oregon data centres received significant tax incentives during their construction to be offset by a promise of future jobs.  While there were new temporary jobs created during the construction phase of the data centres, there are relatively few permanent jobs required to operate them long term as one would expect from digitized assets owned by AI platforms.

Of course, the UK has approximately 16 times the population of Oregon.  Given this disparity, it seems plausible that whatever problems Oregon has with the concentration of data centres, the UK will have those same problems many times over due to its concentration of population.


[1] International Energy Agency, Electricity 2025 (Revised Edition Feb. 2025) available at https://iea.blob.core.windows.net/assets/0f028d5f-26b1-47ca-ad2a-5ca3103d070a/Electricity2025.pdf.

 [2] Jamie Parfitt, As PGE closes in on another rate increase, are the costs of growing demand for power being borne fairly? KGW8 News (Dec. 13, 2024) available at https://www.kgw.com/article/news/local/the-story/pge-rate-increase-data-centers-power-cost-demand-growth/283-399b079b-cbf5-41cf-8190-4c5f204d2d90 (“Like utilities nationwide, PGE is experiencing a surge in requests for new, substantial amounts of electricity load, including from advanced manufacturing, data centers and AI-related companies.”) 

[3] See, e.g., State of Oregon, Facilities Under Energy Facility Siting Council available at https://www.oregon.gov/energy/facilities-safety/facilities/Pages/Facilities-Under-EFSC.aspx

Will AI Produce the Oregon Effect Internationally?

So let’s look at a quick-and-dirty comparison of the prices that local residents and businesses pay for electricity compared to what data centers in the same states pay. We’re posting this chart because y’all love numbers, but mostly to start discussion and research into just how much of an impact all these data centers might have on supply-and-demand price setting in a few representative states and countries. But remember this: our experience with Senator Wyden should tell you that all these data centers will give Big Tech even more political clout than they already have.

The chart shows the percentage difference between the residential rate and the data center rate for energy in each state measured. The percentage difference is calculated as: ((Residential Rate – Data Center Rate) ÷ Residential Rate) × 100. When we say “~X% lower” we mean that the data center price per kilowatt hour (¢/kWh) is approximately X% lower than the residential rate, all based on data from Choose Energy or Electricity Plans. We don’t pretend to be energy analysts, so if we got this wrong, someone will let us know.
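To make the chart’s arithmetic concrete, here is a minimal sketch of that calculation in Python. The function mirrors the formula above; the sample rates and state names are hypothetical placeholders for illustration, not the actual figures from the chart.

```python
# Percentage difference between residential and data center electricity rates,
# computed as ((residential - data_center) / residential) * 100, per the formula
# in the text. The rates below are made-up placeholders, not real chart data.

def rate_discount_pct(residential_cents_kwh: float, data_center_cents_kwh: float) -> float:
    """Return how much lower (in %) the data center rate is than the residential rate."""
    return (residential_cents_kwh - data_center_cents_kwh) / residential_cents_kwh * 100

# Illustrative only: (residential ¢/kWh, data center ¢/kWh)
sample_rates = {
    "State A": (13.0, 6.5),
    "State B": (14.0, 8.4),
}

for state, (res, dc) in sample_rates.items():
    print(f"{state}: data center rate ~{rate_discount_pct(res, dc):.0f}% lower than residential")
```

A positive result means the data center pays less per kilowatt hour than households do; a 50% figure, for example, means the data center rate is half the residential rate.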

On a country by country comparison, here’s some more food for thought:

Data is from (1) here and (2) here

As you can see, most of the G7 countries have significantly higher electricity prices (and therefore potentially higher data center costs) than the US and Canada. This suggests that Big Tech data centers will produce the Oregon Effect in those countries that already had higher residential energy costs in a pre-AI world. That in turn suggests that Big Tech is going to come around with its tin cup for corporate welfare to keep its data center electric bills low, or maybe they’ll just buy the electric plants. For themselves.

Either way, it’s unlikely that this data center thumb on the scale and the corporate welfare that goes with it will cause energy prices to decline. And you can just forget that whole Net Zero thing.

If you don’t like where this is going, call your elected representative!

Conversation with @KCEsq and @MusicTechPolicy on a Songwriter Union, Better Royalties and Health Care for Songwriters

Forming a songwriter union is a hot topic these days; thank you, Chappell Roan! The Artist Rights Institute put a casual poll in the field to get a sense of what people are thinking about this issue. If you haven’t taken that poll yet, please join us on Survey Monkey here (all results are anonymized); we would love to get your feedback. We will post the results on Trichordist.

Reaction to the poll led to an Artist Rights Institute podcast with Chris Castle and Kevin Casini, who are both fans of the Trichordist audience, so naturally they wanted to launch the podcast here. There are a number of resources mentioned in the podcast that we have linked to below. Please leave comments if you have questions!

Check out the video with Kevin and Chris, and while you’re on the Artist Rights Institute’s cool new YouTube channel, subscribe and bookmark the Artist Rights Symposium videos!

Important resources:

Union Organizing and Union Health Care Insurance Plans

National Labor Relations Board

AFL-CIO Organizing Institute

American Federation of Musicians

SAG-AFTRA

Health Care:

Health Alliance for Austin Musicians http://www.myhaam.org Musician Services (512) 541-4226 (opt 2).

Music Health Alliance https://www.musichealthalliance.com Request assistance

American Association of Independent Music Benefits Store

Mental Health

SIMS Foundation (Austin) 512-494-1007

Industry-wide Agreements

See discussion of Canada’s Mechanical License Agreement https://musictechpolicy.com/2012/01/1…

Controlled Compositions

Copyright Office Circular on Work For Hire Explainer

Controlled Compositions Part 1 https://musictechpolicy.com/2010/03/2… and Controlled Compositions and Frozen Mechanicals https://musictechpolicy.com/2020/10/1…

We will be coming back to this topic soon. Feel free to leave comments if you have questions or want us to focus on any particular point.

Copyright 2025 Artist Rights Institute. All Rights Reserved. This video or any transcript may not be used for text or data mining or for the purpose of training artificial intelligence models or systems.

@ArtistRights Institute’s UK Government Comment on AI and Copyright: Why Can’t Creators Call 911?

We will be posting excerpts from the Artist Rights Institute’s comment in the UK’s Intellectual Property Office proceeding on AI and copyright. That proceeding is called a “consultation” where the Office solicits comments from the public (wherever located) about a proposed policy.

In this case it was the UK government’s proposal to require creators to “opt out” of AI data scraping by expanding the law in the UK governing “text and data mining” which is what Silicon Valley wants in a big way. This idea produced an enormous backlash from the creative community that we’ll also be covering in coming weeks as it’s very important that Trichordist readers be up to speed on the latest skulduggery by Big Tech in snarfing down all the world’s culture to train their AI (which has already happened and now has to be undone). For a backgrounder on the “text and data mining” controversy, watch this video by George York of the Digital Creators Coalition speaking at the Artist Rights Institute in DC.

In this section of the comment we offer a simple rule of thumb or policy guideline by which to measure the Government’s rules (which could equally apply in America): Can an artist file a criminal complaint against someone like Sam Altman?

If an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.

Why Can’t Creators Call 999 [or 911]?

We suggest a very simple policy guideline—if an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.  Alternatively, if an artist can call the police and file a criminal complaint against a Sam Altman or a Sergey Brin for criminal copyright infringement, now we are getting somewhere.

This requires that there be a clear “red light/green light” instruction that can easily be understood and applied by a beat copper.  This may seem harsh, but in our experience with the trillion-dollar market cap club, the only thing that gets their attention is a legal action that affects behavior rather than damages.  Our experience suggests that what gets their attention most quickly is either an injunction to stop the madness or prison to punish the wrongdoing. 

As a threshold matter, it is clear that AI platforms intend to continue scraping all the world’s culture for their purposes without obtaining consent or notifying rightsholders.  It is likely that the bigger platforms already have.  For example, we have found our own writings included in CoPilot outputs.  Not only did we not consent to that use, but we were also never asked.  Moreover, CoPilot’s use of these works clearly violates our terms of service.  This level of content scraping is hardly what was contemplated with the “data mining” exceptions.