@RickBeato on AI Artists

Is it a thing or is it disco? Our fave Rick Beato has a cautionary tale in this must-watch video: AI can mimic but not truly create art. As generative tools become more prevalent, he urges thoughtful curation, artist-centered policies, and an emphasis on emotionally rich, human-driven creativity–also known as creativity. h/t Your Morning Coffee, our favorite podcast.

Senator Cruz Joins the States on AI Safe Harbor Collapse — And the Moratorium Quietly Slinks Away

Silicon Valley Loses Bigly

In a symbolic vote that spoke volumes, the U.S. Senate decisively voted 99–1 to strike the toxic AI safe harbor moratorium from the vote-a-rama for the One Big Beautiful Bill Act (HR 1), according to the AP. Senator Ted Cruz, who had previously been an active supporter of the measure, joined the bipartisan chorus in stripping it — an acknowledgment that the proposal had become politically radioactive.

To recap, the AI moratorium would have barred states from regulating artificial intelligence for up to 10 years, tying access to broadband and infrastructure funds to compliance. It triggered an immediate backlash: Republican governors, state attorneys general, parents’ groups, civil liberties organizations, and even independent artists condemned it as a blatant handout to Big Tech with yet another rent-seeking safe harbor.

Marsha Blackburn and Maria Cantwell to the Rescue

Credit where it’s due: Senator Marsha Blackburn (R–TN) was the linchpin in the Senate, working across the aisle with Sen. Maria Cantwell to introduce the amendment that finally killed the provision. Blackburn’s credibility with conservative and tech-wary voters gave other Republicans room to move — and once the tide turned, it became a rout. Her leadership was key to sending the signal to her Republican colleagues–including Senator Cruz–that this wasn’t a hill to die on.

Top Cover from President Trump?

But stripping the moratorium wasn’t just a Senate rebellion. This kind of reversal in must-pass, triple-whipped legislation doesn’t happen without top cover from the White House and, in all likelihood, Donald Trump himself. The provision was never a “last stand” issue in the art of the deal. Trump can plausibly say he gave industry players like Masayoshi Son, Meta, and Google a shot, but the resistance from the states made it politically untenable. It was frankly a poorly handled provision from the start, and there’s little evidence Trump was ever personally invested in it. He never made a public statement about it, which is why I always felt it was such an improbable deal point that it was intended as a bargaining chip all along, whether the staff knew it or not.

One thing is for damn sure–it ain’t coming back in the House, which is another way you know you can stick a fork in it, despite the churlish shillery types who are sulking off the pitch.

One final note on the process: it’s unfortunate that the Senate Parliamentarian made such a questionable call when she let the AI moratorium survive the Byrd Bath, despite it being so obviously not germane to reconciliation. The provision never should have made it this far in the first place — but oh well. Fortunately, the Senate stepped in and did what the process should have done from the outset.

Now what?

It ain’t over til it’s over. The battle with Silicon Valley may be over on this issue today, but that’s not to say the war is over. The AI moratorium may reappear, reshaped and rebranded, in future bills. But its defeat in the Senate is important. It proves that state-level resistance can still shape federal tech policy, even when it’s buried in omnibus legislation and wrapped in national security rhetoric.

Cruz’s shift wasn’t a betrayal of party leadership — it was a recognition that even in Washington, federalism still matters. And this time, the states — and our champion Marsha — held the line. 

Brava, madam. Well played.

This post first appeared on MusicTechPolicy

@human_artistry Campaign Letter Opposing AI Safe Harbor Moratorium in Big Beautiful Bill HR 1

Artist Rights Institute is pleased to support the Human Artistry Campaign’s letter to Senators Thune and Schumer opposing the AI safe harbor in the One Big Beautiful Bill Act. ARI joins with:

The opposition is rooted in entirely justifiable concerns:

By wiping dozens of state laws off the books, the bill would undermine public safety, creators’ rights, and the ability of local communities to protect themselves from a fast-moving technology being rushed to market by tech giants. State laws protecting people from invasive AI deepfakes would be at risk, along with a range of proposals designed to eliminate discrimination and bias in AI. For artists and creators, preempting state laws that require Big Tech to disclose the material used to train their models, often to create new products that compete with the human creators’ originals, would make it difficult or impossible to prove this theft has occurred. As the Copyright Office’s Fair Use Report recently reaffirmed, many forms of this conduct are illegal under longstanding federal law.

The moratorium is so vague that it is unclear whether it would actually prohibit states from addressing the construction of data centers or the vast drain on the power grid from AI deployment in their states. This is a safe harbor on steroids and terrible for all creators.

@ArtistRights Institute Newsletter 5/5/25

The Artist Rights Watch podcast returns for another season! This week’s episode features Chris Castle on An Artist’s Guide to Record Releases Part 2. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

Texas Scalpers Bill of Rights Legislation

Can this Texas House bill help curb high ticket prices? Depends whom you ask (Marcheta Fornoff/KERA News)

Texas lawmakers target ticket fees and resale restrictions in new legislative push (Abigail Velez/CBS Austin)

@ArtistRights Institute opposes Texas Ticketing Legislation the “Scalpers’ Bill of Rights” (Chris Castle/Artist Rights Watch)

Streaming

Spotify’s Earnings Points To A “Catch Up” On Songwriter Royalties At CRB For Royalty Justice (Chris Castle/MusicTechPolicy)

Streaming Is Now Just As Crowded With Ads As Old School TV (Rick Porter/Hollywood Reporter)

Spotify Stock Falls On Music Streamer’s Mixed Q1 Report (Patrick Seitz/Investor’s Business Daily)

Economy

The Slowdown at Ports Is a Warning of Rough Economic Seas Ahead (Aarian Marshall/Wired)

What To Expect From Wednesday’s Federal Reserve Meeting (Diccon Hyatt/Investopedia)

Spotify Q1 2025 Earnings Call: Daniel Ek Talks Growth, Pricing, Superfan Products, And A Future Where The Platform Could Reach 1bn Subscribers (Murray Stassen/Music Business Worldwide)

Artist Rights and AI

SAG-AFTRA National Board Approves Commercials Contracts That Prevent AI, Digital Replicas Without Consent (JD Knapp/The Wrap)

Generative AI providers see first steps for EU code of practice on content labels (Luca Bertuzzi/Mlex)

A Judge Says Meta’s AI Copyright Case Is About ‘the Next Taylor Swift’ (Kate Knibbs/Wired)

Antitrust

Google faces September trial on ad tech antitrust remedies (David Shepardson and Jody Godoy/Reuters)

TikTok

Ireland fines TikTok 530 million euros for sending EU user data to China (Ryan Browne/CNBC)

@ArtistRights Newsletter 4/14/25

The Artist Rights Watch podcast returns for another season! This week’s episode features AI Legislation, A View from Europe: Helienne Lindvall, President of the European Composer and Songwriter Alliance (ECSA), and ARI Director Chris Castle in conversation about current issues for creators regarding the EU AI Act and the UK text and data mining legislation. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

AI Litigation: Kadrey v. Meta

Law Professors Reject Meta’s Fair Use Defense in Friend of the Court Brief

Ticketing

Viagogo failing to prevent potentially unlawful practices, listings on resale site suggest that scalpers are speculatively selling tickets they do not yet have (Rob Davies/The Guardian)

ALEC Astroturf Ticketing Bill Surfaces in North Carolina Legislation

ALEC Ticketing Bill Surfaces in Texas to Rip Off Texas Artists (Chris Castle/MusicTechPolicy)

International AI Legislation

Brazil’s AI Act: A New Era of AI Regulation (Daniela Atanasovska and Lejla Robeli/GDPR Local)

Why robots.txt won’t get it done for AI Opt Outs (Chris Castle/MusicTechPolicy)

Feature Translation: How has the West’s misjudgment of China’s AI ecosystem distorted the global technology competition landscape? (Jeffrey Ding/ChinAI)

Unethical AI Training Harms Creators and Society, Argues AI Pioneer (Ed Nawotka/Publishers Weekly) 

AI Ethics

Céline Dion Calls Out AI-Generated Music Claiming to Feature the Iconic Singer Without Her Permission (Marina Watts/People)

Splice CEO Discusses Ethical Boundaries of AI in Music​ (Nilay Patel/The Verge)

Spotify’s Bold AI Gamble Could Disrupt The Entire Music Industry (Bernard Marr/Forbes)

Books

Apple in China: The Capture of the World’s Greatest Company by Patrick McGee (Coming May 13)

PRESS RELEASE: @Human_Artistry Campaign Endorses NO FAKES Act to Protect Personhood from AI

For Immediate Release

HUMAN ARTISTRY CAMPAIGN ENDORSES NO FAKES ACT

Bipartisan Bill Reintroduced by Senators Blackburn, Coons, Tillis, & Klobuchar and Representatives Salazar, Dean, Moran, Balint and Colleagues

Creates New Federal Right for Use of Voice and Visual Likeness
in Digital Replicas

Empowers Artists, Voice Actors, and Individual Victims to Fight Back Against
AI Deepfakes and Voice Clones

WASHINGTON, DC (April 9, 2025) – Amid global debate over guardrails needed for AI, the Human Artistry Campaign today announced its support for the reintroduced “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2025” (“NO FAKES Act”) – landmark legislation giving every person an enforceable new federal intellectual property right in their image and voice. 

Building off the original NO FAKES legislation introduced last Congress, the updated bill was reintroduced today by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC), and Amy Klobuchar (D-MN), alongside Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1), and Becca Balint (D-VT-At Large) and bipartisan colleagues.

The legislation sets a strong federal baseline protecting all Americans from invasive AI-generated deepfakes flooding digital platforms today. From young students bullied by non-consensual sexually explicit deepfakes to families scammed by voice clones to recording artists and performers replicated to sing or perform in ways they never did, the NO FAKES Act provides powerful remedies requiring platforms to quickly take down unconsented deepfakes and voice clones and allowing rights​​holders to seek damages from creators and distributors of AI models designed specifically to create harmful digital replicas.

The legislation’s thoughtful, measured approach preserves existing state causes of action and rights of publicity, including Tennessee’s groundbreaking ELVIS Act. It also contains carefully calibrated exceptions to protect free speech, open discourse and creative storytelling – without trampling the underlying need for real, enforceable protection against the vast range of invasive and harmful deepfakes and voice clones.

Human Artistry Campaign Senior Advisor Dr. Moiya McTier released the following statement in support of the legislation:

“The Human Artistry Campaign stands for preserving essential qualities of all individuals – beginning with a right to their own voice and image. The NO FAKES Act is an important step towards necessary protections that also support free speech and AI development. The Human Artistry Campaign commends Senators Blackburn, Coons, Tillis, and Klobuchar and Representatives Salazar, Dean, Moran, Balint, and their colleagues for shepherding bipartisan support for this landmark legislation, a necessity for every American to have a right to their own identity as highly realistic voice clones and deepfakes become more pervasive.”

Dr. Moiya McTier, Human Artistry Campaign Senior Advisor

By establishing clear rules for the new federal voice and image right, the NO FAKES Act will power innovation and responsible, pro-human uses of powerful AI technologies while providing strong protections for artists, minors and others. This important bill has cross-sector support from Human Artistry Campaign members and companies such as OpenAI, Google, Amazon, Adobe and IBM. The NO FAKES Act is a strong step forward for American leadership that erects clear guardrails for AI and real accountability for those who reject the path of responsibility and consent.

Learn more & let your representatives know Congress should pass the NO FAKES Act here.

​# # #

ABOUT THE HUMAN ARTISTRY CAMPAIGN: The Human Artistry Campaign is the global initiative for the advancement of responsible AI – working to ensure it develops in ways that strengthen the creative ecosystem, while also respecting and furthering the indispensable value of human artistry to culture. Across 34 countries, more than 180 organizations have united to protect every form of human expression and creative endeavor they represent – journalists, recording artists, photographers, actors, songwriters, composers, publishers, independent record labels, athletes and more. The growing coalition champions seven core principles for keeping human creativity at the center of technological innovation. For further information, please visit humanartistrycampaign.com

@human_artistry Calls Out AI Voice Cloning

Here’s just one reason why we can’t trust Big Tech on opt-outs (or really any other safeguard that stops them from doing what they want to do).

@ArtistRights Institute Newsletter 3/17/25


The Artist Rights Institute’s News Digest Newsletter

Take our new confidential survey for publishers and songwriters!

UK AI Opt-Out Legislation

UK Music Chief Calls on Ministers to Drop Opposition Against Measures to Stop AI Firms Stealing Music

Human Rights and AI Opt Out (Chris Castle/MusicTechPolicy) 

Ticketing

New Oregon bill would ban speculative ticketing, eliminate hidden ticket sale fees, crack down on deceptive resellers (Diane Lugo/Salem Statesman Journal-USA Today)

AI Litigation/Legislation

French Publishers and Authors Sue Meta over Copyright Works Used in AI Training (Kelvin Chan/AP)

AI Layoffs

‘AI Will Be Writing 90% of Code in 3-6 Months,’ Says Anthropic’s Dario Amodei (Ankush Das/Analytics India)

Amazon to Target Managers in 2025’s Bold Layoffs Purge (Anna Verasai/The HR Digest)

AI Litigation: Kadrey v. Meta

Authors Defeat Meta’s Motion to Dismiss AI Case on Meta Removing Watermarks to Promote Infringement

Judge Allows Authors AI Copyright Infringement Lawsuit Against Meta to Move Forward (Anthony Ha/Techcrunch)

America’s AI Action Plan Request for Information

Google and Its Confederate AI Platforms Want Retroactive Absolution for AI Training Wrapped in the American Flag (Chris Castle/MusicTechPolicy)

Google Calls for Weakened Copyright and Export Rules in AI Policy Proposal (Kyle Wiggers/TechCrunch) 

Artist Rights Institute Submission

@ArtistRights Institute’s UK Government Comment on AI and Copyright: Why Can’t Creators Call 911?

We will be posting excerpts from the Artist Rights Institute’s comment in the UK Intellectual Property Office’s proceeding on AI and copyright. That proceeding is called a “consultation,” in which the Office solicits comments from the public (wherever located) on a proposed policy.

In this case, it was the UK government’s proposal to require creators to “opt out” of AI data scraping by expanding the UK law governing “text and data mining,” which is what Silicon Valley wants in a big way. This idea produced an enormous backlash from the creative community, which we’ll also be covering in coming weeks, as it’s very important that Trichordist readers be up to speed on the latest skulduggery by Big Tech in snarfing down all the world’s culture to train their AI (which has already happened and now has to be undone). For a backgrounder on the “text and data mining” controversy, watch this video by George York of the Digital Creators Coalition speaking at the Artist Rights Institute in DC.

In this section of the comment we offer a simple rule of thumb or policy guideline by which to measure the Government’s rules (which could equally apply in America): Can an artist file a criminal complaint against someone like Sam Altman?

If an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.

Why Can’t Creators Call 999 [or 911]?

We suggest a very simple policy guideline—if an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail. Alternatively, if an artist can call the police and file a criminal complaint against a Sam Altman or a Sergey Brin for criminal copyright infringement, now we are getting somewhere.

This requires that there be a clear “red light/green light” instruction that can easily be understood and applied by a beat copper.  This may seem harsh, but in our experience with the trillion-dollar market cap club, the only thing that gets their attention is a legal action that affects behavior rather than damages.  Our experience suggests that what gets their attention most quickly is either an injunction to stop the madness or prison to punish the wrongdoing. 

As a threshold matter, it is clear that AI platforms intend to continue scraping all the world’s culture for their own purposes without obtaining consent or notifying rightsholders. It is likely that the bigger platforms already have. For example, we have found our own writings included in Copilot outputs. Not only did we not consent to that use, we were never even asked. Moreover, Copilot’s use of these works clearly violates our terms of service. This level of content scraping is hardly what was contemplated by the “data mining” exceptions.

Faux “Data Mining” is the Key that Picks the Lock of Human Expression

The Artist Rights Institute drafted and filed a comment in the UK Intellectual Property Office’s consultation on Copyright and AI. The Trichordist will be posting excerpts from that comment from time to time.

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets. 

We strongly disagree that all the world’s culture can be squeezed through the keyhole of “data” to be “mined” as a matter of legal definitions. In fact, a recent study by leading European scholars has found that data mining exceptions were never intended to excuse copyright infringement:

Generative AI is transforming creative fields by rapidly producing texts, images, music, and videos. These AI creations often seem as impressive as human-made works but require extensive training on vast amounts of data, much of which are copyright protected. This dependency on copyrighted material has sparked legal debates, as AI training involves “copying” and “reproducing” these works, actions that could potentially infringe on copyrights. In defense, AI proponents in the United States invoke “fair use” under Section 107 of the [US] Copyright Act [a losing argument in the one reported case on point[1]], while in Europe, they cite Article 4(1) of the 2019 DSM Directive, which allows certain uses of copyrighted works for “text and data mining.”

This study challenges the prevailing European legal stance, presenting several arguments:

1. The exception for text and data mining should not apply to generative AI training because the technologies differ fundamentally – one processes semantic information only, while the other also extracts syntactic information.

2. There is no suitable copyright exception or limitation to justify the massive infringements occurring during the training of generative AI. This concerns the copying of protected works during data collection, the full or partial replication inside the AI model, and the reproduction of works from the training data initiated by the end-users of AI systems like ChatGPT….[2] 

Moreover, the existing text and data mining exception in European law was never intended to address AI scraping and training:

Axel Voss, a German centre-right member of the European parliament, who played a key role in writing the EU’s 2019 copyright directive, said that law was not conceived to deal with generative AI models: systems that can generate text, images or music with a simple text prompt.[3]

This lust for data, control and money will drive lobbyists and Big Tech’s amen corner to seek copyright exceptions under the banner of “innovation.” Any country that appeases AI platforms in the hope of cashing in on tech at the expense of culture will be appeasing its way toward an inevitable race to the bottom. More countries can predictably be expected to offer ever more accommodating terms in the face of Silicon Valley’s army of lobbyists, who mean to engage in a lightning strike across the world. The fight for the survival of culture is on. The fight for the survival of humanity may literally be the next one up.

We are far beyond any reasonable definition of “text and data mining.” What we can expect is for Big Tech to distract both creators and lawmakers with inapt legal diversions, such as pretending that snarfing down all the world’s creations is mere “text and data mining.” The ensuing delay will allow AI platforms to enlarge their training databases, raise more money, and further the AI narrative as they profit from the delay and capital formation.


[1] Thomson-Reuters Enterprise Centre GMBH v. Ross Intelligence, Inc., (Case No. 1:20-cv-00613 U.S.D.C. Del. Feb. 11, 2025) (Memorandum Opinion, Doc. 770 rejecting fair use asserted by defendant AI platform) available at https://storage.courtlistener.com/recap/gov.uscourts.ded.72109/gov.uscourts.ded.72109.770.0.pdf (“[The AI platform]’s use is not transformative because it does not have a ‘further purpose or different character’ from [the copyright owner]’s [citations omitted]…I consider the “likely effect [of the AI platform’s copying]”….The original market is obvious: legal-research platforms. And at least one potential derivative market is also obvious: data to train legal AIs…..Copyrights encourage people to develop things that help society, like [the copyright owner’s] good legal-research tools. Their builders earn the right to be paid accordingly.” Id. at 19-23).  See also Kevin Madigan, First of Its Kind Decision Finds AI Training Is Not Fair Use, Copyright Alliance (Feb. 12, 2025) available at https://copyrightalliance.org/ai-training-not-fair-use/ (discussion of AI platform’s landmark loss on fair use defense).

[2] Professor Tim W. Dornis and Professor Sebastian Stober, Copyright Law and Generative AI Training – Technological and Legal Foundations, Recht und Digitalisierung/Digitization and the Law (Dec. 20, 2024)(Abstract) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4946214

[3] Jennifer Rankin, EU accused of leaving ‘devastating’ copyright loophole in AI Act, The Guardian (Feb. 19, 2025) available at https://www.theguardian.com/technology/2025/feb/19/eu-accused-of-leaving-devastating-copyright-loophole-in-ai-act