Sir Lucian Grainge Just Drew the Brightest Line Yet on AI

by Chris Castle

Universal Music Group’s CEO Sir Lucian Grainge has put the industry on notice in an internal memo to Universal employees: UMG will not license any AI model that uses an artist’s voice—or generates new songs incorporating an artist’s existing songs—without that artist’s consent. This isn’t just a slogan; it’s a licensing policy, an advocacy position, and deal-making leverage all rolled into one. After the Sora 2 disaster, I have to believe that OpenAI is at the top of the list.

Here’s the memo:

Dear Colleagues,

I am writing today to update you on the progress that we are making on our efforts to take advantage of the developing commercial opportunities presented by Gen AI technology for the benefit of all our artists and songwriters.

I want to address three specific topics:

  1. Responsible Gen AI company and product agreements;
  2. How our artists can participate; and
  3. What we are doing to encourage responsible AI public policies.

UMG is playing a pioneering role in fostering AI’s enormous potential. While our progress is significant, the speed at which this technology is developing makes it important that you are all continually updated on our efforts and well-versed on the strategy and approach.

The foundation of what we’re doing is the belief that together, we can foster a healthy commercial AI ecosystem in which artists, songwriters, music companies and technology companies can all flourish together.

NEW AGREEMENTS

To explore the varied opportunities and determine the best approaches, we have been working with AI developers to put their ideas to the test. In fact, we were the first company to enter into AI-related agreements with companies ranging from major platforms such as YouTube, TikTok and Meta to emerging entrepreneurs such as BandLab, Soundlabs, and more. Both creatively and commercially our portfolio of AI partnerships continues to expand.

Very recently, Universal Music Japan announced an agreement with KDDI, a leading Japanese telecommunications company, to develop new music experiences for fans and artists using Gen AI. And we are very actively engaged with nearly a dozen different companies on significant new products and service plans that hold promise for a dramatic expansion of the AI music landscape. Further, we’re seeing other related advancements. While just scratching the surface of AI’s enormous potential, Spotify’s recent integration with ChatGPT offers a pathway to move fluidly from query and discovery to enjoyment of music—and all within a monetized ecosystem.

HOW OUR ARTISTS CAN PARTICIPATE

Based on what we’ve done with our AI partners to date, and the new discussions that are underway, we can unequivocally say that AI has the potential to deliver creative tools that will enable us to connect our artists with their fans in new ways—and with advanced capability on a scale we’ve never encountered.

Further, I believe that Agentic AI, which dynamically employs complex reasoning and adaptation, has the potential to revolutionize how fans interact with and discover music.

I know that we will successfully navigate as well as seize these opportunities and that these new products could constitute a significant source of new future revenue for artists and songwriters.

We will be actively engaged in discussing all of these developments with the entire creative community.

While some of the biggest opportunities will require further exploration, we are excited by the compelling AI models we’re seeing emerge.

We will only consider advancing AI products based on models that are trained responsibly. That is why we have entered into agreements with AI developers such as ProRata and KLAY, among others, and are in discussions with numerous additional like-minded companies whose products provide accurate attribution and tools which empower and compensate artists—products that both protect music and enhance its monetization.

And to be clear—and this is very important—we will NOT license any model that uses an artist’s voice or generates new songs which incorporate an artist’s existing songs without their consent.

New AI products will be joined by many other similar ones that will soon be coming to market, and we have established teams throughout UMG that will be working with artists and their representatives to bring these opportunities directly to them.

RESPONSIBLE PUBLIC POLICIES COVERING AI

We remain acutely aware of the fact that large and powerful AI companies are pressuring governments around the world to legitimize the training of AI technology on copyrighted material without owner consent or compensation, among other proposals.

To be clear: all these misguided proposals amount to nothing more than the unauthorized (and, we believe, illegal) exploitation of the rights and property of creative artists.

In addition, we are acting in the marketplace to see our partners embrace responsible and ethical AI policies and we’re proud of the progress being made there. For example, having accurately predicted the rapid rise of AI “slop” on streaming platforms, in 2023 we introduced Artist-Centric principles to combat what is essentially platform pollution. Since then, many of our platform partners have made significant progress in putting in place measures to address the diversion of royalties, infringement and fraud—all to the benefit of the entire music ecosystem.

We commend our partners for taking action to address this urgent issue, consistent with our Artist-Centric approach. Further, we recently announced an agreement with SoundPatrol, a new company led by Stanford scientists that employs patented technology to protect artists’ work from unauthorized use in AI music generators.

We are confident that by displaying our willingness as a community to embrace those commercial AI models which value and enhance human artistry, we are demonstrating that market-based solutions promoting innovation are the answer.

LEADING THE WAY FORWARD

So, as we work to assure safeguards for artists, we will help lead the way forward, which is why we are exploring and finding innovative ways to use this revolutionary technology to create new commercial opportunities for artists and songwriters while simultaneously aiding and protecting human creativity.

I’m very excited about the products we’re seeing and what the future holds. I will update you all further on our progress.

Lucian

Mr. Grainge’s position reframes the conversation from “Can we scrape?” to “How do we get consent and compensate?” That shift matters because AI that clones voices or reconstitutes catalog works is not a neutral utility—it’s a market participant competing with human creators and the rights they rely on.

If everything is “transformative” then nothing is protected—and that guts not just copyright, but artists’ name–image–likeness (NIL) rights, the right of publicity and, in some jurisdictions, moral rights. A scrape-first, justify-later posture erases ownership, antagonizes creators living and dead, and makes catalogs unpriceable. Why would Universal—or any other rightsholder—partner with a company that treats works and identity as free training fuel? What’s great about Lucian’s statement is that he’s planting a flag: the industry leader will not do business with bad actors, regardless of the consequences.

What This Means in Practice

  1. Consent as the gate. Voice clones and “new songs” derived from existing songs require affirmative artist approval—full stop.
  2. Provenance as the standard. AI firms that want first-party deals must prove lawful ingestion, audited datasets, and enforceable guardrails against impersonation.
  3. Aligned incentives. Where consent exists, there’s room for discovery tools, creator utilities, and new revenue streams; where it doesn’t, there’s no deal.

Watermarks and “AI-generated” labels don’t cure false endorsement, right-of-publicity violations, or market substitution. Platforms that design, market, or profit from celebrity emulation without consent aren’t innovating—they’re externalizing legal and ethical risk onto artists.

Moral Rights: Why This Resonates Globally

Universal’s consent-first stance will resonate in moral-rights jurisdictions where authors and performers hold inalienable rights of attribution and integrity (e.g., France’s droit moral, Germany’s Urheberpersönlichkeitsrecht). AI voice clones and “sound-alike” outputs can misattribute authorship, distort a creator’s artistic identity, or subject their work to derogatory treatment—classic moral-rights harms. Because many countries recognize post-mortem moral rights and performers’ neighboring rights, the “no consent, no license” rule is not just good governance—it’s internationally compatible rights stewardship.

Industry Leadership vs. the “Opt-Out” Mirage

It is absolutely critical that the industry leader actively opposes the absurd “opt-out” gambit and other sleights of hand Big Technocrats are pushing to drive a Mack truck through so-called text-and-data-mining loopholes. Their playbook is simple: legitimize mass training on copyrighted works first, then dare creators to find buried settings or after-the-fact exclusions. That flips property rights on their head and is essentially a retroactive safe harbor.

As Mr. Grainge notes, large AI companies are pressuring governments to bless training on copyrighted material without owner consent or compensation. Those proposals amount to the unauthorized—and unlawful—exploitation of artists’ rights and property. By refusing to play along, Universal isn’t just protecting its catalog; it’s defending the baseline principle that creative labor isn’t scrapable.

Consent or Nothing

Let’s be honest: if AI labs were serious about licensing, we wouldn’t have come one narrow miss away from a U.S. state-law AI moratorium triggered by their own overreach. That wasn’t just a safe harbor for copyright infringement; it was a safe harbor for everything from privacy violations to consumer protection failures to child exploitation. That’s why it died 99–1 in the Senate, but it was a close-run thing.

And realize: that’s exactly what they want when left to their own devices, so to speak. The “opt-out” mirage, the scraping euphemisms, and the rush to codify TDM loopholes all point the same direction—avoid consent and avoid compensation. Universal’s position is the necessary counterweight: consent-first, provenance-audited, revenue sharing with artists and songwriters (and I would add non-featured artists and vocalists), or no deal. Anything less invites regulatory whiplash, a race to the bottom for human creativity, and a permanent breach of trust with artists and their estates.

Reading between the lines, Mr. Grainge has identified AI as both a compelling opportunity and an existential crisis. Let’s see if the others come with him and stare down the bad guys.

And YouTube is monetizing Sora videos

[This post first appeared on Artist Rights Watch]

@DanMilmo: Top UK artists urge Starmer to protect their work on eve of Trump visit

UK artists including Paul McCartney, Kate Bush and Elton John urged Prime Minister Keir Starmer to protect creators before a UK-US tech pact tied to President Donald Trump’s visit. In a letter, they accuse Labour of blocking transparency rules that would force AI firms to disclose training data, and warn that proposals enabling training on copyrighted works without permission could let an artist’s life’s work be stolen. Citing human rights instruments like the International Covenant on Economic, Social and Cultural Rights, the Berne Convention and the European Convention on Human Rights, they frame the issue as a human-rights breach. The peer Beeban Kidron criticised US-heavy working groups. The government says no decision has been made and promises a report by March.

Read the post on The Guardian

Senator Josh @HawleyMO Throws Down on Big Tech’s Copyright Theft

 I believe Americans should have the ability to defend their human data, and their rights to that data, against the largest copyright theft in the history of the world. 

Millions of Americans have spent the past two decades speaking and engaging online. Many of you here today have online profiles and writings and creative productions that you care deeply about. And rightly so. It’s your work. It’s you.

What if I told you that AI models have already been trained on enough copyrighted works to fill the Library of Congress 22 times over? For me, that makes it very simple: We need a legal mechanism that allows Americans to freely defend those creations. I say let’s empower human beings by protecting the very human data they create. Assign property rights to specific forms of data, create legal liability for the companies who use that data and, finally, fully repeal Section 230. Open the courtroom doors. Let the people sue those who take their rights, including those who do it using AI.

Third, we must add sensible guardrails to the emergent AI economy and hold concentrated economic power to account. These giant companies have made no secret of their ambitions to radically reshape our economic life. So, we ought to require transparency and reporting each time they replace a working man with a machine.

And the government should inspect all of these frontier AI systems, so we can better understand what the tech titans plan to build and deploy. 

Ultimately, when it comes to guardrails, protecting our children should be our lodestar. You may have seen recently how Meta green-lit its own chatbots to have sensual conversations with children—yes, you heard me right. Meta’s own internal documents permitted lurid conversations that no parent would ever contemplate. And most tragically, ChatGPT recently encouraged a troubled teenager to commit suicide—even providing detailed instructions on how to do it.

We absolutely must require and enforce rigorous technical standards to bar inappropriate or harmful interactions with minors. And we should think seriously about age verification for chatbots and agents. We don’t let kids drive or drink or do a thousand other harmful things. The same standards should apply to AI.

Fourth and finally, while Congress gets its act together to do all of this, we can’t kneecap our state governments from moving first. Some of you may have seen that there was a major effort in Congress to ban states from regulating AI for 10 years—and a whole decade is an eternity when it comes to AI development and deployment. This terrible policy was nearly adopted in the reconciliation bill this summer, and it could have thrown out strong anti-porn and child online safety laws, to name a few. Think about that: conservatives out to destroy the very concept of federalism that they cherish … all in the name of Big Tech. Well, we killed it on the Senate floor. And we ought to make sure that bad idea stays dead.

We’ve faced technological disruption before—and we’ve acted to make technology serve us, the people. Powered flight changed travel forever, but you can’t land a plane on your driveway. Splitting the atom fundamentally changed our view of physics, but nobody expects to run a personal reactor in their basement. The internet completely recast communication and media, but YouTube will still take down your video if you violate a copyright. By the same token, we can—and we should—demand that AI empower Americans, not destroy their rights . . . or their jobs . . . or their lives.

Don’t forget tomorrow—Artist Rights Roundtable on AI and Copyright at American University in Washington DC

Artist Rights Roundtable on AI and Copyright: 
Coffee with Humans and the Machines     

Join the Artist Rights Institute (ARI) and American University’s Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

This roundtable is particularly timely because both the Bartz and Kadrey rulings expose gaps in author consent, provenance, and fair licensing, underscoring an urgent need for policy, identifiers, and enforceable frameworks to protect creators.

 🗓️ Date: September 18, 2025
🕗 Time: 8:00 a.m. – 12:00 noon
📍 Location: Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016

🎟️ Admission: Free and open to the public. Registration required at Eventbrite. Seating is limited.

🅿️ Parking map is available here. Pay-As-You-Go parking is available in hourly or daily increments ($2/hour, or $16/day) using the pay stations in the elevator lobbies of Katzen Arts Center, East Campus Surface Lot, the Spring Valley Building, Washington College of Law, and the School of International Service

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by KOGOD Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing:

Speakers:

  • Dr. Moiya McTier, Senior Advisor, Human Artistry Campaign
  • Ryan Lehnning, Assistant General Counsel, International at SoundExchange
  • The Chatbot

Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation

  • Speaker: Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:

  • Erin McAnally, Executive Director, Songwriters of North America
  • Jen Jacobsen, Executive Director, Artist Rights Alliance
  • Josh Hurvitz, Partner, NVG and Head of Advocacy for A2IM
  • Kevin Amer, Chief Legal Officer, The Authors Guild

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, KOGOD School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

  • Speaker: George York, SVP, International Policy, Recording Industry Association of America

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch this space and visit Eventbrite for updates and speaker announcements.

Senator Cruz Joins the States on AI Safe Harbor Collapse— And the Moratorium Quietly Slinks Away

Silicon Valley Loses Bigly

In a symbolic vote that spoke volumes, the U.S. Senate decisively voted 99–1 to strike the toxic AI safe harbor moratorium from the vote-a-rama for the One Big Beautiful Bill Act (HR 1), according to the AP. Senator Ted Cruz, who had actively supported the measure, actually joined the bipartisan chorus in stripping it — an acknowledgment that the proposal had become politically radioactive.

To recap, the AI moratorium would have barred states from regulating artificial intelligence for up to 10 years, tying access to broadband and infrastructure funds to compliance. It triggered an immediate backlash: Republican governors, state attorneys general, parents’ groups, civil liberties organizations, and even independent artists condemned it as a blatant handout to Big Tech with yet another rent-seeking safe harbor.

Marsha Blackburn and Maria Cantwell to the Rescue

Credit where it’s due: Senator Marsha Blackburn (R–TN) was the linchpin in the Senate, working across the aisle with Sen. Maria Cantwell to introduce the amendment that finally killed the provision. Blackburn’s credibility with conservative and tech-wary voters gave other Republicans room to move — and once the tide turned, it became a rout. Her leadership was key to sending the signal to her Republican colleagues–including Senator Cruz–that this wasn’t a hill to die on.

Top Cover from President Trump?

But stripping the moratorium wasn’t just a Senate rebellion. This kind of reversal in must-pass, triple-whipped legislation doesn’t happen without top cover from the White House and, in all likelihood, from Donald Trump himself. The provision was never a “last stand” issue in the art of the deal. Trump can plausibly say he gave industry players like Masayoshi Son, Meta, and Google a shot, but resistance from the states made it politically untenable. It was frankly a poorly handled provision from the start, and there’s little evidence Trump was ever personally invested in it. He made no public statements about it at all, which is why I always felt it was an improbable deal point: it was intended as a bargaining chip all along, whether the staff knew it or not.

One thing is for damn sure: it ain’t coming back in the House, which is another way you know you can stick a fork in it, despite the churlish shillery types sulking off the pitch.

One final note on the process: it’s unfortunate that the Senate Parliamentarian made such a questionable call when she let the AI moratorium survive the Byrd Bath, despite it being so obviously not germane to reconciliation. The provision never should have made it this far in the first place — but oh well. Fortunately, the Senate stepped in and did what the process should have done from the outset.

Now what?

It ain’t over til it’s over. The battle with Silicon Valley may be over on this issue today, but that’s not to say the war is over. The AI moratorium may reappear, reshaped and rebranded, in future bills. But its defeat in the Senate is important. It proves that state-level resistance can still shape federal tech policy, even when it’s buried in omnibus legislation and wrapped in national security rhetoric.

Cruz’s shift wasn’t a betrayal of party leadership — it was a recognition that even in Washington, federalism still matters. And this time, the states — and our champion Marsha — held the line. 

Brava, madam. Well played.

This post first appeared on MusicTechPolicy

@human_artistry Campaign Letter Opposing AI Safe Harbor Moratorium in Big Beautiful Bill HR 1

Artist Rights Institute is pleased to support the Human Artistry Campaign’s letter to Senators Thune and Schumer opposing the AI safe harbor in the One Big Beautiful Bill Act, and joins its fellow signatories in opposition.

Opposition is rooted in the most justifiable reasons:

By wiping dozens of state laws off the books, the bill would undermine public safety, creators’ rights, and the ability of local communities to protect themselves from a fast-moving technology that is being rushed to the market by tech giants. State laws protecting people from invasive AI deepfakes would be at risk, along with a range of proposals designed to eliminate discrimination and bias in AI. For artists and creators, preempting state laws requiring Big Tech to disclose the material used to train their models—often to create new products that compete with the human creators’ originals—would make it difficult or impossible to prove this theft has occurred. As the Copyright Office’s Fair Use Report recently reaffirmed, many forms of this conduct are illegal under longstanding federal law.

The moratorium is so vague that it is unclear whether it would actually prohibit states from addressing construction of data centers or the vast drain on the power grid to implement AI placement in states. This is a safe harbor on steroids and terrible for all creators.

Martina McBride’s Plea for Artist Protection from AI Met with a Congressional Sleight of Hand

This week, country music icon Martina McBride poured her heart out before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Her testimony in support of the bipartisan NO FAKES Act was raw, earnest, and courageous. Speaking as an artist, a mother, and a citizen, she described the emotional weight of having her voice—one that has offered solace and strength to survivors of domestic violence—exploited by AI systems to peddle messages she would never endorse. Her words echoed through the chamber with moral clarity: “Give me the tools to stop that kind of betrayal.”

The NO FAKES Act aims to create a federal property right over an individual’s name, image, and likeness (NIL), offering victims of AI-generated deepfakes a meaningful path to justice. The bill has drawn bipartisan support and commendation from artists’ rights advocates, child protection organizations, and even some technology companies. It represents a sincere attempt to preserve human dignity in the age of machine mimicry.

And yet, while McBride testified in defense of authenticity and integrity, Congress was quietly advancing legislation that was the opposite.

At the same time her testimony was being heard, lawmakers were moving forward with a massive federal budget package ironically called the “Big Beautiful Bill” that includes an AI safe harbor moratorium—a sweeping provision that would strip states of their ability to enforce NIL protections against AI through existing state laws. The so-called “AI Safe Harbor” effectively immunizes AI developers from accountability under most current state-level right-of-publicity and privacy laws, not to mention wire fraud, wrongful death and RICO. It does so in the name of “innovation,” but at the cost of silencing local democratic safeguards and creators of all categories.

Worse yet, the scoring of the “Big Beautiful Bill” rests on economic assumptions of productivity gains from AI ripping off all creators, from grandma’s baby pictures to rock stars.

The irony is devastating. Martina McBride’s call for justice was sincere and impassioned. But the AI moratorium hanging over the very same legislative session would make it harder—perhaps impossible—for states like Florida, Tennessee, Texas, or California to shield their citizens from the very abuses McBride described. The same Congress that applauded her courage is in the process of handing Silicon Valley a blank check to continue the vulpine lust of its voracious scraping and synthetic exploitation of human expression.

This is not just hypocrisy; it’s the personification of Washington’s two-faced AI policy. On one hand, ceremonial hearings and soaring rhetoric. On the other, buried provisions that serve the interests of the most powerful AI platforms in the world. Oh, and the AI platforms also wrote themselves into the pork fest for $500,000,000 of taxpayers’ money (more likely debt) for “AI modernization,” whatever that is, at a time when the bond market is about to dump all over the U.S. economy. Just another day in the Imperial City.

Let’s be honest: the AI safe harbor moratorium isn’t about protecting innovation. It’s about protecting industrialized theft. It codifies a grotesque and morbid fascination with digital kleptomania—a fetish for the unearned, the repackaged, the replicated.

In that sense, the AI Safe Harbor doesn’t just threaten artists. It perfectly embodies the twisted ethos of modern Silicon Valley, a worldview most grotesquely illustrated by the image of a drooling Sam Altman—the would-be godfather of generative AI—salivating over the limitless data he believes he has a divine right to mine.

Martina McBride called for justice. Congress listened politely. And then threw her to the wolves.

They have a chance to make it right—starting with stripping the radical and extreme safe harbor from the “Big Beautiful Bill.”

[This post first appeared on MusicTechPolicy]

Massive State Opposition to AI Regulation Safe Harbor Moratorium in the ‘One Big Beautiful Bill Act’ (H. Con. Res. 14)

As of this morning, the loathsome AI Safe Harbor is still in the ‘One Big Beautiful Bill Act’ (H. Con. Res. 14) as far as we can tell. A “manager’s amendment” is supposed to be released later this morning, which will include any changes made in last night’s late session. Watch this space at House Rules for when that manager’s amendment is released. You will be looking for Section 43201, around page 292. As far as we can tell, the safe harbor is still in there, which is not surprising, as it is coming from David Sacks, the Silicon Valley Viceroy and White House Crypto Czar.

But the safe harbor—a 10-year moratorium on state and local regulation of artificial intelligence (AI)—has ignited significant opposition from a broad coalition of state officials, lawmakers, and organizations. As you would expect, opponents argue that the measure would undermine existing protections and hinder the ability of states to address AI-related harms. Despite the fact that it was snuck through in the middle of the night, opposition is increasing all the time, but we cannot relent for a moment: Silicon Valley is at it again and wants to hang an AI safe harbor in the lobbyists’ hunting lodge, right next to the DMCA, Section 230 and Title I of the Music Modernization Act.

State-Level Opposition

A bipartisan group of 40 state attorneys general has voiced strong opposition to the AI regulation moratorium through NAAG. In a letter to Congress, they emphasized that the moratorium would disrupt hundreds of measures, both those under consideration by state legislatures and those already passed in states led by Republicans and Democrats. They argue that, in the absence of comprehensive federal AI legislation, states have been at the forefront of protecting consumers.

Organizational Opposition

Beyond state officials, a coalition of 141 organizations—including unions, advocacy groups, non-profits, and academic institutions—has expressed alarm over the proposed safe harbor moratorium. In a letter to Congressional leaders, they warned that the provision could lead to unfettered abuse of AI technologies, undermining critical safeguards such as civil rights protections, privacy standards, and accountability for harmful AI applications.

Notable organizations opposing the moratorium include:

  • Alphabet Workers Union
  • Amazon Employees for Climate Justice
  • Mozilla
  • American Federation of Teachers
  • Center for Democracy and Technology 

We don’t ask you to pick up the phone and call your Congressional representative very often, but this is one of those times. If you’re not sure who your representatives are, you can go to the House of Representatives website here and use the lookup box in the upper right-hand corner.

You can also go to the 5calls webpage opposing the safe harbor moratorium which is here. They have developed some collateral and talking points for you to draw on if you like.

This is a big damn deal. Let’s get it done. We’ve all done it before, let’s do it again.

@Artist Rights Institute Newsletter 3/24/25

The Artist Rights Institute’s news digest

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short, confidential SurveyMonkey survey here! Thanks!

Songwriters and Union Organizing

RICO and Criminal Copyright Infringement

AI Piracy

@alexreisner: Search LibGen, the Pirated-Books Database That Meta Used to Train AI (Alex Reisner/The Atlantic)

OpenAI and Google’s Dark New Campaign to Dismantle Artists’ Protections (Brian Merchant/Blood in the Machine)

Alden newspapers slam OpenAI, Google’s AI proposals (Sara Fischer/Axios)

AI Litigation

French Publishers and Authors Sue Meta over Copyright Works Used in AI Training (Kelvin Chan/AP)

DC Circuit Affirms Human Authorship Required for Copyright (David Newhoff/The Illusion of More)

OpenAI Asks White House for Relief From State AI Rules (Jackie Davalos/Bloomberg)

Microsoft faces FTC antitrust probe over AI and licensing practices (Prasanth Aby Thomas/Computer World)

Google and its Confederate AI Platforms Want Retroactive Absolution for AI Training Wrapped in the American Flag (Chris Castle/MusicTechPolicy)

AI and Human Rights

Human Rights and AI Opt Out (Chris Castle/MusicTechPolicy)