Gene Simmons and the American Music Fairness Act

Gene Simmons is receiving the Kennedy Center Honors with KISS this Sunday, and he is also bringing his voice to the fair-pay-for-radio-play campaign to pass the American Music Fairness Act (AMFA).

Gene will testify on AMFA next week before the Senate Judiciary Committee. He won’t just be speaking as a member of KISS or as one of the most recognizable performers in American music. He’ll be showing up as a witness to something far more universal: the decades-long exploitation of recording artists whose work powers an entire broadcast industry that has never paid them a dime. Watch the hearing on December 9th at 3pm ET at this link, when Gene testifies alongside SoundExchange CEO Mike Huppe.

As Gene argued in his Washington Post op-ed, the AM/FM radio loophole is not a quirky relic; it is legalized taking. Everyone else pays for music: streaming services, satellite radio, social-media platforms, retail, fitness, gaming. Everyone except big broadcast radio, which generated more than $13 billion in advertising revenue last year while paying zero to the performers whose recordings attract those audiences.

Gene is testifying not just for legacy acts, but for the “thousands of present and future American recording artists” who, like KISS in the early days, were told to work hard, build a fan base, and just be grateful for airplay. As he might put it, artists were expected to “rock and roll all night” — but never expect to be paid for it on the radio.

And when artists asked for change, they were told to wait. They “keep on shoutin’,” decade after decade, but Congress never listened.

That’s why this hearing matters. It’s the first Senate-level engagement with the issue since 2009. The ground is shifting. Gene Simmons’ presence signals something bigger: artists are done pretending that “exposure” is a form of compensation.

AMFA would finally require AM/FM broadcasters to pay for the sound recordings they exploit, the same way every other democratic nation already does. It would give session musicians, backup vocalists, and countless independent artists a revenue stream they should have had all along. It would even unlock international royalties currently withheld from American performers because the U.S. refuses reciprocity.

And let’s be honest: Gene Simmons is an ideal messenger. He built KISS from nothing, understands the grind, and knows exactly how many hands touch a recording before it reaches the airwaves. His testimony exposes the truth: radio isn’t “free promotion” — it’s a commercial business built on someone else’s work.

Simmons once paraphrased the music economy as a game where artists are expected to give endlessly while massive corporations act like the only “god of thunder,” taking everything and returning nothing. AMFA is an overdue correction to that imbalance.

When Gene sits down before the Senate Judiciary Committee, he won’t be wearing the makeup. He won’t need to. He’ll be carrying something far more powerful: the voices of artists who’ve waited 80 years for Congress to finally turn the volume up on fairness.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

Don’t Let Congress Reward the Stations That Don’t Pay Artists

As we’ve been posting about for years—alongside Blake Morgan and the #IRespectMusic movement that you guys have been so good about supporting—there’s still a glaring failure at the heart of U.S. copyright law: performing artists and session musicians receive no royalty for AM/FM radio airplay. Every other developed country (and practically every other country) compensates performers for broadcast use, yet the United States continues to exempt terrestrial radio from paying the people who record the music.

Now Congress is preparing to pass the AM Radio for Every Vehicle Act, a massive government intervention that would literally install the instrument of unfairness into every new car at significant cost to consumers. It’s a breathtaking example of how far the National Association of Broadcasters (NAB) will go to preserve its century-old free ride—by lobbying for public subsidies while refusing to pay artists a penny. This isn’t public service; it’s policy cruelty dressed up as nostalgia.

Hundreds of artists have already spoken out in a letter to Congress demanding fairness through the American Music Fairness Act (AMFA). Their action matters—and yours does too.

👉 Here’s what you can do:

Don’t let Washington hard-wire injustice into every dashboard. Demand that Congress fix the problem before it funds the next generation of unfairness.

Dear Speaker Johnson, Leader Jeffries, Leader Thune, and Leader Schumer:

Earlier this year, we wrote urging that you take action on the American Music Fairness Act (S.253/H.R.791), legislation that will require that AM/FM radio companies start paying artists for their music. We are grateful for your attention to ensuring America’s recording artists are finally paid for use of our work.

As you may know, some members of Congress are currently seeking to pass legislation that will require that every new vehicle manufactured in the United States come pre-installed with AM radio. The passage of the AM Radio for Every Vehicle Act (S.315/H.R.979) would mark another major windfall for the corporate radio industry that makes $13.6 billion each year in advertising revenue while refusing to compensate the performers whose songs play 240 million times each year on AM radio stations. Every year, recording artists lose out on hundreds of millions of dollars in royalties in the U.S. and abroad because of this hundred-year-old loophole.

This is wrong. In the United States of America, every person deserves to be paid for the use of their work. But because of the power held by giant radio corporations in Washington, artists, both big and small, continue to be overlooked, even as every other music delivery platform, including streaming services and satellite radio, pays both the songwriter and performer.

We are asking today that you insist that any legislation that includes the AM Radio for Every Vehicle Act also include the American Music Fairness Act. We do not oppose terrestrial radio. In fact, we appreciate the role that radio has played in our careers and within society, but the 100-year-old argument of promotion that radio continues to hide behind does not ring true in 2025.

When you save the radio industry by mandating its technology remain in cars, we ask that you save the musician too and allow us to be paid fairly when our music is played.

Thank you again for your consideration of this much-needed legislation.

Sincerely,

Barry Manilow

Boyz II Men

Carole King

Cyndi Lauper

Debbie Gibson

Def Leppard

Gloria Gaynor

Kool and the Gang

Lee Ann Womack

Lil Jon

Mike Love

Nancy Wilson

Peter Frampton

Sammy Hagar

Smokey Robinson

TLC

“You don’t need to train on novels and pop songs to get the benefits of AI in science” @ednewtonrex


You Don’t Need to Steal Art to Cure Cancer: Why Ed Newton-Rex Is Right About AI and Copyright

Ed Newton-Rex said the quiet part out loud: you don’t need to scrape the world’s creative works to build AI that saves lives. Or even to beat the Chinese Communist Party.

It’s a myth that AI “has to” ingest novels and pop lyrics to learn language. Models acquire syntax, semantics, and pragmatics from any large, diverse corpus of natural language. That includes transcribed speech, forums, technical manuals, government documents, Wikipedia, scientific papers, and licensed conversational data. Speech systems learn from audio–text pairs, not necessarily fiction; text models learn distributional patterns wherever language appears. Of course, literary works can enrich style, but they’re not necessary for competence: instruction tuning, dialogue data, and domain corpora yield fluent models without raiding copyrighted art. In short, creative literature is optional seasoning, not the core ingredient for teaching machines to “speak.”

Google’s new cancer-therapy paper proves the point. Their model wasn’t trained on novels, lyrics, or paintings. It was trained responsibly on scientific data. And yet it achieved real, measurable progress in biomedical research. That simple fact dismantles one of Silicon Valley’s most persistent myths: that copyright is somehow an obstacle to innovation.

You don’t need to train on Joni Mitchell to discover a new gene pathway. You don’t need to ingest John Coltrane to find a drug target. AI used for science can thrive within the guardrails of copyright because science itself already has its own open-data ecosystems—peer-reviewed, licensed, and transparent.

Companies like Anthropic and Meta insisting that “fair use” covers mass ingestion of stolen creative works aren’t curing diseases; they’re training entertainment engines. They’re ripping off artists’ livelihoods to make commercial chatbots, story generators, and synthetic-voice platforms designed to compete against the very creators whose works they exploited. That’s not innovation—it’s market capture through appropriation.

They do it for reasons old as time—they do it for the money.

The ethical divide is clear:

  • AI for discovery builds on licensed scientific data.
  • AI for mimicry plunders culture to sell imitation.

We should celebrate the first and regulate the second. Upholding copyright and requiring provenance disclosures doesn’t hinder progress—it restores integrity. The same society that applauds AI in medical breakthroughs can also insist that creative industries remain human-centered and law-abiding. Civil-military fusion doesn’t imply that there are only two ingredients in the gumbo of life.

If Google can advance cancer research without stealing art, so can everyone else; nor can Google justify keeping different rules for the entertainment side of its business or investment portfolio. The choice isn’t between curing cancer and protecting artists—it’s between honesty and opportunism. The repeated whinging of AI labs about “because China” would be a lot more believable if they used their political influence to get the CCP to release Hong Kong activist Jimmy Lai from stir. We can join Jimmy and his amazingly brave son Sebastian and say “because China,” too. #FreeJimmyLai

Sir Lucian Grainge Just Drew the Brightest Line Yet on AI

by Chris Castle

Universal Music Group’s CEO Sir Lucian Grainge has put the industry on notice in an internal memo to Universal employees: UMG will not license any AI model that uses an artist’s voice—or generates new songs incorporating an artist’s existing songs—without that artist’s consent. This isn’t just a slogan; it’s a licensing policy, an advocacy position, and deal-making leverage all rolled into one. After the Sora 2 disaster, I have to believe that OpenAI is at the top of the list.

Here’s the memo:

Dear Colleagues,

I am writing today to update you on the progress that we are making on our efforts to take advantage of the developing commercial opportunities presented by Gen AI technology for the benefit of all our artists and songwriters.

I want to address three specific topics:

  1. Responsible Gen AI company and product agreements;
  2. How our artists can participate; and
  3. What we are doing to encourage responsible AI public policies.

UMG is playing a pioneering role in fostering AI’s enormous potential. While our progress is significant, the speed at which this technology is developing makes it important that you are all continually updated on our efforts and well-versed on the strategy and approach.

The foundation of what we’re doing is the belief that together, we can foster a healthy commercial AI ecosystem in which artists, songwriters, music companies and technology companies can all flourish together.

NEW AGREEMENTS

To explore the varied opportunities and determine the best approaches, we have been working with AI developers to put their ideas to the test. In fact, we were the first company to enter into AI-related agreements with companies ranging from major platforms such as YouTube, TikTok and Meta to emerging entrepreneurs such as BandLab, Soundlabs, and more. Both creatively and commercially our portfolio of AI partnerships continues to expand.

Very recently, Universal Music Japan announced an agreement with KDDI, a leading Japanese telecommunications company, to develop new music experiences for fans and artists using Gen AI. And we are very actively engaged with nearly a dozen different companies on significant new products and service plans that hold promise for a dramatic expansion of the AI music landscape. Further, we’re seeing other related advancements. While just scratching the surface of AI’s enormous potential, Spotify’s recent integration with ChatGPT offers a pathway to move fluidly from query and discovery to enjoyment of music—and all within a monetized ecosystem.

HOW OUR ARTISTS CAN PARTICIPATE

Based on what we’ve done with our AI partners to date, and the new discussions that are underway, we can unequivocally say that AI has the potential to deliver creative tools that will enable us to connect our artists with their fans in new ways—and with advanced capability on a scale we’ve never encountered.

Further, I believe that Agentic AI, which dynamically employs complex reasoning and adaptation, has the potential to revolutionize how fans interact with and discover music.

I know that we will successfully navigate as well as seize these opportunities and that these new products could constitute a significant source of new future revenue for artists and songwriters.

We will be actively engaged in discussing all of these developments with the entire creative community.

While some of the biggest opportunities will require further exploration, we are excited by the compelling AI models we’re seeing emerge.

We will only consider advancing AI products based on models that are trained responsibly. That is why we have entered into agreements with AI developers such as ProRata and KLAY, among others, and are in discussions with numerous additional like-minded companies whose products provide accurate attribution and tools which empower and compensate artists—products that both protect music and enhance its monetization.

And to be clear—and this is very important—we will NOT license any model that uses an artist’s voice or generates new songs which incorporate an artist’s existing songs without their consent.

New AI products will be joined by many other similar ones that will soon be coming to market, and we have established teams throughout UMG that will be working with artists and their representatives to bring these opportunities directly to them.

RESPONSIBLE PUBLIC POLICIES COVERING AI

We remain acutely aware of the fact that large and powerful AI companies are pressuring governments around the world to legitimize the training of AI technology on copyrighted material without owner consent or compensation, among other proposals.

To be clear: all these misguided proposals amount to nothing more than the unauthorized (and, we believe, illegal) exploitation of the rights and property of creative artists.

In addition, we are acting in the marketplace to see our partners embrace responsible and ethical AI policies and we’re proud of the progress being made there. For example, having accurately predicted the rapid rise of AI “slop” on streaming platforms, in 2023 we introduced Artist-Centric principles to combat what is essentially platform pollution. Since then, many of our platform partners have made significant progress in putting in place measures to address the diversion of royalties, infringement and fraud—all to the benefit of the entire music ecosystem.

We commend our partners for taking action to address this urgent issue, consistent with our Artist-Centric approach. Further, we recently announced an agreement with SoundPatrol, a new company led by Stanford scientists that employs patented technology to protect artists’ work from unauthorized use in AI music generators.

We are confident that by displaying our willingness as a community to embrace those commercial AI models which value and enhance human artistry, we are demonstrating that market-based solutions promoting innovation are the answer.

LEADING THE WAY FORWARD

So, as we work to assure safeguards for artists, we will help lead the way forward, which is why we are exploring and finding innovative ways to use this revolutionary technology to create new commercial opportunities for artists and songwriters while simultaneously aiding and protecting human creativity.

I’m very excited about the products we’re seeing and what the future holds. I will update you all further on our progress.

Lucian

Mr. Grainge’s position reframes the conversation from “Can we scrape?” to “How do we get consent and compensate?” That shift matters because AI that clones voices or reconstitutes catalog works is not a neutral utility—it’s a market participant competing with human creators and the rights they rely on.

If everything is “transformative” then nothing is protected—and that guts not just copyright, but artists’ name–image–likeness (NIL), right of publicity and, in some jurisdictions, moral rights. A scrape-first, justify-later posture erases ownership, antagonizes creators living and dead, and makes catalogs unpriceable. Why would Universal—or any other rightsholder—partner with a company that treats works and identity as free training fuel? What’s great about Lucian’s statement is that he’s planting a flag: the industry leader will not do business with bad actors, regardless of the consequences.

What This Means in Practice

  1. Consent as the gate. Voice clones and “new songs” derived from existing songs require affirmative artist approval—full stop.
  2. Provenance as the standard. AI firms that want first-party deals must prove lawful ingestion, audited datasets, and enforceable guardrails against impersonation.
  3. Aligned incentives. Where consent exists, there’s room for discovery tools, creator utilities, and new revenue streams; where it doesn’t, there’s no deal.

Watermarks and “AI-generated” labels don’t cure false endorsement, right-of-publicity violations, or market substitution. Platforms that design, market, or profit from celebrity emulation without consent aren’t innovating—they’re externalizing legal and ethical risk onto artists.

Moral Rights: Why This Resonates Globally

Universal’s consent-first stance will resonate in moral-rights jurisdictions where authors and performers hold inalienable rights of attribution and integrity (e.g., France’s droit moral, Germany’s Urheberpersönlichkeitsrecht). AI voice clones and “sound-alike” outputs can misattribute authorship, distort a creator’s artistic identity, or subject their work to derogatory treatment—classic moral-rights harms. Because many countries recognize post-mortem moral rights and performers’ neighboring rights, the “no consent, no license” rule is not just good governance—it’s internationally compatible rights stewardship.

Industry Leadership vs. the “Opt-Out” Mirage

It is absolutely critical that the industry leader actively opposes the absurd “opt-out” gambit and other sleights of hand Big Technocrats are pushing to drive a Mack truck through so-called text-and-data-mining loopholes. Their playbook is simple: legitimize mass training on copyrighted works first, then dare creators to find buried settings or after-the-fact exclusions. That flips property rights on their head and is essentially a retroactive safe harbor.

As Mr. Grainge notes, large AI companies are pressuring governments to bless training on copyrighted material without owner consent or compensation. Those proposals amount to the unauthorized—and unlawful—exploitation of artists’ rights and property. By refusing to play along, Universal isn’t just protecting its catalog; it’s defending the baseline principle that creative labor isn’t scrapable.

Consent or Nothing

Let’s be honest: if AI labs were serious about licensing, we wouldn’t have come one narrow miss away from a U.S. state-law AI moratorium triggered by their own overreach. That wasn’t just a safe harbor for copyright infringement; it was a safe harbor for everything from privacy to consumer protection to child exploitation. That’s why it died 99-1 in the Senate, but it was a close-run thing.

And realize, that’s exactly what they want when they are left to their own devices, so to speak. The “opt-out” mirage, the scraping euphemisms, and the rush to codify TDM loopholes all point the same direction—avoid consent and avoid compensation. Universal’s position is the necessary counterweight: consent-first, provenance-audited, revenue-sharing with artists and songwriters (and I would add nonfeatured artists and vocalists) or no deal. Anything less invites regulatory whiplash, a race-to-the-bottom for human creativity, and a permanent breach of trust with artists and their estates.

Reading between the lines, Mr. Grainge has identified AI as both a compelling opportunity and an existential crisis. Let’s see if the others come with him and stare down the bad guys.

And YouTube is monetizing Sora videos

[This post first appeared on Artist Rights Watch]

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember: property rights are innovation, too—just ask Hernando de Soto (The Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As the Artist Rights Institute and many others have demonstrated, “robots.txt” disclaimers buried in obscure places are not enough. Artists need a legally enforceable system that platforms must respect and that regulators can audit—not another worthless DMCA-style notice and notice and notice and notice and notice and maybe-takedown system.

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing—pirate libraries are clearly front and center in AI litigation, which has largely turned into piracy cases because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. On top of it, the truth of the case is that the man Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

@johnpgatta Interviews @davidclowery in Jambands

David Lowery sits down with John Patrick Gatta at Jambands for a wide-ranging conversation that threads 40 years of Camper Van Beethoven and Cracker through the stories behind David’s three-disc release Fathers, Sons and Brothers and how artists survive the modern music economy. Songwriter rights, road-tested bands, and why records still matter. Read it here.

David Lowery toured this year with a mix of shows celebrating the 40th anniversary of Camper Van Beethoven’s debut, Telephone Free Landslide Victory, duo and band gigs with Cracker, as well as solo dates promoting his recently released Fathers, Sons and Brothers.

Fathers, Sons and Brothers, the 28-track musical memoir of Lowery’s personal life, explores childhood memories, drugs at Disneyland and broken relationships. Of course, it also tackles his lengthy career as an indie and major-label artist whose catalog highlights include the alt-rock classic “Take the Skinheads Bowling” and the commercial breakthroughs “Teen Angst” and “Low.” The album works as a selection of songs that encapsulate much of his musical history—folk, country and rock—as well as an illuminating narrative that relates the ups, downs, tenacity, reflection and resolve of more than four decades as a musician.

9/18/25: Save the Date! @ArtistRights Institute and American University Kogod School to host Artist Rights Roundtable on AI and Copyright Sept. 18 in Washington, DC

🎙️ Artist Rights Roundtable on AI and Copyright:  Coffee with Humans and the Machines            

📍 Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016 | 🗓️ September 18, 2025 | 🕗 8:00 a.m. – 12:00 noon

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

Join the Artist Rights Institute (ARI) and Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing:

Speakers:
Dr. Moiya McTier, Human Artistry Campaign
Ryan Lehnning, Assistant General Counsel, International, SoundExchange
The Chatbot
Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation, Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:
Erin McAnally, Executive Director, Songwriters of North America
Dr. Richard James Burgess, CEO, A2IM
Dr. David C. Lowery, Terry College of Business, University of Georgia

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, Kogod School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch Eventbrite, this space and visit ArtistRightsInstitute.org for updates and speaker announcements.

@ArtistRights Newsletter 8/18/25: From Jimmy Lai’s show trial in Hong Kong to the redesignation fight over the Mechanical Licensing Collective, this week’s stories spotlight artist rights, ticketing reform, AI scraping, and SoundExchange’s battle with SiriusXM.

Save the Date! September 18 Artist Rights Roundtable in Washington produced by Artist Rights Institute/American University Kogod Business & Entertainment Program. Details at this link!

Artist Rights

JIMMY LAI’S ORDEAL: A SHOW TRIAL THAT SHOULD SHAME THE WORLD (MusicTechPolicy/Chris Castle)

Redesignation of the Mechanical Licensing Collective

Ex Parte Review of the MLC by the Digital Licensee Coordinator

Ticketing

StubHub Updates IPO Filing Showing Growing Losses Despite Revenue Gain (MusicBusinessWorldwide/Mandy Dalugdug)

Lewis Capaldi Concert Becomes Latest Ground Zero for Ticket Scalpers (Digital Music News/Ashley King)

Who’s Really Fighting for Fans? Chris Castle’s Comment in the DOJ/FTC Ticketing Consultation (Artist Rights Watch)

Artificial Intelligence

MUSIC PUBLISHERS ALLEGE ANTHROPIC USED BITTORRENT TO PIRATE COPYRIGHTED LYRICS (MusicBusinessWorldwide/Daniel Tencer)

AI Weather Image Piracy Puts Storm Chasers, All Americans at Risk (Washington Times/Brandon Clemen)

TikTok After Xi’s Qiushi Article: Why China’s Security Laws Are the Whole Ballgame (MusicTechSolutions/Chris Castle)

Reddit Will Block the Internet Archive (to stop AI scraping) (The Verge/Jay Peters) 

SHILLING LIKE IT’S 1999: ARS, ANTHROPIC, AND THE INTERNET OF OTHER PEOPLE’S THINGS (MusicTechPolicy/Chris Castle)

SoundExchange v. SiriusXM

SOUNDEXCHANGE SLAMS JUDGE’S RULING IN SIRIUSXM CASE AS ‘ENTIRELY WRONG ON THE LAW’ (MusicBusinessWorldwide/Mandy Dalugdug)

PINKERTONS REDUX: ANTI-LABOR NEW YORK COURT ATTEMPTS TO CUT OFF LITIGATION BY SOUNDEXCHANGE AGAINST SIRIUS/PANDORA (MusicTechPolicy/Chris Castle)

@RickBeato on AI Artists

Is it a thing or is it disco? Our fave Rick Beato has a cautionary tale in this must-watch video: AI can mimic but not truly create art. As generative tools become more prevalent, he urges thoughtful curation, artist-centered policies, and an emphasis on emotionally rich, human-driven creativity—also known as creativity. h/t Your Morning Coffee, our favorite podcast.