@johnpgatta Interviews @davidclowery in Jambands

David Lowery sits down with John Patrick Gatta at Jambands for a wide-ranging conversation that threads 40 years of Camper Van Beethoven and Cracker through the stories behind David's three-disc release Fathers, Sons and Brothers and how artists survive the modern music economy. Songwriter rights, road-tested bands, and why records still matter. Read it here.

David Lowery toured this year with a mix of shows celebrating the 40th anniversary of Camper Van Beethoven's debut, Telephone Free Landslide Victory, duo and band gigs with Cracker, and solo dates promoting his recently released Fathers, Sons and Brothers.

Fathers, Sons and Brothers, a 28-track musical memoir of Lowery's personal life, explores childhood memories, drugs at Disneyland and broken relationships. Of course, it also tackles his lengthy career as an indie and major-label artist whose catalog highlights include the alt-rock classic "Take the Skinheads Bowling" and the commercial breakthroughs "Teen Angst" and "Low." The album works both as a selection of songs that encapsulates much of his musical history (folk, country and rock) and as an illuminating narrative that relates the ups, downs, tenacity, reflection and resolve of more than four decades as a musician.

9/18/25: Save the Date! @ArtistRights Institute and American University Kogod School to host Artist Rights Roundtable on AI and Copyright Sept. 18 in Washington, DC

🎙️ Artist Rights Roundtable on AI and Copyright:  Coffee with Humans and the Machines            

📍 Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016 | 🗓️ September 18, 2025 | 🕗 8:00 a.m. – 12:00 noon

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

Join the Artist Rights Institute (ARI) and Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing

Speakers:
Dr. Moiya McTier, Human Artistry Campaign
Ryan Lehnning, Assistant General Counsel, International at SoundExchange
The Chatbot
Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation, Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:
Erin McAnally, Executive Director, Songwriters of North America
Dr. Richard James Burgess, CEO, A2IM
Dr. David C. Lowery, Terry College of Business, University of Georgia

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, Kogod School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch Eventbrite, this space and visit ArtistRightsInstitute.org for updates and speaker announcements.

хулиган ("hooligan"): Love to Anastasia Dyudyaeva and Alexander Dotsenko

In July 2024, a military court in Saint Petersburg convicted Russian artists Anastasia Dyudyaeva and her husband Alexander Dotsenko on charges of “public calls for terrorism” after they placed anti-war messages—some in Ukrainian, one reading “Putin to the gallows”—on napkins or postcards in a Lenta supermarket. Dyudyaeva received a 3½-year sentence; Dotsenko, three years. They denied wrongdoing, asserting their creative expression was mischaracterized. Their home, which had hosted anti-war exhibitions, was searched, and they were added to Russia’s registry of “terrorists and extremists.” 

Read about it in the Art Newspaper

@ArtistRights Newsletter 8/18/25: From Jimmy Lai’s show trial in Hong Kong to the redesignation fight over the Mechanical Licensing Collective, this week’s stories spotlight artist rights, ticketing reform, AI scraping, and SoundExchange’s battle with SiriusXM.

Save the Date! September 18 Artist Rights Roundtable in Washington produced by Artist Rights Institute/American University Kogod Business & Entertainment Program. Details at this link!

Artist Rights

JIMMY LAI’S ORDEAL: A SHOW TRIAL THAT SHOULD SHAME THE WORLD (MusicTechPolicy/Chris Castle)

Redesignation of the Mechanical Licensing Collective

Ex Parte Review of the MLC by the Digital Licensee Coordinator

Ticketing

StubHub Updates IPO Filing Showing Growing Losses Despite Revenue Gain (MusicBusinessWorldwide/Mandy Dalugdug)

Lewis Capaldi Concert Becomes Latest Ground Zero for Ticket Scalpers (Digital Music News/Ashley King)

Who’s Really Fighting for Fans? Chris Castle’s Comment in the DOJ/FTC Ticketing Consultation (Artist Rights Watch)

Artificial Intelligence

MUSIC PUBLISHERS ALLEGE ANTHROPIC USED BITTORRENT TO PIRATE COPYRIGHTED LYRICS (MusicBusinessWorldwide/Daniel Tencer)

AI Weather Image Piracy Puts Storm Chasers, All Americans at Risk (Washington Times/Brandon Clemen)

TikTok After Xi’s Qiushi Article: Why China’s Security Laws Are the Whole Ballgame (MusicTechSolutions/Chris Castle)

Reddit Will Block the Internet Archive (to stop AI scraping) (The Verge/Jay Peters) 

SHILLING LIKE IT’S 1999: ARS, ANTHROPIC, AND THE INTERNET OF OTHER PEOPLE’S THINGS (MusicTechPolicy/Chris Castle)

SoundExchange v. SiriusXM

SOUNDEXCHANGE SLAMS JUDGE’S RULING IN SIRIUSXM CASE AS ‘ENTIRELY WRONG ON THE LAW’ (MusicBusinessWorldwide/Mandy Dalugdug)

PINKERTONS REDUX: ANTI-LABOR NEW YORK COURT ATTEMPTS TO CUT OFF LITIGATION BY SOUNDEXCHANGE AGAINST SIRIUS/PANDORA (MusicTechPolicy/Chris Castle)

@ArtistRights Newsletter 8/11/25: @DavidCLowery on Streaming, SX v. Sirius, AI the Cult and “Dual Use AI” Culture is Upstream of War

Save the Date! September 18 Artist Rights Roundtable in Washington produced by Artist Rights Institute/American University Kogod Business & Entertainment Program. Details at this link!


Streaming Economics

@nickgillespie and @davidclowery: Streaming is a Regulated Monopoly (Reason Magazine/Nick Gillespie)

Spotify’s Royalty Threshold Is Conscious Parallelism Reshaping the Music Business—But Not in a Good Way (The Trichordist/Chris Castle)

SoundExchange v. SiriusXM

Did the Court Misread Congress? Rethinking SoundExchange v. SiriusXM Through the Lens of Legislative Design

Copyright Terminations Vetter v. Resnik

Controversial ruling on US termination right fulfills the intention of Congress, say creators (Complete Music Update/Chris Cooke)

Amicus Brief Supporting Cyril Vetter of Artist Rights Institute (David Lowery, Nikki Rowling), Blake Morgan, Abby North, and Angela Rose White (Chris Castle)

Cult of the AI Singularity

AI Frontier Labs and the Singularity as a Modern Prophetic Cult (MusicTech.Solutions/Chris Castle)

AI Czar David Sacks’ Shortcut to Nowhere: How the Seven Deadly Sins Keep Him From Licensing Solutions

Dual Use AI

America Isn’t Ready for the Wars of the Future (Foreign Affairs/GEN Mark Milley and Eric Schmidt)

Spotify CEO Daniel Ek Named Chairman of Military AI Firm Following 600M Investment (Playy Magazine)

Eric Schmidt Is Building the Perfect AI War-Fighting Machine (Wired/Will Knight)

Souls for Sale: The Long Con Behind AI Weapons and Cultural Complicity (MusicTechPolicy/Chris Castle)

Eric Schmidt-led panel pushing for new defense experimentation unit to drive military adoption of generative AI (Defense Scoop/Brandi Vincent)

The Lords of War: Daniel Ek, Eric Schmidt and the Militarization of Tech (MusicTechPolicy/Chris Castle)

Who’s Really Fighting for Fans? Georgia Music Partners Comment in the DOJ/FTC Ticketing Consultation

The Department of Justice and Federal Trade Commission were directed by President Trump to conduct an investigation into ticket scalping pursuant to Executive Order 14254 “Combating Unfair Practices in the Live Entertainment Market.”

This led directly to both agencies inviting public comments on the state of the live event ticketing market, an industry riddled with speculation, opacity, and middlemen who seem to make money without ever attending a show. Over 4,000 artists, fans, economists, state attorneys general, and industry veterans weighed in. And the record reveals something important, particularly regarding resellers: there is a rising consensus that resellers are engaged in some genuinely shady practices designed for one purpose, to extract as much money as possible from fans and artists without regard to the damage it does to the entire artist-fan relationship.

Today we’re posting Georgia Music Partners’ comment, which highlights how unchecked secondary ticketing practices (particularly speculative ticket listings, bot-driven price inflation, deceptive branding, and the resale of restricted tickets) are systematically dismantling the live music ecosystem. These practices strip artists of control, mislead fans, and commoditize the artist-fan relationship for the sole benefit of resellers. The comment urges the DOJ and FTC to treat these behaviors as unfair and deceptive trade practices, enforce the BOTS Act, and distinguish reseller abuse from the separate issues posed by the Live Nation case, emphasizing that the artist’s intent and trust with fans must be protected.

FTC Cracks Down on Ticket Scalpers in Major BOTS Act Enforcement

The wheels of justice turn slowly, but they do turn.

In what appears to be a response to NITO’s complaint filed last year with the FTC, pressure from Senator Marsha Blackburn and President Trump’s executive order on ticket scalping, Hypebot reports that the Federal Trade Commission is going after large-scale ticket resellers for violating the Better Online Ticket Sales (BOTS) Act (authored by Senators Blackburn and Richard Blumenthal).

The enforcement action seeks tens of millions of dollars in damages and signals that federal regulators are finally prepared to tackle the systemic abuse of automated tools and deceptive practices in the live event ticketing market.

According to Hypebot, the FTC alleges that the companies used bots and a web of pseudonymous accounts to bypass ticket purchasing limits—snagging prime seats to high-demand concerts and reselling them at inflated prices on platforms like StubHub and SeatGeek. The case represents one of the largest BOTS Act enforcement efforts to date. 

“The FTC is finally doing what artists, managers, and fans have been asking for: holding scalpers accountable,” said Randy Nichols, artist manager for Underoath and advocate for ticketing reform. “This sends a message to bad actors that the days of unchecked resale are numbered.”

As Hypebot reports, this enforcement may just be the beginning. The case is likely to test the limits of the BOTS Act and could set new precedent for what counts as deceptive or unfair conduct in the ticket resale market—even when bots aren’t directly involved.

Read the full story via Hypebot: FTC Goes After Ticket Scalpers, Seeks Tens of Millions in Damages

United for Artists’ Rights: Amicus Briefs Filed in Vetter v. Resnik Support Global Copyright Termination for Songwriters and Authors: Brief by the National Society of Entertainment & Arts Lawyers

In Vetter v. Resnik, songwriter Cyril Vetter won at trial in Baton Rouge, allowing him to recover worldwide rights in his song “Double Shot of My Baby’s Love” after serving his 35-year termination notice on his former publisher, Resnik Music Group. The publisher appealed. The Fifth Circuit Court of Appeals will hear the case and is currently weighing whether U.S. copyright termination rights include “foreign” territories, a question that strikes at the heart of artists’ ability to reclaim their work worldwide (whatever “foreign” means).

Cyril’s attorney Tim Kappel explains the case if you need an explainer:

An astonishing number of friend-of-the-court briefs were filed by songwriter groups. We’re going to post them all, and today’s brief is by the National Society of Entertainment & Arts Lawyers. The brief argues that the Copyright Act’s plain text and legislative history support a unified, comprehensive termination right that revokes all rights granted in a prior transfer, regardless of geographic scope. It rejects the notion of a “multiverse” of national copyrights, citing international treaties like the Berne Convention and longstanding U.S. policy favoring artist protection. Limiting terminations to U.S. territory, the brief warns, would gut the statute’s purpose, harm artists, and impose impossible burdens on creators seeking to reclaim their rights.

We believe the answer on appeal must be yes: affirm the District Court’s well-reasoned decision. Congress gave creators and their heirs the right to a “second bite at the apple” to regain control of their work after decades, and that promise means little if global rights are excluded. The outcome of this case could either reaffirm that promise or open the door for multinational publishers to sidestep it entirely.

That’s why we’re sharing friend of the court briefs from across the creative communities. Each one brings a different perspective—but all defend the principle that artists deserve a real, global right to take back what’s theirs, because as Chris said, Congress did not give authors a second bite at half the apple.

Read the brief below, watch this space for case updates.

The AI Safe Harbor is an Unconstitutional Violation of State Protections for Families and Consumers

By Chris Castle

The AI safe harbor slathered onto President Trump’s “big beautiful bill” is layered with intended consequences. Not the least of these is the effect on TikTok.

One of the more debased aspects of TikTok (and that’s a long list) is its promotion, through its AI-driven algorithms, of clearly risky behavior to its pre-teen audience. Don’t forget: TikTok’s algorithm is not just any algorithm. The Chinese government claims it as a state secret. And when the CCP claims a state secret they ain’t playing. So keep that in mind.

One particularly depraved example of these algorithm-promoted trends was called the “Blackout Challenge.” The TikTok “blackout challenge” has been linked to the deaths of at least 20 children over an 18-month period. One of those children was Nylah Anderson. Nylah’s mom sued TikTok on her daughter’s behalf because that’s what moms do. If you’ve ever had someone you love hang themselves, you will no doubt agree that you live with that memory every day of your life. This unspeakable tragedy will haunt Nylah’s mother forever.

Even lowlifes like TikTok should have settled this case and it should never have gotten in front of a judge. But no–TikTok tried to get out of it because Section 230. Yes, that’s right–they killed a child and tried to get out of the responsibility. The District Court ruled that the loathsome Section 230 applied and Nylah’s mom could not pursue her claims. She appealed.

The Third Circuit Court of Appeals reversed and remanded, concluding that “Section 230 immunizes only information ‘provided by another’” and that “here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”

So…a new federal proposal threatens to slam the door on these legal efforts: the 10-year artificial intelligence (AI) safe harbor recently introduced in the House Energy and Commerce Committee. If enacted, this safe harbor would preempt state regulation of AI systems—including the very algorithms and recommendation engines that Nylah’s mom and other families are trying to challenge. 

Section 43201(c) of the “Big Beautiful Bill” includes pork, Silicon Valley style, entitled the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” which states:

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

The “Initiative” also appropriates “$500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems….” So not only did Big Tech write themselves a safe harbor for their crimes, they also are taking $500,000,000 of corporate welfare to underwrite it courtesy of the very taxpayers they are screwing over. Step aside Sophocles, when it comes to tragic flaws, Oedipus Rex got nothing on these characters.

Platforms like TikTok, YouTube, and Instagram use AI-based recommendation engines to personalize and optimize content delivery. These systems decide what users see based on a combination of behavioral data, engagement metrics, and predictive algorithms. While effective for keeping users engaged, these AI systems have been implicated in promoting harmful content—ranging from pro-suicide material to dangerous ‘challenges’ that have directly resulted in injury or death.

Families across the country have sued these companies, alleging that the AI-driven algorithms knowingly promoted hazardous content to vulnerable users. In many cases, the claims are based on state consumer protection laws, negligence, or wrongful death statutes. Plaintiffs argue that the companies failed in their duty to design safe systems or to warn users about foreseeable dangers. These cases are not attacks on free speech or user-generated content; they focus specifically on the design and operation of proprietary AI systems. 

If you don’t think that these platforms are depraved enough to actually raise safe harbor defenses, just remember what they did to Nylah’s mom–raised the exceptionally depraved Section 230 as a defense to their responsibility in the death of a child.

The AI safe harbor would prohibit states from enacting or enforcing any law that regulates AI systems or automated decision-making technologies for the next 10 years. This sweeping language could easily be interpreted to cover civil liability statutes that hold platforms accountable for the harms their AI systems cause. This is actually even worse than the vile Section 230–the safe harbor would be expressly targeting actual state laws. Maybe after all the appeals, say 20 years from now, we’ll find out that the AI safe harbor is unconstitutional commandeering, but do we really want to wait to find out?

Because these wrongful death lawsuits rely on arguments that an AI algorithm caused harm—either through its design or its predictive content delivery—the companies could argue that the moratorium shields them from liability. They might claim that the state tort claims are an attempt to “regulate” AI in violation of the federal preemption clause. If courts agree, these lawsuits could be dismissed before ever reaching a jury.

This would create a stunning form of corporate immunity even beyond the many current safe harbors for Big Tech: tech companies would be free to deploy powerful, profit-driven AI systems with no accountability in state courts, even when those systems lead directly to preventable deaths. 

The safe harbor would be especially devastating for families who have already suffered tragic losses and are seeking justice. These families rely on state wrongful death laws to hold powerful platforms accountable. Removing that path to accountability would not only deny them closure, but also prevent public scrutiny of the algorithms at the center of these tragedies.

States have long held the authority to define standards of care and impose civil liability for harms caused by negligence or defective products. The moratorium undermines this traditional role by barring states from addressing the specific risks posed by AI systems, even in the context of established tort principles. It would represent one of the broadest federal preemptions of state law in modern history—in the absence of federal regulation of AI platforms.

• In Pennsylvania, the parents of a teenager who committed suicide alleged that Instagram’s algorithmic feed trapped their child in a cycle of depressive content.
• Multiple lawsuits filed under consumer protection and negligence statutes in states like New Jersey, Florida, and Texas seek to hold platforms accountable for designing algorithms that systematically prioritize engagement over safety.
• TikTok faced multidistrict class-action litigation claiming it illegally harvested user information from its in-app browser.

All such suits could be in jeopardy if courts interpret the AI moratorium as barring state laws that impose liability on algorithm-driven systems, and you can bet that Big Tech platforms will litigate the bejeezus out of the issue. Even if the moratorium was not intended to block wrongful death and other state-law claims, its language may be broad enough to do so in practice, especially when leveraged by well-funded corporate legal teams.

Even supporters of federal AI regulation should be alarmed by the breadth of this safe harbor. It is not a thoughtful national framework based on a full record, but a shoot-from-the-hip blanket prohibition on consumer protection and civil justice. By freezing all state-level responses to AI harms, the AI safe harbor would consolidate power in the hands of federal bureaucrats and corporate lobbyists, leaving ordinary Americans with fewer options for recourse, to say nothing of its clear violation of state police powers and the 10th Amendment.

To add insult to injury, the use of reconciliation to pass this policy—without full hearings, bipartisan debate, or robust public input—only underscores the cynical nature of the strategy. It has nothing to do with the budget aside from the fact that Big Tech is snarfing down $500 million of taxpayer money for no good reason just so they can argue their land grab is “germane” to shoehorn it into reconciliation under the Byrd Rule. It’s a maneuver designed to avoid scrutiny and silence dissent, not to foster a responsible or democratic conversation about how AI should be governed.

At its core, the AI safe harbor is not about fostering innovation—it is about shielding tech platforms from accountability just like the DMCA, Section 230 and Title I of the Music Modernization Act. By preempting state regulation, it could block families from using long-standing wrongful death statutes to seek justice for the loss of their children and laws protecting Americans from other harms. It undermines the sovereignty of states, the dignity of grieving families, and the public’s ability to scrutinize the AI systems that increasingly shape our lives. 

Congress must reject this overreach, and the American public must remain vigilant in demanding transparency, accountability, and justice. The Initiative must go.

[A version of this post first appeared on MusicTechPolicy]