@human_artistry Campaign Letter Opposing AI Safe Harbor Moratorium in Big Beautiful Bill HR 1

Artist Rights Institute is pleased to support the Human Artistry Campaign’s letter to Senators Thune and Schumer opposing the AI safe harbor in the One Big Beautiful Bill Act. ARI joins the letter’s other signatories.

The opposition rests on the soundest of grounds:

By wiping dozens of state laws off the books, the bill would undermine public safety, creators’ rights, and the ability of local communities to protect themselves from a fast-moving technology being rushed to market by tech giants. State laws protecting people from invasive AI deepfakes would be at risk, along with a range of proposals designed to eliminate discrimination and bias in AI. For artists and creators, preempting state laws that require Big Tech to disclose the material used to train their models, often to create new products that compete with the human creators’ originals, would make it difficult or impossible to prove this theft has occurred. As the Copyright Office’s Fair Use Report recently reaffirmed, many forms of this conduct are illegal under longstanding federal law.

The moratorium is so vague that it is unclear whether it would actually prohibit states from addressing the construction of data centers or the vast drain that AI deployment places on their power grids. This is a safe harbor on steroids and terrible for all creators.

@JayGilbert Discusses Record Release Marketing Strategies

Our friend and longtime music marketing consultant Jay Gilbert sits down with Chris Castle to discuss release planning and strategies on Part 3 of the Artist Rights Institute’s Record Release Checklist. You may have seen Jay on podcasts like Your Morning Coffee, Behind the Setlist (with Glenn Peoples) and Michael Brandvold’s Music Biz Weekly.

Jay discusses his excellent Release Planner and has made a copy available for download on the Artist Rights Institute Artist Financial Education vertical. You can also listen to the episode on The Artist Rights Watch podcast.

Don’t miss Parts 1 and 2, covering the legal and business issues of getting your record ready, available on the Financial Education Vertical here and here, plus the checklist for YouTube videos here.

Martina McBride’s Plea for Artist Protection from AI Met with a Congressional Sleight of Hand

This week, country music icon Martina McBride poured her heart out before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Her testimony in support of the bipartisan NO FAKES Act was raw, earnest, and courageous. Speaking as an artist, a mother, and a citizen, she described the emotional weight of having her voice—one that has offered solace and strength to survivors of domestic violence—exploited by AI systems to peddle messages she would never endorse. Her words echoed through the chamber with moral clarity: “Give me the tools to stop that kind of betrayal.”

The NO FAKES Act aims to create a federal property right over an individual’s name, image, and likeness (NIL), offering victims of AI-generated deepfakes a meaningful path to justice. The bill has drawn bipartisan support and commendation from artists’ rights advocates, child protection organizations, and even some technology companies. It represents a sincere attempt to preserve human dignity in the age of machine mimicry.

And yet, while McBride testified in defense of authenticity and integrity, Congress was quietly advancing legislation that would do the opposite.

At the same time her testimony was being heard, lawmakers were moving forward with a massive federal budget package ironically called the “Big Beautiful Bill” that includes an AI safe harbor moratorium—a sweeping provision that would strip states of their ability to enforce NIL protections against AI through existing state laws. The so-called “AI Safe Harbor” effectively immunizes AI developers from accountability under most current state-level right-of-publicity and privacy laws, not to mention wire fraud, wrongful death and RICO statutes. It does so in the name of “innovation,” but at the cost of sweeping away local democratic safeguards and silencing creators of all categories.

Worse yet, the economic scoring of the “Big Beautiful Bill” rests on assumptions about productivity gains from AI ripping off every creator, from grandma’s baby pictures to rock stars.

The irony is devastating. Martina McBride’s call for justice was sincere and impassioned. But the AI moratorium hanging over the very same legislative session would make it harder—perhaps impossible—for states like Florida, Tennessee, Texas, or California to shield their citizens from the very abuses McBride described. The same Congress that applauded her courage is in the process of handing Silicon Valley a blank check to continue the vulpine lust of its voracious scraping and synthetic exploitation of human expression.

This is not just hypocrisy; it’s the personification of Washington’s two-faced AI policy. On one hand, ceremonial hearings and soaring rhetoric. On the other, buried provisions that serve the interests of the most powerful AI platforms in the world. Oh, and the AI platforms also wrote themselves into the pork fest for $500,000,000 of taxpayers’ money (more likely debt) for “AI modernization,” whatever that is, at a time when the bond market is about to dump all over the U.S. economy. Just another day in the Imperial City.

Let’s be honest: the AI safe harbor moratorium isn’t about protecting innovation. It’s about protecting industrialized theft. It codifies a grotesque and morbid fascination with digital kleptomania—a fetish for the unearned, the repackaged, the replicated.

In that sense, the AI Safe Harbor doesn’t just threaten artists. It perfectly embodies the twisted ethos of modern Silicon Valley, a worldview most grotesquely illustrated by the image of a drooling Sam Altman—the would-be godfather of generative AI—salivating over the limitless data he believes he has a divine right to mine.

Martina McBride called for justice. Congress listened politely. And then threw her to the wolves.

They have a chance to make it right—starting with stripping the radical and extreme safe harbor from the “Big Beautiful Bill.”

[This post first appeared on MusicTechPolicy]

Massive State Opposition to AI Regulation Safe Harbor Moratorium in the ‘One Big Beautiful Bill Act’ (H.R. 1)

As of this morning, the loathsome AI safe harbor is still in the ‘One Big Beautiful Bill Act’ (H.R. 1) as far as we can tell. A “manager’s amendment” is supposed to be released this morning that will include any changes made in last night’s late-night session. Watch this space at House Rules for when that manager’s amendment is released. You will be looking for Section 43201 around page 292. As far as we can tell, the safe harbor is still in there, which is not surprising since it is coming from David Sacks, the Silicon Valley viceroy and White House crypto czar.

But the safe harbor—a 10-year moratorium on state and local regulation of artificial intelligence (AI)—has ignited significant opposition from a broad coalition of state officials, lawmakers, and organizations. As you would expect, opponents argue that the measure would undermine existing protections and hinder the ability of states to address AI-related harms. Despite the fact that it was snuck through in the middle of the night, opposition is increasing all the time, but we cannot relent for a moment: Silicon Valley is at it again and wants to hang an AI safe harbor in the lobbyists’ hunting lodge, right next to the DMCA, Section 230 and Title I of the Music Modernization Act.

State-Level Opposition

A bipartisan group of 40 state attorneys general has voiced strong opposition, through NAAG, to the AI regulation moratorium. In a letter to Congress, they emphasized that the moratorium would disrupt hundreds of measures, both those under consideration in state legislatures and those already passed in states led by Republicans and Democrats. They argue that, in the absence of comprehensive federal AI legislation, states have been at the forefront of protecting consumers.

Organizational Opposition

Beyond state officials, a coalition of 141 organizations—including unions, advocacy groups, non-profits, and academic institutions—has expressed alarm over the proposed safe harbor moratorium. In a letter to Congressional leaders, they warned that the provision could lead to unfettered abuse of AI technologies, undermining critical safeguards such as civil rights protections, privacy standards, and accountability for harmful AI applications.

Notable organizations opposing the moratorium include:

  • Alphabet Workers Union
  • Amazon Employees for Climate Justice
  • Mozilla
  • American Federation of Teachers
  • Center for Democracy and Technology 

We don’t ask you to pick up the phone and call your Congressional representative very often, but this is one of those times. If you’re not sure who your representatives are, you can go to the House of Representatives website here and use the representative lookup box in the upper right-hand corner.

You can also go to the 5calls webpage opposing the safe harbor moratorium, which is here. They have developed collateral and talking points for you to draw on if you like.

This is a big damn deal. Let’s get it done. We’ve all done it before, let’s do it again.

A Long-Overdue Win for Artists: CRB’s Web VI Rates Mark Major Step Toward Fairer @SoundExchange Streaming Royalties

In a landmark development for recording artists, the Copyright Royalty Board (CRB) has proposed new royalty rates under the “Web VI” proceeding, covering the period 2026 through 2030. These rates govern how much commercial broadcasters must pay for streaming sound recordings under the statutory licenses set forth in Sections 112 and 114 of the U.S. Copyright Act.

The new rates reflect the culmination of years of advocacy by SoundExchange and artist-rights groups and represent another meaningful upward adjustment in royalty rates. The Copyright Royalty Judges have adopted a meaningful schedule of increases—both in per-stream royalties and in the minimum annual fees webcasters must pay—designed to better align statutory streaming compensation with market realities. (Unlike streaming mechanical rates, webcasting royalties are a penny rate per play.)

A Clear Victory in Numbers

| Year | Web V Per-Performance Rate | Web VI Per-Performance Rate | % Increase Over Web V | Web V Min. Annual Fee | Web VI Min. Annual Fee / % Increase |
|------|----------------------------|-----------------------------|-----------------------|-----------------------|-------------------------------------|
| 2026 | $0.0021 | $0.0028 | +33.33% | $1,000 | $1,100 / +10.00% |
| 2027 | $0.0021 | $0.0029 | +38.10% | $1,000 | $1,150 / +15.00% |
| 2028 | $0.0021 | $0.0030 | +42.86% | $1,000 | $1,200 / +20.00% |
| 2029 | $0.0021 | $0.0031 | +47.62% | $1,000 | $1,250 / +25.00% |
| 2030 | $0.0021 | $0.0032 | +52.38% | $1,000 | $1,250 / +25.00% |
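
Because webcasting royalties are a per-play penny rate, the effect of the new rates scales directly with a station’s play counts. Here is a minimal sketch of the arithmetic; the 10 million annual performances are a hypothetical figure for illustration, while the rates come from the table above:

```python
# Sketch of the Web V vs. Web VI per-performance arithmetic.
# The play count is hypothetical; the rates are from the table above.

WEB_V_RATE = 0.0021  # dollars per performance (per play, per listener)

WEB_VI_RATES = {2026: 0.0028, 2027: 0.0029, 2028: 0.0030,
                2029: 0.0031, 2030: 0.0032}

plays = 10_000_000  # hypothetical annual performances for one webcaster

for year, rate in WEB_VI_RATES.items():
    old_total = plays * WEB_V_RATE          # what Web V would have paid
    new_total = plays * rate                # what Web VI will pay
    pct = (rate / WEB_V_RATE - 1) * 100     # increase over the Web V rate
    print(f"{year}: ${old_total:,.0f} -> ${new_total:,.0f} (+{pct:.2f}%)")
```

At that volume, the 2026 rate alone moves annual royalties from $21,000 to $28,000, and the gap widens each year through 2030.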

These increases aren’t merely arithmetic; they represent a philosophical shift in how creators are valued in the digital economy.

Structural Adjustments

Beyond the rate hikes, the CRB has adopted operational changes proposed by SoundExchange to royalty reporting and distribution. For example:

– The late fee for audit-based underpayments is reduced from 1.5% to 1.0% per month, capped at 75% of the total underpayment (the sketch after this list works through the cap arithmetic).
– Starting in 2027, webcasters using third-party vendors must obtain transmission and usage data or contractually guarantee its delivery.
– If a commercial broadcaster fails to file a report of use, SoundExchange may now distribute royalties based on proxy data.
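
To make the late-fee change concrete, here is a minimal sketch of the cap arithmetic. It assumes simple (non-compounding) monthly accrual, which is an assumption on our part; consult the final rule for the exact accrual method:

```python
# Sketch of the audit late-fee cap described above.
# Assumes simple (non-compounding) monthly accrual -- an assumption,
# not a statement of the regulation's exact method.

def late_fee(underpayment: float, months_late: int,
             monthly_rate: float = 0.01, cap_fraction: float = 0.75) -> float:
    """1.0% per month on an audit-based underpayment, capped at 75% of it."""
    accrued = underpayment * monthly_rate * months_late
    return min(accrued, underpayment * cap_fraction)

# A $50,000 underpayment: 12 months late accrues $6,000 (cap not reached),
# while 90 months late would accrue $45,000 uncapped but is capped at $37,500.
print(late_fee(50_000, 12))  # 6000.0
print(late_fee(50_000, 90))  # 37500.0
```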

These tweaks aim to close loopholes and increase reliability in royalty tracking—critical steps toward a more transparent system.

The Road Ahead

While the Web VI proposed rule will not become final until after June 16, 2025, it is already being hailed as a pivotal win by artist advocates. For too long, streaming-era economics have undervalued creators in favor of platforms and intermediaries.

This ruling is a recognition, long overdue and hard-won. When finalized, the clear and easy-to-understand Web VI rates and terms will not only ensure greater financial compensation for featured and nonfeatured recording artists and rights holders, but also reassert the foundational principle that creators should be paid fairly when their work fuels billion-dollar platforms.

For artists and musicians navigating a shifting industry, the law is catching up with the market it governs, and it is coming down on the side of the creators who drive the business.

Of course, don’t forget that some of these same broadcasters who pay under the statutory license for streaming pay nothing to artists for over-the-air terrestrial radio broadcasts of the exact same plays of the exact same records–another reason that Congress must finally pass the American Music Fairness Act. That’s why we support the #IRespectMusic campaign and the MusicFirst Coalition. Ask Congress to support musicians here.

The AI Safe Harbor is an Unconstitutional Violation of State Protections for Families and Consumers

By Chris Castle

The AI safe harbor slathered onto President Trump’s “big beautiful bill” is layered with intended consequences. Not the least of these is the effect on TikTok.

One of the more debased aspects of TikTok (and that’s a long list) is its promotion, through its AI-driven algorithms, of clearly risky behavior to its pre-teen audience. Don’t forget: TikTok’s algorithm is not just any algorithm. The Chinese government claims it as a state secret. And when the CCP claims a state secret they ain’t playing. So keep that in mind.

One of the particularly depraved stunts promoted by these algorithms was the “Blackout Challenge.” The TikTok “blackout challenge” has been linked to the deaths of at least 20 children over an 18-month period. One of the dead children was Nylah Anderson. Nylah’s mom sued TikTok on behalf of her daughter, because that’s what moms do. If you’ve ever had someone you love hang themselves, you will no doubt agree that you live with that memory every day of your life. This unspeakable tragedy will haunt Nylah’s mother forever.

Even lowlifes like TikTok should have settled this case and it should never have gotten in front of a judge. But no–TikTok tried to get out of it because Section 230. Yes, that’s right–they killed a child and tried to get out of the responsibility. The District Court ruled that the loathsome Section 230 applied and Nylah’s mom could not pursue her claims. She appealed.

The Third Circuit Court of Appeals reversed and remanded, concluding that “Section 230 immunizes only information ‘provided by another’” and that “here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”

So…a new federal proposal threatens to slam the door on these legal efforts: the 10-year artificial intelligence (AI) safe harbor recently introduced in the House Energy and Commerce Committee. If enacted, this safe harbor would preempt state regulation of AI systems—including the very algorithms and recommendation engines that Nylah’s mom and other families are trying to challenge. 

Section 43201(c) of the “Big Beautiful Bill” includes pork, Silicon Valley style, entitled the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” which states:

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

The “Initiative” also appropriates “$500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems….” So not only did Big Tech write themselves a safe harbor for their crimes, they are also taking $500,000,000 in corporate welfare to underwrite it, courtesy of the very taxpayers they are screwing over. Step aside, Sophocles: when it comes to tragic flaws, Oedipus Rex got nothing on these characters.

Platforms like TikTok, YouTube, and Instagram use AI-based recommendation engines to personalize and optimize content delivery. These systems decide what users see based on a combination of behavioral data, engagement metrics, and predictive algorithms. While effective for keeping users engaged, these AI systems have been implicated in promoting harmful content—ranging from pro-suicide material to dangerous ‘challenges’ that have directly resulted in injury or death.
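
To see why “optimizing for engagement” keeps surfacing harmful material, consider a toy ranking model. This is strictly illustrative and is not any platform’s actual algorithm; every item and score below is invented. The structural point is that when the ranking objective is predicted engagement alone, harmful-but-engaging content rises to the top, while even a simple safety penalty pushes it down:

```python
# Toy illustration only -- not TikTok's, YouTube's, or Instagram's real system.
# All items and scores are invented to show a structural point about objectives.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g., a model's predicted watch probability
    safety_risk: float           # e.g., a classifier's estimated harm score

catalog = [
    Item("cooking tutorial", predicted_engagement=0.40, safety_risk=0.01),
    Item("dance clip", predicted_engagement=0.55, safety_risk=0.02),
    Item("dangerous 'challenge'", predicted_engagement=0.70, safety_risk=0.95),
]

# Ranking on engagement alone: the risky item wins because it engages most.
engagement_only = sorted(catalog, key=lambda i: i.predicted_engagement,
                         reverse=True)

# The same items with the harm score subtracted: the risky item drops to last.
safety_weighted = sorted(catalog,
                         key=lambda i: i.predicted_engagement - i.safety_risk,
                         reverse=True)

print([i.title for i in engagement_only])  # risky item ranked first
print([i.title for i in safety_weighted])  # risky item ranked last
```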

Families across the country have sued these companies, alleging that the AI-driven algorithms knowingly promoted hazardous content to vulnerable users. In many cases, the claims are based on state consumer protection laws, negligence, or wrongful death statutes. Plaintiffs argue that the companies failed in their duty to design safe systems or to warn users about foreseeable dangers. These cases are not attacks on free speech or user-generated content; they focus specifically on the design and operation of proprietary AI systems. 

If you don’t think these platforms are depraved enough to actually raise safe harbor defenses, just remember what they did to Nylah’s mom: they raised the loathsome Section 230 as a defense to their responsibility in the death of a child.

The AI safe harbor would prohibit states from enacting or enforcing any law that regulates AI systems or automated decision-making technologies for the next 10 years. This sweeping language could easily be interpreted to cover civil liability statutes that hold platforms accountable for the harms their AI systems cause. This is actually even worse than the vile Section 230–the safe harbor would be expressly targeting actual state laws. Maybe after all the appeals, say 20 years from now, we’ll find out that the AI safe harbor is unconstitutional commandeering, but do we really want to wait to find out?

Because these wrongful death lawsuits rely on arguments that an AI algorithm caused harm—either through its design or its predictive content delivery—the companies could argue that the moratorium shields them from liability. They might claim that the state tort claims are an attempt to “regulate” AI in violation of the federal preemption clause. If courts agree, these lawsuits could be dismissed before ever reaching a jury.

This would create a stunning form of corporate immunity even beyond the many current safe harbors for Big Tech: tech companies would be free to deploy powerful, profit-driven AI systems with no accountability in state courts, even when those systems lead directly to preventable deaths. 

The safe harbor would be especially devastating for families who have already suffered tragic losses and are seeking justice. These families rely on state wrongful death laws to hold powerful platforms accountable. Removing that path to accountability would not only deny them closure, but also prevent public scrutiny of the algorithms at the center of these tragedies.

States have long held the authority to define standards of care and impose civil liability for harms caused by negligence or defective products. The moratorium undermines this traditional role by barring states from addressing the specific risks posed by AI systems, even in the context of established tort principles. It would represent one of the broadest federal preemptions of state law in modern history—in the absence of federal regulation of AI platforms.

• In Pennsylvania, the parents of a teenager who committed suicide alleged that Instagram’s algorithmic feed trapped their child in a cycle of depressive content.
• Multiple lawsuits filed under consumer protection and negligence statutes in states like New Jersey, Florida, and Texas seek to hold platforms accountable for designing algorithms that systematically prioritize engagement over safety.
• TikTok faced multidistrict class action litigation claiming it illegally harvested user information through its in-app browser.

All such suits could be in jeopardy if courts interpret the AI moratorium as barring state laws that impose liability on algorithm-driven systems, and you can bet that Big Tech platforms will litigate the bejeezus out of the issue. Even if the moratorium were not intended to block wrongful death and other state law claims, its language may be broad enough to do so in practice—especially when leveraged by well-funded corporate legal teams.

Even supporters of federal AI regulation should be alarmed by the breadth of this safe harbor. It is not a thoughtful national framework based on a full record, but a shoot-from-the-hip blanket prohibition on consumer protection and civil justice. By freezing all state-level responses to AI harms, the AI safe harbor would consolidate power in the hands of federal bureaucrats and corporate lobbyists, leaving ordinary Americans with fewer options for recourse, not to mention that it is a clear violation of state police powers and the 10th Amendment.

To add insult to injury, the use of reconciliation to pass this policy—without full hearings, bipartisan debate, or robust public input—only underscores the cynical nature of the strategy. It has nothing to do with the budget aside from the fact that Big Tech is snarfing down $500 million of taxpayer money for no good reason just so they can argue their land grab is “germane” to shoehorn it into reconciliation under the Byrd Rule. It’s a maneuver designed to avoid scrutiny and silence dissent, not to foster a responsible or democratic conversation about how AI should be governed.

At its core, the AI safe harbor is not about fostering innovation—it is about shielding tech platforms from accountability just like the DMCA, Section 230 and Title I of the Music Modernization Act. By preempting state regulation, it could block families from using long-standing wrongful death statutes to seek justice for the loss of their children and laws protecting Americans from other harms. It undermines the sovereignty of states, the dignity of grieving families, and the public’s ability to scrutinize the AI systems that increasingly shape our lives. 

Congress must reject this overreach, and the American public must remain vigilant in demanding transparency, accountability, and justice. The Initiative must go.

[A version of this post first appeared on MusicTechPolicy]

Big Beautiful AI Safe Harbor Asks: Does David Sacks Want to Make America Screwed Again?

In a dramatic turn of events, Congress is quietly advancing a 10-year federal safe harbor for Big Tech that would block any state and local regulation of artificial intelligence (AI). That safe harbor would give Big Tech another free ride on the backs of artists, authors, consumers, all of us and our children. It would stop cold the enforcement of state laws that protect consumers, like the $1.37 billion settlement Google reached with the State of Texas last week for grotesque violations of user privacy. The bill would go up on Big Tech’s trophy wall right next to the DMCA, Section 230 and Title I of the Music Modernization Act.

Introduced through the House Energy and Commerce Committee as part of a broader legislative package branded with President Trump’s economic agenda, this safe harbor would prevent states from enforcing or enacting any laws that address the development, deployment, or oversight of AI systems. While couched as a measure to ensure national uniformity and spur innovation, this proposal carries serious consequences for consumer protection, data privacy, and state sovereignty. It threatens to erase hard-fought state-level protections that shield Americans from exploitative child snooping, data scraping, biometric surveillance, and the unauthorized use of personal data and creative works of all categories. This post unpacks how we got here, why it matters, and what can still be done to stop it.

The Origins of the New Safe Harbor

The roots of the latest AI safe harbor lie in a growing push from Silicon Valley-aligned political operatives and venture capital influencers, many of whom fear a patchwork of state-level consumer protection laws that would stop AI data scraping. Among the most vocal proponents is tech entrepreneur turned White House crypto czar David Sacks, who has advocated for federal preemption of state AI rules in order to protect startup innovation from what he and others call regulatory overreach, otherwise known as the state “police powers” to protect state residents.

If my name were “Sacks,” I’d probably be a bit careful about doing things that could get me fired. His influence reportedly played a role in shaping the safe harbor’s timing and language, leveraging connections on Capitol Hill to attach it to a larger pro-business package of legislation. That package—marketed as a pillar of President Trump’s economic plan—was seen as a convenient vehicle to slip controversial provisions through with minimal scrutiny. You know, let’s sneak one past the boss.

Why This Is Dangerous for Consumers and Creators

The most immediate danger of the AI safe harbor is its preemption of state protections at a time when AI technologies are accelerating unchecked. States like California, Illinois, and Virginia have enacted—or are considering—laws to limit how companies use AI to analyze facial features, scan emails, extract audio, or mine creative works from social media. The AI mantra is that they can snarf down “publicly available data,” which essentially means everything that’s not behind a paywall. Because there is no federal AI regulation yet, state laws are crucial for protecting vulnerable populations, including children whose photos and personal information are shared by parents online. Under the proposed AI safe harbor, such protections would be nullified for 10 years–and don’t think it won’t be renewed.

Without the ability to regulate AI at the state level, we could see our biometric data harvested without consent. Social media posts—including photos of babies, families, and school events—could be scraped and used to train commercial AI systems without transparency or recourse. Creators across all copyright categories could find their works ingested into large language models and generative tools without license or attribution. Emails and other personal communications could be fed into AI systems for profiling, advertising, or predictive decision-making without oversight.

While federal regulation of AI is certainly coming, this AI safe harbor includes no immediate substitute. Instead, it freezes state-level regulatory development entirely for a decade—an eternity in the technology world—during which time the richest companies in the history of commerce can entrench themselves further with little fear of accountability. And it will likely provide a blueprint for federal legislation when it comes.

A Strategic Misstep for Trump’s Economic Agenda: Populism or Make America Screwed Again?

Ironically, attaching the moratorium to a legislative package meant to symbolize national renewal may ultimately undermine the very populist and sovereignty-based themes that President Trump has championed. By insulating Silicon Valley firms from state scrutiny, the legislation effectively prioritizes the interests of data-rich corporations over the privacy and rights of ordinary Americans. It hands a victory to unelected tech executives and undercuts the authority of governors, state legislators, and attorneys general who have stepped in where federal law has lagged behind. So much for that “laboratories of democracy” jazz.

Moreover, the manner in which the safe harbor was advanced legislatively—slipped into what is supposed to be a reconciliation bill without extensive hearings or stakeholder input—is classic pork and classic Beltway maneuvering in smoke-filled rooms. Critics from across the political spectrum have noted that such tactics cheapen the integrity of any legislation they touch and reflect the worst of Washington horse-trading.

What Can Be Done to Stop It

The AI safe harbor is not a done deal. There are several procedural and political tools available to block or remove it from the broader legislative package.

1. Committee Intervention – Lawmakers on the House Energy and Commerce Committee or the Rules Committee can offer amendments to strip or revise the moratorium before it proceeds to the full House.
2. House Floor Action – Opponents of the moratorium can offer floor amendments during debate to strike the provision. This requires coordination and support from members across both parties.
3. Senate “Byrd Rule” Challenge and Holds – Because reconciliation bills must be budget-related, the Senate Parliamentarian can strike the safe harbor if it is deemed “non-germane,” which it certainly seems to be. Senators can formally raise this challenge.
4. Conference Committee Negotiation – If different versions of the legislation pass the House and Senate, the final language will be hashed out in conference. There is still time to remove the moratorium here.
5. Public Advocacy – Artists, parents, consumer advocates, and especially state officials can apply pressure through media, petitions, and direct outreach to lawmakers, highlighting the harms and democratic risks of federal preemption. States may be able to sue to block the safe harbor as unconstitutional (see Chris’s discussion of constitutionality) but let’s not wait to get to that point. It must be said that any such litigation poses a threat to Trump’s “Big Beautiful Bill” courtesy of David Sacks.

Conclusion

The AI safe harbor may have been introduced quietly, but there’s a growing backlash from all corners. Its consequences would be anything but subtle. If enacted, it would freeze innovation in AI accountability, strip states of their ability to protect residents, and expose Americans to widespread digital exploitation. While marketed as pro-innovation, the safe harbor looks more like a gift to data-hungry monopolies at the expense of federalist principles and individual rights.

It’s not too late to act, but doing so requires vigilance, transparency, and an insistence that even the most powerful Big Tech oligarchs remain subject to democratic oversight.

@ArtistRights Institute Newsletter 5/5/25

The Artist Rights Watch podcast returns for another season! This week’s episode features Chris Castle on An Artist’s Guide to Record Releases Part 2. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short, confidential SurveyMonkey survey here! Thanks!

Texas Scalpers Bill of Rights Legislation

Can this Texas House bill help curb high ticket prices? Depends whom you ask (Marcheta Fornoff/KERA News)

Texas lawmakers target ticket fees and resale restrictions in new legislative push (Abigail Velez/CBS Austin)

@ArtistRights Institute opposes Texas Ticketing Legislation the “Scalpers’ Bill of Rights” (Chris Castle/Artist Rights Watch)

Streaming

Spotify’s Earnings Points To A “Catch Up” On Songwriter Royalties At CRB For Royalty Justice (Chris Castle/MusicTechPolicy)

Streaming Is Now Just As Crowded With Ads As Old School TV (Rick Porter/Hollywood Reporter)

Spotify Stock Falls On Music Streamer’s Mixed Q1 Report (Patrick Seitz/Investor’s Business Daily)

Economy

The Slowdown at Ports Is a Warning of Rough Economic Seas Ahead (Aarian Marshall/Wired)

What To Expect From Wednesday’s Federal Reserve Meeting (Diccon Hyatt/Investopedia)

Spotify Q1 2025 Earnings Call: Daniel Ek Talks Growth, Pricing, Superfan Products, And A Future Where The Platform Could Reach 1bn Subscribers (Murray Stassen/Music Business Worldwide)

Artist Rights and AI

SAG-AFTRA National Board Approves Commercials Contracts That Prevent AI, Digital Replicas Without Consent (JD Knapp/The Wrap)

Generative AI providers see first steps for EU code of practice on content labels (Luca Bertuzzi/Mlex)

A Judge Says Meta’s AI Copyright Case Is About ‘the Next Taylor Swift’ (Kate Knibbs/Wired)

Antitrust

Google faces September trial on ad tech antitrust remedies (David Shepardson and Jody Godoy/Reuters)

TikTok

Ireland fines TikTok 530 million euros for sending EU user data to China (Ryan Browne/CNBC)