Martina McBride’s Plea for Artist Protection from AI Met with a Congressional Sleight of Hand

This week, country music icon Martina McBride poured her heart out before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Her testimony in support of the bipartisan NO FAKES Act was raw, earnest, and courageous. Speaking as an artist, a mother, and a citizen, she described the emotional weight of having her voice—one that has offered solace and strength to survivors of domestic violence—exploited by AI systems to peddle messages she would never endorse. Her words echoed through the chamber with moral clarity: “Give me the tools to stop that kind of betrayal.”

The NO FAKES Act aims to create a federal property right over an individual’s name, image, and likeness (NIL), offering victims of AI-generated deepfakes a meaningful path to justice. The bill has drawn bipartisan support and commendation from artists’ rights advocates, child protection organizations, and even some technology companies. It represents a sincere attempt to preserve human dignity in the age of machine mimicry.

And yet, while McBride testified in defense of authenticity and integrity, Congress was quietly advancing legislation that would do the opposite.

At the same time her testimony was being heard, lawmakers were moving forward with a massive federal budget package, ironically called the “Big Beautiful Bill,” that includes an AI safe harbor moratorium: a sweeping provision that would strip states of their ability to enforce NIL protections against AI through existing state laws. The so-called “AI Safe Harbor” effectively immunizes AI developers from accountability under most current state-level right-of-publicity and privacy laws, not to mention wire fraud, wrongful death, and RICO claims. It does so in the name of “innovation,” but at the cost of silencing local democratic safeguards and creators of every kind.

Worse yet, the economic scoring of the “Big Beautiful Bill” rests on assumptions about productivity gains from AI ripping off every creator, from grandma’s baby pictures to rock stars.

The irony is devastating. Martina McBride’s call for justice was sincere and impassioned. But the AI moratorium hanging over the very same legislative session would make it harder—perhaps impossible—for states like Florida, Tennessee, Texas, or California to shield their citizens from the very abuses McBride described. The same Congress that applauded her courage is in the process of handing Silicon Valley a blank check to continue its vulpine, voracious scraping and synthetic exploitation of human expression.

This is not just hypocrisy; it’s the personification of Washington’s two-faced AI policy. On one hand, ceremonial hearings and soaring rhetoric. On the other, buried provisions that serve the interests of the most powerful AI platforms in the world. Oh, and the AI platforms also wrote themselves into the pork fest for $500,000,000 of taxpayers’ money (more likely debt) for “AI modernization,” whatever that is, at a time when the bond market is about to dump all over the U.S. economy. Just another day in the Imperial City.

Let’s be honest: the AI safe harbor moratorium isn’t about protecting innovation. It’s about protecting industrialized theft. It codifies a grotesque and morbid fascination with digital kleptomania—a fetish for the unearned, the repackaged, the replicated.

In that sense, the AI Safe Harbor doesn’t just threaten artists. It perfectly embodies the twisted ethos of modern Silicon Valley, a worldview most grotesquely illustrated by the image of a drooling Sam Altman—the would-be godfather of generative AI—salivating over the limitless data he believes he has a divine right to mine.

Martina McBride called for justice. Congress listened politely. And then threw her to the wolves.

They have a chance to make it right—starting with stripping the radical and extreme safe harbor from the “Big Beautiful Bill.”

[This post first appeared on MusicTechPolicy]

The AI Safe Harbor is an Unconstitutional Violation of State Protections for Families and Consumers

By Chris Castle

The AI safe harbor slavered onto President Trump’s “big beautiful bill” is layered with intended consequences. Not the least of these is the effect on TikTok.

One of the more debased aspects of TikTok (and that’s a long list) is its promotion, through its AI-driven algorithms, of clearly risky behavior to its pre-teen audience. Don’t forget: TikTok’s algorithm is not just any algorithm. The Chinese government claims it as a state secret. And when the CCP claims a state secret, they ain’t playing. So keep that in mind.

One particularly depraved example of this risky content was the “Blackout Challenge.” The TikTok “blackout challenge” has been linked to the deaths of at least 20 children over an 18-month period. One of those children was Nylah Anderson. Nylah’s mom sued TikTok on her daughter’s behalf, because that’s what moms do. If you’ve ever had someone you love hang themselves, you will no doubt agree that you live with that memory every day of your life. This unspeakable tragedy will haunt Nylah’s mother forever.

Even lowlifes like TikTok should have settled this case and it should never have gotten in front of a judge. But no–TikTok tried to get out of it because Section 230. Yes, that’s right–they killed a child and tried to get out of the responsibility. The District Court ruled that the loathsome Section 230 applied and Nylah’s mom could not pursue her claims. She appealed.

The Third Circuit Court of Appeals reversed and remanded, concluding that “Section 230 immunizes only information ‘provided by another’” and that “here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”

So…a new federal proposal threatens to slam the door on these legal efforts: the 10-year artificial intelligence (AI) safe harbor recently introduced in the House Energy and Commerce Committee. If enacted, this safe harbor would preempt state regulation of AI systems—including the very algorithms and recommendation engines that Nylah’s mom and other families are trying to challenge. 

Section 43201(c) of the “Big Beautiful Bill” includes pork, Silicon Valley style, entitled the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” which states:

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

The “Initiative” also appropriates “$500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems….” So not only did Big Tech write themselves a safe harbor for their crimes, they also are taking $500,000,000 of corporate welfare to underwrite it courtesy of the very taxpayers they are screwing over. Step aside Sophocles, when it comes to tragic flaws, Oedipus Rex got nothing on these characters.

Platforms like TikTok, YouTube, and Instagram use AI-based recommendation engines to personalize and optimize content delivery. These systems decide what users see based on a combination of behavioral data, engagement metrics, and predictive algorithms. While effective for keeping users engaged, these AI systems have been implicated in promoting harmful content—ranging from pro-suicide material to dangerous ‘challenges’ that have directly resulted in injury or death.

Families across the country have sued these companies, alleging that the AI-driven algorithms knowingly promoted hazardous content to vulnerable users. In many cases, the claims are based on state consumer protection laws, negligence, or wrongful death statutes. Plaintiffs argue that the companies failed in their duty to design safe systems or to warn users about foreseeable dangers. These cases are not attacks on free speech or user-generated content; they focus specifically on the design and operation of proprietary AI systems. 

If you don’t think that these platforms are depraved enough to actually raise safe harbor defenses, just remember what they did to Nylah’s mom–raised the exceptionally depraved Section 230 as a defense to their responsibility in the death of a child.

The AI safe harbor would prohibit states from enacting or enforcing any law that regulates AI systems or automated decision-making technologies for the next 10 years. This sweeping language could easily be interpreted to cover civil liability statutes that hold platforms accountable for the harms their AI systems cause. This is actually even worse than the vile Section 230–the safe harbor would be expressly targeting actual state laws. Maybe after all the appeals, say 20 years from now, we’ll find out that the AI safe harbor is unconstitutional commandeering, but do we really want to wait to find out?

Because these wrongful death lawsuits rely on arguments that an AI algorithm caused harm—either through its design or its predictive content delivery—the companies could argue that the moratorium shields them from liability. They might claim that the state tort claims are an attempt to “regulate” AI in violation of the federal preemption clause. If courts agree, these lawsuits could be dismissed before ever reaching a jury.

This would create a stunning form of corporate immunity even beyond the many current safe harbors for Big Tech: tech companies would be free to deploy powerful, profit-driven AI systems with no accountability in state courts, even when those systems lead directly to preventable deaths. 

The safe harbor would be especially devastating for families who have already suffered tragic losses and are seeking justice. These families rely on state wrongful death laws to hold powerful platforms accountable. Removing that path to accountability would not only deny them closure, but also prevent public scrutiny of the algorithms at the center of these tragedies.

States have long held the authority to define standards of care and impose civil liability for harms caused by negligence or defective products. The moratorium undermines this traditional role by barring states from addressing the specific risks posed by AI systems, even in the context of established tort principles. It would represent one of the broadest federal preemptions of state law in modern history—in the absence of federal regulation of AI platforms.

• In Pennsylvania, the parents of a teenager who committed suicide alleged that Instagram’s algorithmic feed trapped their child in a cycle of depressive content.
• Multiple lawsuits filed under consumer protection and negligence statutes in states like New Jersey, Florida, and Texas seek to hold platforms accountable for designing algorithms that systematically prioritize engagement over safety.
• TikTok faced multidistrict class action claims that it illegally harvested user information from its in-app browser.

All such suits could be in jeopardy if courts interpret the AI moratorium as barring state laws that impose liability on algorithm-driven systems, and you can bet that Big Tech platforms will litigate the bejeezus out of the issue. Even if the moratorium were not intended to block wrongful death and other state law claims, its language may be broad enough to do so in practice—especially when leveraged by well-funded corporate legal teams.

Even supporters of federal AI regulation should be alarmed by the breadth of this safe harbor. It is not a thoughtful national framework based on a full record, but a shoot-from-the-hip blanket prohibition on consumer protection and civil justice. By freezing all state-level responses to AI harms, the AI safe harbor would consolidate power in the hands of federal bureaucrats and corporate lobbyists, leaving ordinary Americans with fewer options for recourse, not to mention that it is a clear violation of state police powers and the 10th Amendment.

To add insult to injury, the use of reconciliation to pass this policy—without full hearings, bipartisan debate, or robust public input—only underscores the cynical nature of the strategy. It has nothing to do with the budget aside from the fact that Big Tech is snarfing down $500 million of taxpayer money for no good reason just so they can argue their land grab is “germane” to shoehorn it into reconciliation under the Byrd Rule. It’s a maneuver designed to avoid scrutiny and silence dissent, not to foster a responsible or democratic conversation about how AI should be governed.

At its core, the AI safe harbor is not about fostering innovation—it is about shielding tech platforms from accountability just like the DMCA, Section 230 and Title I of the Music Modernization Act. By preempting state regulation, it could block families from using long-standing wrongful death statutes to seek justice for the loss of their children and laws protecting Americans from other harms. It undermines the sovereignty of states, the dignity of grieving families, and the public’s ability to scrutinize the AI systems that increasingly shape our lives. 

Congress must reject this overreach, and the American public must remain vigilant in demanding transparency, accountability, and justice. The Initiative must go.

[A version of this post first appeared on MusicTechPolicy]

@ArtistRights Newsletter 4/14/25

The Artist Rights Watch podcast returns for another season! This week’s episode features AI Legislation, A View from Europe: Helienne Lindvall, President of the European Composer and Songwriter Alliance (ECSA) and ARI Director Chris Castle in conversation regarding current issues for creators regarding the EU AI Act and the UK Text and Data Mining legislation. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

AI Litigation: Kadrey v. Meta

Law Professors Reject Meta’s Fair Use Defense in Friend of the Court Brief

Ticketing
Viagogo failing to prevent potentially unlawful practices, listings on resale site suggest that scalpers are speculatively selling tickets they do not yet have (Rob Davies/The Guardian)

ALEC Astroturf Ticketing Bill Surfaces in North Carolina Legislation

ALEC Ticketing Bill Surfaces in Texas to Rip Off Texas Artists (Chris Castle/MusicTechPolicy)

International AI Legislation

Brazil’s AI Act: A New Era of AI Regulation (Daniela Atanasovska and Lejla Robeli/GDPR Local)

Why robots.txt won’t get it done for AI Opt Outs (Chris Castle/MusicTechPolicy)

Feature Translation: How has the West’s misjudgment of China’s AI ecosystem distorted the global technology competition landscape? (Jeffrey Ding/ChinAI)

Unethical AI Training Harms Creators and Society, Argues AI Pioneer (Ed Nawotka/Publishers Weekly) 

AI Ethics

Céline Dion Calls Out AI-Generated Music Claiming to Feature the Iconic Singer Without Her Permission (Marina Watts/People)

Splice CEO Discusses Ethical Boundaries of AI in Music​ (Nilay Patel/The Verge)

Spotify’s Bold AI Gamble Could Disrupt The Entire Music Industry (Bernard Marr/Forbes)

Books

Apple in China: The Capture of the World’s Greatest Company by Patrick McGee (Coming May 13)

Spotify Makes Kate Nash’s Argument With the Usual Blame Game

Daniel Ek is indifferent to whether the economics of streaming cause artists to give up or actually starve to death. He’s already got the tracks, and he’ll keep selling them forever, like an evil self-licking ice cream cone.

Kate Nash is the latest artist to slam Spotify’s pathetic royalty payments even after the payola and the streaming manipulation with the Orwellian “Discovery Mode” as discovered by Liz Pelly. According to Digital Music News, Kate Nash says: 

“‘Foundations’ has over 100 million plays on Spotify — and I’m shocked I’m not a millionaire when I hear that! I’m shocked at the state of the music industry and how the industry has allowed this to happen,” said Nash. “We’re paid very, very, very poorly and unethically for our recorded music: it’s like 0.003 of a penny per stream. I think we should not only be paid fairly, but we should be paid very well. People love music and it’s a growing economy and there are plenty of millionaires in the industry because of that, and our music.”

But then she said the quiet part out loud that will get them right in their Portlandia hearts:

She added: “And what they’re saying to artists from non-rich privileged backgrounds, which is you’re not welcome here, you can’t do this, we don’t want to hear from you. Because it’s not possible to even imagine having a career if you don’t have a privileged background or a privileged situation right now.”

This, of course, comes at the same time that Spotify board members have cashed out over $1 billion in stock, including hundreds of millions to Daniel Ek personally, speaking of privilege.

Using forks and knives to eat their bacon

Spotify responds with the same old whine that starts with the usual condescending drivel, deflection and distraction:

“We’re huge fans of Kate Nash. For streams of her track ‘Foundations’ alone — which was released before Spotify existed — Spotify has paid out around half a million pounds in revenue to Kate Nash’s rights holders,” reads Spotify’s statement.

“Her most streamed songs were released via Universal Music Group. Spotify has no visibility over the deals that Kate signed with her rights holders. Therefore, we have no knowledge of the payment terms that were agreed upon between her and her partners.”

This is a very carefully worded statement. Notice that they switch from the specific to the general and start talking about “her rights holders.” That no doubt means they are including the songwriters and publishers of the compositions, so that’s bullshit for starters. But notice how they are making Kate’s own argument here by trying to get you to focus on the “big check” that they wrote to Universal.

Well, the last time I checked in the world of arithmetic, “around half a million pounds” (which means less than, but OK) divided by 100,000,000 streams is…wait for it…shite: £0.005 per stream, at the Universal level but all-in by the sound of it, i.e., artist share, label share, songwriters and publishers. This is why Spotify is making Kate’s argument at the same time they are trying to deflect attention onto Universal.
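
To spell out that arithmetic (a quick sketch; the £500,000 and 100 million figures come from the statements quoted above, and the variable names are mine):

```python
# Per-stream rate implied by Spotify's own statement about "Foundations"
payout_gbp = 500_000        # "around half a million pounds" paid to rights holders
streams = 100_000_000       # plays of "Foundations" cited by Kate Nash

# All-in rate: artist share, label share, songwriters, and publishers combined
per_stream_gbp = payout_gbp / streams
print(f"£{per_stream_gbp:.3f} per stream")  # → £0.005 per stream (half a penny)
```

Half a penny per stream, before a single one of those parties takes their split.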

Then, always with an eye on the DCMS authorities in the UK and the UK Parliament, Spotify says:

“We do know that British artists generated revenues of over £750 million on Spotify alone in 2023 — a number that is on the rise year on year — so it’s disappointing to hear that Spotify’s payments are not making it through to Kate herself,” the company concluded.

Oh, so “disappointed.” Please spare us. What’s disappointing is that the streaming services participate in this charade where their executives make more in one day of stock trading than the company’s entire payments to UK artists and songwriters.

This race to the bottom is not lost on artists. Al Yankovic, a card-carrying member of the pantheon of music parodists from Tom Lehrer to Spinal Tap to The Rutles, released a hysterical video about his “Spotify Wrapped” account.

Al said he’d had 80 million streams and received enough cash from Spotify to buy a $12 sandwich. This was from an artist who made a decades-long career from—parody. Remember that—parody.

Do you think he really meant he actually got $12 for 80 million streams?  Or could that have been part of the gallows humor of calling out Spotify Wrapped as a propaganda tool for…Spotify?  Poking fun at the massive camouflage around the Malthusian algebra of streaming royalties gradually choking the life out of artists and songwriters? Gallows humor, indeed, because a lot of artists and especially songwriters are gradually collapsing as the algebra predicted.

The services took the bait Al dangled and seized upon his video poking fun at how ridiculously low Spotify payments are to make a point about how Al’s sandwich money couldn’t possibly represent 80 million streams, and if it did, it’s his label’s fault. Just like Spotify blaming Universal rather than taking responsibility for once in their lives.

Nothing if not on message, right? As Daniel Ek told MusicAlly, “There is a narrative fallacy here, combined with the fact that, obviously, some artists that used to do well in the past may not do well in this future landscape, where you can’t record music once every three to four years and think that’s going to be enough.” This is kind of like TikTok bragging about how few children hanged themselves in the latest blackout challenge compared to the number of all children using the platform. Pretty Malthusian. It’s not a fallacy; it’s all too true.

I’d suggest that Al and Kate Nash were each making the same point: if you think of everyday goods, like bacon for example, in terms of how many streams you would have to sell to buy a pound of bacon, a dozen eggs, a gallon of gasoline, Internet access, or a sandwich in a nice restaurant, you start to understand that the joke really is on us. The best way to make a small fortune in the streaming business is to start with a large one. Unless you’re a Spotify executive, of course.
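
As a hedged sketch of that comparison: the per-stream rate below is the half-penny figure implied by the “Foundations” numbers, and the grocery prices are illustrative assumptions, not actual quotes:

```python
PER_STREAM_GBP = 0.005   # implied all-in rate from the "Foundations" example

# Illustrative everyday prices (assumptions for the sake of the comparison)
goods_gbp = {
    "pound of bacon": 4.00,
    "dozen eggs": 2.50,
    "gallon of gasoline": 6.00,
    "sandwich in a nice restaurant": 12.00,
}

# Streams an artist would have to "sell" to afford each item at the all-in rate
for item, price in goods_gbp.items():
    streams_needed = round(price / PER_STREAM_GBP)
    print(f"{item}: {streams_needed:,} streams")
```

Roughly 800 streams for the bacon and 2,400 for the sandwich, and that is before the label, publisher, and songwriter splits come out of the half-penny.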

@Artist Rights Institute Newsletter 4/7/25

The Artist Rights Institute’s news digest newsletter

The Artist Rights Watch podcast returns for another season!  First episode is Tim Kappel discussing the Vetter v. Resnik landmark copyright termination case. Follow us wherever you get your podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

Streaming Meltdown

White Noise Is Hugely Popular on Streaming Services. Should It Be Devalued? (Kristin Robinson/Billboard) (Subscription)

Polly Pockets Strikes Again: DANIEL EK POCKETS ANOTHER $27.6M FROM SELLING SPOTIFY SHARES – CASHING OUT OVER $750M SINCE 2023 (Mandy Dalugdug/MusicBusinessWorldwide)

THY ART IS MURDER Vocalist Quits Over Finances: “I Can’t Live Like This Anymore” (Robert Pasbani/Metal Injection)

AI Litigation

U.S. District Judge Sidney Stein order in New York Times et al v. Microsoft, OpenAI et al

NYT v MSFT-OpenAI MTD [Download]

Judge explains order for New York Times in OpenAI copyright case (Blake Brittain/Reuters)

OpenAI, Google reject UK’s AI copyright plan (Joseph Bambridge/Politico EU)

Mechanical Licensing Collective

Shhh…It’s a Secret! How is the MLC “Hedge Fund” Performing in the Global Market Crash (Chris Castle/MusicTechPolicy)

Ticketing

If it Looks Like a Duck and Quacks Like a Duck, Deny Everything: The ALEC Ticketing Bill Surfaces in Texas to Rip Off Artists (Chris Castle/MusicTechPolicy)

Tickets to Beyonce’s ‘Cowboy Carter’ Shows Bottoming Out at $25 In LA, New Jersey (Ashley King/Digital Music News)

TikTok Divestment

TikTok Extended Again (Chris Castle/MusicTech.Solutions)

And After All That, TikTok Could Still Go Poof (Paul Resnikoff/Digital Music News)

Books

Understanding the China Threat by Lianchao Han and Bradley A. Thayer

Brookings experts’ reading list on US-China strategic relations

Global Soft Power Index 2024 by Konrad Jagodzinski/Brand Finance

TikTok Sale Extended…Again

By Chris Castle

Imagine if the original Napster had received TikTok-level attention from POTUS?  Forget I said that.  The ongoing divestment of TikTok from its parent company ByteDance has reached yet another critical point with yet another bandaid.  Congress originally set a January 19, 2025 deadline for ByteDance to either sell TikTok’s U.S. operations or face a potential ban in the United States as part of the Protecting Americans from Foreign Adversary Controlled Applications Act or “PAFACA” (I guess “covfefe” was taken). The US Supreme Court upheld that law in TikTok v. Garland.

When January 20 came around, President Trump gave ByteDance an extension to April 5, 2025 by executive order. When that deadline came, President Trump granted a further extension by another executive order, providing additional time for ByteDance to finalize a deal to divest. The extended deadline now pushes the timeline for divestment negotiations to July 1, 2025.

This new extension is designed to allow for further negotiation time among ByteDance, potential buyers, and regulatory authorities, while addressing the ongoing trade issues and concerns raised by both the U.S. and Chinese governments. 

It’s getting mushy, but I’ll take a stab at the status of the divestment process. I might miss someone as they’re all getting into the act. 

I would point out that all these bids anticipate a major overhaul in how TikTok operates, which—just sayin’—means it likely would no longer be TikTok as its hundreds of millions of users now know it. I went down this path with Napster, and I would just say that it’s a very big deal to change a platform that has inherent legal issues into one that satisfies a standard that does not yet exist. I always used the rule of thumb that changing old Napster into new Napster (neither of which had anything to do with the service that eventually launched under the “Napster” brand, which bore no resemblance to the original Napster or its DNA) would result in an initial loss of 90% of the users. Just sayin’.

Offers and Terms

Multiple parties have expressed interest in acquiring TikTok’s U.S. operations, but the terms of these offers remain fluid due to ongoing negotiations and the complexity of the deal. Key bidders include:

ByteDance Investors:  According to Reuters, the proposal would allow “the biggest non-Chinese investors in parent company ByteDance to up their stakes and acquire the short video app’s U.S. operations.” This would involve Susquehanna International Group, General Atlantic, and KKR. It looks like ByteDance would retain a minority ownership position of less than 20%, which I would bet probably means 19.99999999% or something like that. Reuters describes this as the front-runner bid, and I tend to buy into that characterization. From a cap table point of view, this would be the cleanest with the least hocus pocus. However, the Reuters story is based on anonymous sources and doesn’t say how the deal would address the data privacy issues (other than that Oracle would continue to hold the data), or the algorithm. Remember, Oracle has been holding the data, and that evidently has been unsatisfactory to Congress, which is how we got here. Nothing against Oracle, but I suspect this significant wrinkle will have to get fleshed out.

Lawsuit by Bidder Company Led by Former MySpace Executive:  In a lawsuit filed April 3 in Florida federal court, TikTok Global LLC accuses ByteDance, TikTok Inc., and founder Yiming Zhang of sabotaging a $33 billion U.S.-based TikTok acquisition deal by engaging in fraud, antitrust violations, and breach of contract. TikTok Global LLC is led by Brad Greenberg, the former MySpace executive and Internet entrepreneur. The complaint’s factual allegations start in 2020 with the executive order in Trump I; it alleges that:

This set the stage for what should have been a straightforward process of acquisition and divestment, but instead, it became a twisted tale of corporate intrigue, conspiracy, and antitrust violations….Plaintiff would soon discover, the game was rigged from the start because ByteDance had other plans, plans that circumvented proper procedures, stifled competition, and maintained ByteDance’s control over TikTok’s U.S. operations – all under the guise of compliance with the executive order.

The fact-heavy complaint alleges ByteDance misled regulators, misappropriated the “TikTok Global” brand, and conspired to maintain control of TikTok in violation of U.S. government directives. The suit brings six causes of action, including tortious interference and unjust enrichment, underscoring a complex clash over corporate deception and national security compliance. Emphasis on “alleged,” as the case is pretty fact-dependent and plaintiff will have to prove their case, but the well-drafted complaint makes some extensive claims that may give a window into the behind-the-scenes world of Mr. Tok. Watch this space; it could be a sleeper that eventually wakes up to bite, no pun intended.

Oracle and Walmart: This proposal, which nearly closed in 2024 (I guess), involved a sale of TikTok’s U.S. business to a consortium of U.S.-based companies, with Oracle managing data security and infrastructure. ByteDance was to retain a minority stake in the new entity. However, this deal has not closed; who knows why, aside from competition, those trade tariffs, and the need for approval from both U.S. and Chinese regulators, who have to be just so chummy right at the moment.

AppLovin: A preliminary bid has been submitted by AppLovin, an adtech company, to acquire TikTok’s U.S. operations. It appears that AppLovin’s offer includes managing TikTok’s user base and revenue model, with a focus on ad-driven strategies, although further negotiations are still required.  According to Pitchbook, “AppLovin is a vertically integrated advertising technology company that acts as a demand-side platform for advertisers, a supply-side platform for publishers, and an exchange facilitating transactions between the two. About 80% of AppLovin’s revenue comes from the DSP, AppDiscovery, while the remainder comes from the SSP, Max, and gaming studios, which develop mobile games. AppLovin announced in February 2025 its plans to divest from the lower-margin gaming studios to focus exclusively on the ad tech platform.”  It’s a public company trading as APP and seems to be worth about $100 billion.   Call me crazy, but I’m a bit suspicious of a public company with “lovin” in its name.  A bit groovy for the complexity of this negotiation, but you watch, they’ll get the deal.

Amazon and Blackstone: Amazon and Blackstone have also expressed interest in acquiring TikTok or a stake in a TikTok spinoff in Blackstone’s case. These offers would likely involve ByteDance retaining a minority interest in TikTok’s U.S. operations, though specifics of the terms remain unclear.  Remember, Blackstone owns HFA through SESAC.  So there’s that.

Frank McCourt/Project Liberty:  The “People’s Bid” for TikTok is spearheaded by Project Liberty, founded by Frank McCourt. This initiative aims to acquire TikTok and change its platform to prioritize user privacy, data control, and digital empowerment. The consortium includes notable figures such as Tim Berners-Lee, Kevin O’Leary, and Jonathan Haidt, alongside technologists and academics like Lawrence Lessig.  This one gives me the creeps as readers can imagine; anything with Lessig in it is DOA for me.

The bid proposes migrating TikTok to a new open-source protocol to address concerns raised by Congress while preserving its creative essence. As of now, the consortium has raised approximately $20 billion to support this ambitious vision.  Again, these people act like you can just put hundreds of millions of users on hold while this changeover happens.  I don’t think so, but I’m not as smart as these city fellers.

PRC’s Reaction

The People’s Republic of China (PRC) has strongly opposed the forced sale of TikTok’s U.S. operations, so there’s that. PRC officials argue that such a divestment would be a dangerous precedent, potentially harming Chinese tech companies’ international expansion. And they’re not wrong about that, it’s kind of the idea. Furthermore, the PRC’s position seems to be that any divestment agreement that involves the transfer of TikTok’s algorithm to a foreign entity requires Chinese regulatory approval.  Which I suspect would be DOA.

They didn’t just make that up: the PRC, through the Cyberspace Administration of China (CAC), owns a “golden share” in ByteDance’s main Chinese subsidiary. This 1% stake, acquired in 2021, grants the PRC significant influence over ByteDance, including the ability to shape content and business strategies.

Unsurprisingly, ByteDance must ensure that the PRC government (i.e., the Chinese Communist Party) maintains control over TikTok’s core algorithm, a key asset for the company. PRC authorities have been clear that they will not approve any sale that results in ByteDance losing full control over TikTok’s proprietary technology, complicating the negotiations with prospective buyers.  

So a pressing question is whether TikTok without the algorithm is really TikTok from the user’s experience.  And then there’s that pesky issue of valuation: is TikTok with an unknown algo worth as much as TikTok with the proven, albeit awful, current algo?

Algorithm Lease Proposal

In an attempt to address both U.S. security concerns and the PRC’s objections, a novel solution has been proposed: leasing TikTok’s algorithm. Under this arrangement, ByteDance would retain ownership of the algorithm, while a U.S.-based company, most likely Oracle, would manage the operational side of TikTok’s U.S. business.

ByteDance would maintain control over its technology, while allowing a U.S. entity to oversee the platform’s operation within the U.S. The U.S. company would be responsible for ensuring compliance with U.S. data privacy laws and national security regulations, while ByteDance would continue to control its proprietary algorithm and intellectual property.

Under this leasing proposal, Oracle would be in charge of managing TikTok’s data security and ensuring that sensitive user data is handled according to U.S. regulations. This arrangement would allow ByteDance to retain its technological edge while addressing American security concerns regarding data privacy.

The primary concern is safeguarding user data rather than the algorithm itself. The proposal aims to address these concerns while avoiding the need for China’s approval of a full sale.

Now remember, the reason we are in this situation at all is that Chinese law requires TikTok to turn over on demand any data it gathers on TikTok users which I discussed on MTP back in 2020. The “National Intelligence Law” even requires TikTok to allow the PRC’s State Security police to take over the operation of TikTok for intelligence gathering purposes on any aspect of the users’ lives.  And if you wonder what that really means to the CCP, I have a name for you:  Jimmy Lai. You could ask that Hong Konger, but he’s in prison. 

This leasing proposal has sparked debate because it doesn’t seem to truly remove ByteDance’s influence over TikTok (and therefore the PRC’s influence). It’s being compared to “Project Texas 2.0,” a previous plan to secure TikTok’s data and operations.  I’m not sure how the leasing proposal solves this problem. Or said another way, if the idea is to get the PRC’s hands off of Americans’ user data, what the hell are we doing?

Next Steps

As the revised deadline approaches, I’d expect a few steps, each of which has its own steps within steps:

Finalization of a Deal: This is the biggest one–easy to say, nearly impossible to accomplish.  ByteDance will likely continue negotiating with interested parties while they snarf down user data, working to secure an agreement that satisfies both U.S. regulatory requirements and Chinese legal constraints. The latest extension provides runway for both sides to close key issues that are closable, particularly concerning the algorithm lease and ByteDance’s continued role in the business.

Operational Contingency:  I suppose at some point the buyer is going to be asked whether its proposal will actually function and whether the fans will actually stick around to justify whatever the valuation is.  One of the problems with rich people getting ego involved in a fight over something they think is valuable is that they project all kinds of ideas onto it that show how smart they are, only to find that once they get the thing they can’t actually do what they thought they would do.  By the time they figure out that it doesn’t work, they’ve moved on to the next episode in Short Attention Span Theater and it’s called Myspace.

China’s Approval: ByteDance will need to secure approval from PRC regulatory authorities for any deal involving the algorithm lease or a full divestment. So why introduce the complexity of the algo lease when you have to go through that step anyway?  Without PRC approval, any sale or lease of TikTok’s technology is likely dead, or at best could face significant legal and diplomatic hurdles.

Legal Action: If an agreement is not reached by the new deadline of July 1, 2025, further legal action could be pursued, either by ByteDance to contest the divestment order or by the U.S. government to enforce a ban on TikTok’s operations.  I doubt that President Trump is going to keep extending the deadline if there’s no significant progress.

If I were a betting man, I’d bet on the whole thing collapsing into a shut down and litigation, but watch this space.

[This post first appeared on MusicTech.Solutions]

@Artist Rights Institute Newsletter 3/31/25

The Artist Rights Institute’s news digest from Artist Rights Watch.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short, confidential Survey Monkey survey here! Thanks!

Ticketing

Executive Order on Combating Unfair Practices in the Live Entertainment Market

Music Industry reacts to Executive Order on Ticket Scalping (Bruce Houghton/Hypebot)

What Hath Trump Wrought: The Effect of the Anti-Scalping Executive Order on StubHub’s IPO (Chris Castle/MusicTech.Solutions)

StubHub IPO Filing

Copyright Litigation

Merlin sues TikTok rival Triller for breach of contract over allegedly unpaid music licensing fees (Daniel Tencer/Music Business Worldwide)

Artificial Intelligence: Legislation

Artificial intelligence firms should pay artists and musicians for using their work amid uproar over Labour’s plans to exempt them from copyright laws, according to a new poll of Brits (Chris Pollard/Daily Mail)

European Union’s latest draft AI Code of Practice renders copyright ‘meaningless,’ rightsholders warn (Mandy Dalugdug/Music Business Worldwide)

Artificial Intelligence

The Style Returns: Some notes on ChatGPT and Studio Ghibli (Andres Guadamuz/TechnoLlama)

OpenAI’s Preemption Request Highlights State Laws’ Downsides (Oliver Roberts/Bloomberg Law)

Copyright: Termination Rights

Update on Vetter v. Resnik case (Chris Castle/MusicTechPolicy)

Conversation with @KCEsq and @MusicTechPolicy on a Songwriter Union, Better Royalties and Health Care for Songwriters

Forming a songwriter union is a hot topic these days, thank you Chappell Roan! Artist Rights Institute put a casual poll in the field to get a sense of what people are thinking about this issue. If you haven’t taken that poll yet, please join us on Survey Monkey here (all results are anonymized); we would love to get your feedback. We will post the results on Trichordist.

Reaction to the poll led to an Artist Rights Institute podcast with Chris Castle and Kevin Casini, who are both fans of the Trichordist audience, so naturally they wanted to launch the podcast here. There are a number of resources mentioned in the podcast that we have linked to below. Please leave comments if you have questions!

Check out the video with Kevin and Chris, and while you’re on the Artist Rights Institute’s cool new YouTube channel, subscribe and bookmark the Artist Rights Symposium videos!

Important resources:

Union Organizing and Union Health Care Insurance Plans

National Labor Relations Board

AFL-CIO Organizing Institute

American Federation of Musicians

SAG-AFTRA

Health Care:

Health Alliance for Austin Musicians http://www.myhaam.org Musician Services (512) 541-4226 (opt 2).

Music Health Alliance https://www.musichealthalliance.com Request assistance

American Association of Independent Music Benefits Store

Mental Health

SIMS Foundation (Austin) 512-494-1007

Industry-wide Agreements

See discussion of Canada’s Mechanical License Agreement https://musictechpolicy.com/2012/01/1…

Controlled Compositions

Copyright Office Circular on Work For Hire Explainer

Controlled Compositions Part 1 https://musictechpolicy.com/2010/03/2… and Controlled Compositions and Frozen Mechanicals https://musictechpolicy.com/2020/10/1…

We will be coming back to this topic soon. Feel free to leave comments if you have questions or want us to focus on any particular point.

Copyright 2025 Artist Rights Institute. All Rights Reserved. This video or any transcript may not be used for text or data mining or for the purpose of training artificial intelligence models or systems.

@ArtistRights Institute’s UK Government Comment on AI and Copyright: Why Can’t Creators Call 911?

We will be posting excerpts from the Artist Rights Institute’s comment in the UK Intellectual Property Office proceeding on AI and copyright. That proceeding is called a “consultation,” in which the Office solicits comments from the public (wherever located) about a proposed policy.

In this case it was the UK government’s proposal to require creators to “opt out” of AI data scraping by expanding the UK law governing “text and data mining,” which is what Silicon Valley wants in a big way. This idea produced an enormous backlash from the creative community that we’ll also be covering in coming weeks, as it’s very important that Trichordist readers be up to speed on the latest skulduggery by Big Tech in snarfing down all the world’s culture to train their AI (which has already happened and now has to be undone). For a backgrounder on the “text and data mining” controversy, watch this video by George York of the Digital Creators Coalition speaking at the Artist Rights Institute in DC.

In this section of the comment we offer a simple rule of thumb or policy guideline by which to measure the Government’s rules (which could equally apply in America): Can an artist file a criminal complaint against someone like Sam Altman?

If an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.

Why Can’t Creators Call 999 [or 911]?

We suggest a very simple policy guideline: if an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.  Alternatively, if an artist can call the police and file a criminal complaint against a Sam Altman or a Sergey Brin for criminal copyright infringement, now we are getting somewhere.

This requires that there be a clear “red light/green light” instruction that can easily be understood and applied by a beat copper.  This may seem harsh, but in our experience with the trillion-dollar market cap club, the only thing that gets their attention is a legal action that affects behavior rather than damages.  Our experience suggests that what gets their attention most quickly is either an injunction to stop the madness or prison to punish the wrongdoing. 

As a threshold matter, it is clear that AI platforms intend to continue scraping all the world’s culture for their purposes without obtaining consent or notifying rightsholders.  It is likely that the bigger platforms already have.  For example, we have found our own writings included in Copilot outputs.  Not only did we not consent to that use, but we were also never asked.  Moreover, Copilot’s use of these works clearly violates our terms of service.  This level of content scraping is hardly what was contemplated with the “data mining” exceptions. 

Faux “Data Mining” is the Key that Picks the Lock of Human Expression

The Artist Rights Institute filed a comment, which we drafted, in the UK Intellectual Property Office’s consultation on Copyright and AI. The Trichordist will be posting excerpts from that comment from time to time.

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets. 

We strongly disagree that all the world’s culture can be squeezed through the keyhole of “data” to be “mined” as a matter of legal definitions.  In fact, a recent study by leading European scholars found that the data mining exceptions were never intended to excuse copyright infringement:

Generative AI is transforming creative fields by rapidly producing texts, images, music, and videos. These AI creations often seem as impressive as human-made works but require extensive training on vast amounts of data, much of which are copyright protected. This dependency on copyrighted material has sparked legal debates, as AI training involves “copying” and “reproducing” these works, actions that could potentially infringe on copyrights. In defense, AI proponents in the United States invoke “fair use” under Section 107 of the [US] Copyright Act [a losing argument in the one reported case on point[1]], while in Europe, they cite Article 4(1) of the 2019 DSM Directive, which allows certain uses of copyrighted works for “text and data mining.”

This study challenges the prevailing European legal stance, presenting several arguments:

1. The exception for text and data mining should not apply to generative AI training because the technologies differ fundamentally – one processes semantic information only, while the other also extracts syntactic information

2. There is no suitable copyright exception or limitation to justify the massive infringements occurring during the training of generative AI. This concerns the copying of protected works during data collection, the full or partial replication inside the AI model, and the reproduction of works from the training data initiated by the end-users of AI systems like ChatGPT….[2] 

Moreover, the existing text and data mining exception in European law was never intended to address AI scraping and training:

Axel Voss, a German centre-right member of the European parliament, who played a key role in writing the EU’s 2019 copyright directive, said that law was not conceived to deal with generative AI models: systems that can generate text, images or music with a simple text prompt.[3]

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets.  This lust for data, control and money will drive lobbyists and Big Tech’s amen corner to seek copyright exceptions under the banner of “innovation.”  Any country that appeases AI platforms in the hope of cashing in on tech at the expense of culture will be appeasing its way toward an inevitable race to the bottom.  More countries can predictably be expected to offer ever more accommodating terms in the face of Silicon Valley’s army of lobbyists, who mean to engage in a lightning strike across the world.  The fight for the survival of culture is on.  The fight for the survival of humanity may literally be the next one up.

We are far beyond any reasonable definition of “text and data mining.”  What we can expect is for Big Tech to seek to distract both creators and lawmakers with inapt legal diversions, such as pretending that snarfing down all the world’s creations is mere “text and data mining.”  The ensuing delay will allow AI platforms to enlarge their training databases, raise more money, and further the AI narrative as they profit from the delay and capital formation.


[1] Thomson-Reuters Enterprise Centre GMBH v. Ross Intelligence, Inc., (Case No. 1:20-cv-00613 U.S.D.C. Del. Feb. 11, 2025) (Memorandum Opinion, Doc. 770 rejecting fair use asserted by defendant AI platform) available at https://storage.courtlistener.com/recap/gov.uscourts.ded.72109/gov.uscourts.ded.72109.770.0.pdf (“[The AI platform]’s use is not transformative because it does not have a ‘further purpose or different character’ from [the copyright owner]’s [citations omitted]…I consider the “likely effect [of the AI platform’s copying]”….The original market is obvious: legal-research platforms. And at least one potential derivative market is also obvious: data to train legal AIs…..Copyrights encourage people to develop things that help society, like [the copyright owner’s] good legal-research tools. Their builders earn the right to be paid accordingly.” Id. at 19-23).  See also Kevin Madigan, First of Its Kind Decision Finds AI Training Is Not Fair Use, Copyright Alliance (Feb. 12, 2025) available at https://copyrightalliance.org/ai-training-not-fair-use/ (discussion of AI platform’s landmark loss on fair use defense).

[2] Professor Tim W. Dornis and Professor Sebastian Stober, Copyright Law and Generative AI Training – Technological and Legal Foundations, Recht und Digitalisierung/Digitization and the Law (Dec. 20, 2024)(Abstract) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4946214

[3] Jennifer Rankin, EU accused of leaving ‘devastating’ copyright loophole in AI Act, The Guardian (Feb. 19, 2025) available at https://www.theguardian.com/technology/2025/feb/19/eu-accused-of-leaving-devastating-copyright-loophole-in-ai-act