Trump’s Historic Kowtow to Special Interests: Why Trump’s AI Executive Order Is a Threat to Musicians, States, and Democracy

There’s a new dance in Washington—it’s called the KowTow

Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests led by an unelected bureaucrat, Silicon Valley Viceroy and billionaire investor David Sacks, whom the New York Times recently called out as a walking conflict of interest.

You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West

Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood 

None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws, and it threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption,” which is when federal law overrides conflicting state laws. When Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. There is no federal legislation here (EOs don’t count), so there can be no “preemption.”

Who Really Wrote This? The Sacks–Thierer Pipeline

This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI.  It sounds that way because it was—Trump himself gave all the credit to David Sacks in his signing ceremony.

– Adam Thierer works at the Google-funded R Street Institute and pushes “permissionless innovation,” meaning companies should be free to deploy first and face regulation only after the public has been harmed. 
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, as documented by The New York Times, and he stands to profit from deregulation.

Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.

The Trump AI Czar—VICEROY Without Senate Confirmation

Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.

He is not Senate‑confirmed. 
He is not subject to conflict‑of‑interest vetting. 
He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation. 

Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.

Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong

Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.

Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.

And Then There’s Ted Cruz

On top of everything else, the one state-elected official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.

Why It Matters for Musicians

AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:

– AI systems train on our recordings without consent or compensation. 
– Major tech companies use federal power to avoid accountability. 
– The EO protects Silicon Valley elites, not artists, fans, or consumers. 

This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.

It’s Not a National Standard — It’s a Power Grab

What’s happening isn’t leadership. It’s *regulatory capture dressed as patriotism*. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.

What David Sacks and his band of thieves are teaching the world is the lesson they learned from Dot Bomb 1.0: the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.


Who’s Really Fighting for Fans? A Closer Look at the DOJ/FTC Ticketing Consultation

The Department of Justice and Federal Trade Commission were directed by President Trump to conduct an investigation into ticket scalping pursuant to Executive Order 14254 “Combating Unfair Practices in the Live Entertainment Market.”

This led directly to both agencies inviting public comments on the state of the live event ticketing market—an industry riddled with speculation, opacity, and middlemen who seem to make money without ever attending a show. Over 4,000 artists, fans, economists, state attorneys general, and industry veterans weighed in. And the record reveals something important, particularly about resellers: there is a rising consensus that resellers are engaged in some genuinely shady practices designed for one purpose, namely to extract as much money as possible from fans and artists without regard to the damage it does to the entire artist-fan relationship.

Over the next several posts, I’ll be highlighting individual comments submitted to the DOJ/FTC inquiry. Some are technical, some personal, and some blisteringly direct—but all speak to the fundamental imbalance between artists, fans, and the multi-layered resellers, bots, and platforms that profit from both ends of the transaction.

This isn’t just about high prices. It’s about ownership, transparency, control, and accountability, as well as the lenders who fuel the fraud. Many of the commenters argue that ticketing is no longer just a marketplace—it’s a manipulated, closed-loop ecosystem in which the reseller’s house always wins. And for too long, the architects of that system have claimed there’s nothing to see here. There is plenty to see here.

Each post in this series will spotlight one of these submissions that I have selected—not just to amplify the voices that took time to respond, but to help connect the dots on how the ticketing industry got here, who’s benefiting, and what needs to change.

We all have to be grateful to Kid Rock who brought this debacle to President Trump’s attention and to the President himself for making it a priority. We also have to thank Senator Marsha Blackburn for her continued defense of artists through her BOTS Act co-sponsored with Senator Blumenthal. Senator Blackburn has long opposed the use of automated fraudster systems to extract rents from fans and artists and we hope that the DOJ/FTC inquiry will also shed light on why there have been so few prosecutions.

Stay tuned for the first in the series. Spoiler alert: it’s going to be hard to argue that this is a “free market” when fans are bidding against bots and artists are not allowed to control the face value of their own shows. 

Here is a summary of some of the more involved issues that came up in the comments:

1. Speculative Ticket Listings

Resellers frequently list tickets for sale without possessing them, misleading consumers and inflating prices. These listings distort market data and should be treated as deceptive under federal consumer protection law.

2. Price Manipulation Through Bots

Automated bots are used to hoard tickets and create artificial scarcity, driving up resale prices. This not only violates the BOTS Act but also enables unfair competition that harms consumers.

3. Deceptive Use of Venue, Artist, or Promoter Branding

Resellers often use official names and branding in ads, URLs, and metadata, as well as typosquatting or URL hacking, to trick consumers into believing they are purchasing from authorized sources. These deceptive practices undermine market transparency.

4. Misleading “Sold Out” or Urgency Claims

Some platforms advertise that events are “sold out” or create false urgency (e.g., “only 2 left at this price”) when primary tickets are still available. These tactics constitute false advertising and manipulative marketing.

5. Concealment of Total Ticket Cost 

Fees are often hidden until checkout, misleading consumers about the true price. This “drip pricing” violates FTC guidance on transparent pricing and impairs consumers’ ability to comparison shop.

6. Resale of Non-Transferable or Restricted Tickets

Resellers list tickets that are explicitly non-transferable or designated will-call only, often in violation of the event organizer’s terms. Consumers risk being denied entry without recourse.

7. Lack of Delivery Guarantees and Refund Accountability

Many platforms offer no guaranteed delivery or refund protection when tickets are invalid or undelivered—despite charging substantial markups—leaving consumers with no remedy.

8. One-Sided Arbitration and Waiver Clauses

Some resale platforms impose forced arbitration clauses and class action waivers, effectively denying consumers access to meaningful remedies, even in cases of systemic fraud.

9. Failure to Disclose Broker Status or Ticket Quantities

Platforms often fail to identify brokers or disclose the number of tickets held, undermining market transparency and the ability of venues and regulators to detect fraud or hoarding.

10. Bankruptcy as a Shield Against Accountability

Resellers may use bankruptcy to discharge obligations arising from fraudulent or deceptive conduct. Congress should consider amendments to make such claims nondischargeable, similar to fraud-based exceptions under 11 U.S.C. § 523(a).

11. Federal RICO Liability for Coordinated BOTS Act Violations

The use of automated ticket-buying tools in coordinated schemes between resellers and bot developers may give rise to federal RICO charges under 18 U.S.C. §§ 1961–1968. The following are three plausible RICO predicates when tied to a pattern of violations:

   (a) Wire Fraud (18 U.S.C. § 1343): Automated bulk purchases made using false identities or obfuscated IP addresses may constitute wire fraud if they involve misrepresentations in interstate commerce.

   (b) Access Device Fraud (18 U.S.C. § 1029): Bot schemes often involve unauthorized use of payment cards, CAPTCHA bypass tools, or ticket platform credentials, qualifying as trafficking in access devices.

   (c) Computer Fraud and Abuse (18 U.S.C. § 1030): Bypassing ticket site security measures may amount to unauthorized access under the CFAA, particularly when done for commercial advantage.

These acts, when carried out by a coordinated enterprise, support civil or criminal RICO enforcement, particularly where repeat violations and intent to defraud can be established.

The AI Safe Harbor is an Unconstitutional Violation of State Protections for Families and Consumers

By Chris Castle

The AI safe harbor slathered onto President Trump’s “big beautiful bill” is layered with intended consequences. Not the least of these is the effect on TikTok.

One of the more debased aspects of TikTok (and that’s a long list) is its promotion, through its AI-driven algorithms, of clearly risky behavior to its pre-teen audience. Don’t forget: TikTok’s algorithm is not just any algorithm. The Chinese government claims it as a state secret. And when the CCP claims a state secret, they ain’t playing. So keep that in mind.

One of these algorithmically promoted trends that was particularly depraved was the “Blackout Challenge.” The TikTok “blackout challenge” has been linked to the deaths of at least 20 children over an 18-month period. One of the dead children was Nylah Anderson. Nylah’s mom sued TikTok on behalf of her daughter because that’s what moms do. If you’ve ever had someone you love hang themselves, you will no doubt agree that you live with that memory every day of your life. This unspeakable tragedy will haunt Nylah’s mother forever.

Even lowlifes like TikTok should have settled this case and it should never have gotten in front of a judge. But no–TikTok tried to get out of it because Section 230. Yes, that’s right–they killed a child and tried to get out of the responsibility. The District Court ruled that the loathsome Section 230 applied and Nylah’s mom could not pursue her claims. She appealed.

The Third Circuit Court of Appeals reversed and remanded, concluding that “Section 230 immunizes only information ‘provided by another’” and that “here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”

So…a new federal proposal threatens to slam the door on these legal efforts: the 10-year artificial intelligence (AI) safe harbor recently introduced in the House Energy and Commerce Committee. If enacted, this safe harbor would preempt state regulation of AI systems—including the very algorithms and recommendation engines that Nylah’s mom and other families are trying to challenge. 

Section 43201(c) of the “Big Beautiful Bill” includes pork, Silicon Valley style, entitled the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” which states:

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

The “Initiative” also appropriates “$500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems….” So not only did Big Tech write themselves a safe harbor for their crimes, they are also taking $500,000,000 of corporate welfare to underwrite it, courtesy of the very taxpayers they are screwing over. Step aside, Sophocles: when it comes to tragic flaws, Oedipus Rex has nothing on these characters.

Platforms like TikTok, YouTube, and Instagram use AI-based recommendation engines to personalize and optimize content delivery. These systems decide what users see based on a combination of behavioral data, engagement metrics, and predictive algorithms. While effective for keeping users engaged, these AI systems have been implicated in promoting harmful content—ranging from pro-suicide material to dangerous ‘challenges’ that have directly resulted in injury or death.

Families across the country have sued these companies, alleging that the AI-driven algorithms knowingly promoted hazardous content to vulnerable users. In many cases, the claims are based on state consumer protection laws, negligence, or wrongful death statutes. Plaintiffs argue that the companies failed in their duty to design safe systems or to warn users about foreseeable dangers. These cases are not attacks on free speech or user-generated content; they focus specifically on the design and operation of proprietary AI systems. 

If you don’t think that these platforms are depraved enough to actually raise safe harbor defenses, just remember what they did to Nylah’s mom–raised the exceptionally depraved Section 230 as a defense to their responsibility in the death of a child.

The AI safe harbor would prohibit states from enacting or enforcing any law that regulates AI systems or automated decision-making technologies for the next 10 years. This sweeping language could easily be interpreted to cover civil liability statutes that hold platforms accountable for the harms their AI systems cause. This is actually even worse than the vile Section 230–the safe harbor would be expressly targeting actual state laws. Maybe after all the appeals, say 20 years from now, we’ll find out that the AI safe harbor is unconstitutional commandeering, but do we really want to wait to find out?

Because these wrongful death lawsuits rely on arguments that an AI algorithm caused harm—either through its design or its predictive content delivery—the companies could argue that the moratorium shields them from liability. They might claim that the state tort claims are an attempt to “regulate” AI in violation of the federal preemption clause. If courts agree, these lawsuits could be dismissed before ever reaching a jury.

This would create a stunning form of corporate immunity even beyond the many current safe harbors for Big Tech: tech companies would be free to deploy powerful, profit-driven AI systems with no accountability in state courts, even when those systems lead directly to preventable deaths. 

The safe harbor would be especially devastating for families who have already suffered tragic losses and are seeking justice. These families rely on state wrongful death laws to hold powerful platforms accountable. Removing that path to accountability would not only deny them closure, but also prevent public scrutiny of the algorithms at the center of these tragedies.

States have long held the authority to define standards of care and impose civil liability for harms caused by negligence or defective products. The moratorium undermines this traditional role by barring states from addressing the specific risks posed by AI systems, even in the context of established tort principles. It would represent one of the broadest federal preemptions of state law in modern history—in the absence of federal regulation of AI platforms.

• In Pennsylvania, the parents of a teenager who committed suicide alleged that Instagram’s algorithmic feed trapped their child in a cycle of depressive content.
• Multiple lawsuits filed under consumer protection and negligence statutes in states like New Jersey, Florida, and Texas seek to hold platforms accountable for designing algorithms that systematically prioritize engagement over safety.
• TikTok faced multidistrict class action litigation claiming it illegally harvested user information from its in-app browser.

All such suits could be in jeopardy if courts interpret the AI moratorium as barring state laws that impose liability on algorithm-driven systems, and you can bet that Big Tech platforms will litigate the bejeezus out of the issue. Even if the moratorium were not intended to block wrongful death and other state law claims, its language may be broad enough to do so in practice—especially when leveraged by well-funded corporate legal teams.

Even supporters of federal AI regulation should be alarmed by the breadth of this safe harbor. It is not a thoughtful national framework based on a full record, but a shoot-from-the-hip blanket prohibition on consumer protection and civil justice. By freezing all state-level responses to AI harms, the AI safe harbor would consolidate power in the hands of federal bureaucrats and corporate lobbyists, leaving ordinary Americans with fewer options for recourse, to say nothing of the clear violation of state police powers and the 10th Amendment.

To add insult to injury, the use of reconciliation to pass this policy—without full hearings, bipartisan debate, or robust public input—only underscores the cynical nature of the strategy. It has nothing to do with the budget aside from the fact that Big Tech is snarfing down $500 million of taxpayer money for no good reason, just so it can argue the land grab is “germane” and shoehorn it into reconciliation under the Byrd Rule. It’s a maneuver designed to avoid scrutiny and silence dissent, not to foster a responsible or democratic conversation about how AI should be governed.

At its core, the AI safe harbor is not about fostering innovation—it is about shielding tech platforms from accountability just like the DMCA, Section 230 and Title I of the Music Modernization Act. By preempting state regulation, it could block families from using long-standing wrongful death statutes to seek justice for the loss of their children and laws protecting Americans from other harms. It undermines the sovereignty of states, the dignity of grieving families, and the public’s ability to scrutinize the AI systems that increasingly shape our lives. 

Congress must reject this overreach, and the American public must remain vigilant in demanding transparency, accountability, and justice. The Initiative must go.

[A version of this post first appeared on MusicTechPolicy]

Big Beautiful AI Safe Harbor Asks: Does David Sacks Want to Make America Screwed Again?

In a dramatic turn of events, Congress is quietly advancing a 10-year federal safe harbor for Big Tech that would block any state or local regulation of artificial intelligence (AI). That safe harbor would give Big Tech another free ride on the backs of artists, authors, and consumers, all of us and our children. It would stop cold the enforcement of state consumer protection laws, enforcement like the $1.375 billion settlement Google reached with the State of Texas last week for grotesque violations of user privacy. The bill would go up on Big Tech’s trophy wall right next to the DMCA, Section 230, and Title I of the Music Modernization Act.

Introduced through the House Energy and Commerce Committee as part of a broader legislative package branded with President Trump’s economic agenda, this safe harbor would prevent states from enforcing or enacting any laws that address the development, deployment, or oversight of AI systems. While couched as a measure to ensure national uniformity and spur innovation, this proposal carries serious consequences for consumer protection, data privacy, and state sovereignty. It threatens to erase hard-fought state-level protections that shield Americans from exploitative child snooping, data scraping, biometric surveillance, and the unauthorized use of personal and all creative works. This post unpacks how we got here, why it matters, and what can still be done to stop it.

The Origins of the New Safe Harbor

The roots of the latest AI safe harbor lie in a growing push from Silicon Valley-aligned political operatives and venture capital influencers, many of whom fear a patchwork of state-level consumer protection laws that would stop AI data scraping. Among the most vocal proponents is tech entrepreneur turned White House crypto czar David Sacks, who has advocated for federal preemption of state AI rules in order to protect startup innovation from what he and others call regulatory overreach, otherwise known as the state “police powers” to protect state residents.

If my name were “Sacks” I’d probably be a bit careful about doing things that could get me fired. His influence reportedly played a role in shaping the safe harbor’s timing and language, leveraging connections on Capitol Hill to attach it to a larger pro-business package of legislation. That package—marketed as a pillar of President Trump’s economic plan—was seen as a convenient vehicle to slip through controversial provisions with minimal scrutiny. You know, let’s sneak one past the boss.

Why This Is Dangerous for Consumers and Creators

The most immediate danger of the AI safe harbor is its preemption of state protections at a time when AI technologies are accelerating unchecked. States like California, Illinois, and Virginia have enacted—or are considering—laws to limit how companies use AI to analyze facial features, scan emails, extract audio, or mine creative works from social media. The AI mantra is that they can snarf down “publicly available data” which essentially means everything that’s not behind a paywall. Because there is no federal AI regulation yet, state laws are crucial for protecting vulnerable populations, including children whose photos and personal information are shared by parents online. Under the proposed AI safe harbor, such protections would be nullified for 10 years–and don’t think it won’t be renewed.

Without the ability to regulate AI at the state level, we could see our biometric data harvested without consent. Social media posts—including photos of babies, families, and school events—could be scraped and used to train commercial AI systems without transparency or recourse. Creators across all copyright categories could find their works ingested into large language models and generative tools without license or attribution. Emails and other personal communications could be fed into AI systems for profiling, advertising, or predictive decision-making without oversight.

While federal regulation of AI is certainly coming, this AI safe harbor includes no immediate substitute. Instead, it freezes state-level regulatory development entirely for a decade—an eternity in the technology world—during which time the richest companies in the history of commerce can entrench themselves further with little fear of accountability. And it likely will provide a blueprint for federal legislation when it comes.

A Strategic Misstep for Trump’s Economic Agenda: Populism or Make America Screwed Again?

Ironically, attaching the moratorium to a legislative package meant to symbolize national renewal may ultimately undermine the very populist and sovereignty-based themes that President Trump has championed. By insulating Silicon Valley firms from state scrutiny, the legislation effectively prioritizes the interests of data-rich corporations over the privacy and rights of ordinary Americans. It hands a victory to unelected tech executives and undercuts the authority of governors, state legislators, and attorneys general who have stepped in where federal law has lagged behind. So much for that “laboratories of democracy” jazz.

Moreover, the manner in which the safe harbor was advanced legislatively—slipped into what is supposed to be a reconciliation bill without extensive hearings or stakeholder input—is classic pork and classic Beltway maneuvering in smoke-filled rooms. Critics from across the political spectrum have noted that such tactics cheapen the integrity of any legislation they touch and reflect the worst of Washington horse-trading.

What Can Be Done to Stop It

The AI safe harbor is not a done deal. There are several procedural and political tools available to block or remove it from the broader legislative package.

1. Committee Intervention – Lawmakers on the House Energy and Commerce Committee or the Rules Committee can offer amendments to strip or revise the moratorium before it proceeds to the full House.
2. House Floor Action – Opponents of the moratorium can offer floor amendments during debate to strike the provision. This requires coordination and support from members across both parties.
3. Senate “Byrd Rule” Challenge and Holds – Because reconciliation bills must be budget-related, the Senate Parliamentarian can strike the safe harbor if it’s deemed “non-germane,” which it certainly seems to be. Senators can formally raise this challenge.
4. Conference Committee Negotiation – If different versions of the legislation pass the House and Senate, the final language will be hashed out in conference. There is still time to remove the moratorium here.
5. Public Advocacy – Artists, parents, consumer advocates, and especially state officials can apply pressure through media, petitions, and direct outreach to lawmakers, highlighting the harms and democratic risks of federal preemption. States may be able to sue to block the safe harbor as unconstitutional (see Chris’s discussion of constitutionality) but let’s not wait to get to that point. It must be said that any such litigation poses a threat to Trump’s “Big Beautiful Bill” courtesy of David Sacks.

Conclusion

The AI safe harbor may have been introduced quietly, but there’s a growing backlash from all corners. Its consequences would be anything but subtle. If enacted, it would freeze innovation in AI accountability, strip states of their ability to protect residents, and expose Americans to widespread digital exploitation. While marketed as pro-innovation, the safe harbor looks more like a gift to data-hungry monopolies at the expense of federalist principles and individual rights.

It’s not too late to act, but doing so requires vigilance, transparency, and an insistence that even the most powerful Big Tech oligarchs remain subject to democratic oversight.

@ArtistRights Newsletter 4/14/25

The Artist Rights Watch podcast returns for another season! This week’s episode, “AI Legislation, A View from Europe,” features Helienne Lindvall, President of the European Composer and Songwriter Alliance (ECSA), and ARI Director Chris Castle in conversation about current issues for creators under the EU AI Act and the UK Text and Data Mining legislation. Download it here or subscribe wherever you get your audio podcasts.

New Survey for Songwriters: We are surveying songwriters about whether they want to form a certified union. Please fill out our short Survey Monkey confidential survey here! Thanks!

AI Litigation: Kadrey v. Meta

Law Professors Reject Meta’s Fair Use Defense in Friend of the Court Brief

Ticketing
Viagogo failing to prevent potentially unlawful practices, listings on resale site suggest that scalpers are speculatively selling tickets they do not yet have (Rob Davies/The Guardian)

ALEC Astroturf Ticketing Bill Surfaces in North Carolina Legislation

ALEC Ticketing Bill Surfaces in Texas to Rip Off Texas Artists (Chris Castle/MusicTechPolicy)

International AI Legislation

Brazil’s AI Act: A New Era of AI Regulation (Daniela Atanasovska and Lejla Robeli/GDPR Local)

Why robots.txt won’t get it done for AI Opt Outs (Chris Castle/MusicTechPolicy)

Feature Translation: How has the West’s misjudgment of China’s AI ecosystem distorted the global technology competition landscape? (Jeffrey Ding/ChinAI)

Unethical AI Training Harms Creators and Society, Argues AI Pioneer (Ed Nawotka/Publishers Weekly) 

AI Ethics

Céline Dion Calls Out AI-Generated Music Claiming to Feature the Iconic Singer Without Her Permission (Marina Watts/People)

Splice CEO Discusses Ethical Boundaries of AI in Music​ (Nilay Patel/The Verge)

Spotify’s Bold AI Gamble Could Disrupt The Entire Music Industry (Bernard Marr/Forbes)

Books

Apple in China: The Capture of the World’s Greatest Company by Patrick McGee (Coming May 13)

PRESS RELEASE: @Human_Artistry Campaign Endorses NO FAKES Act to Protect Personhood from AI

For Immediate Release

HUMAN ARTISTRY CAMPAIGN ENDORSES NO FAKES ACT

Bipartisan Bill Reintroduced by Senators Blackburn, Coons, Tillis, & Klobuchar and Representatives Salazar, Dean, Moran, Balint and Colleagues

Creates New Federal Right for Use of Voice and Visual Likeness in Digital Replicas

Empowers Artists, Voice Actors, and Individual Victims to Fight Back Against
AI Deepfakes and Voice Clones

WASHINGTON, DC (April 9, 2025) – Amid global debate over guardrails needed for AI, the Human Artistry Campaign today announced its support for the reintroduced “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2025” (“NO FAKES Act”) – landmark legislation giving every person an enforceable new federal intellectual property right in their image and voice. 

Building off the original NO FAKES legislation introduced last Congress, the updated bill was reintroduced today by Senators Marsha Blackburn (R-TN), Chris Coons (D-DE), Thom Tillis (R-NC), Amy Klobuchar (D-MN) alongside Representatives María Elvira Salazar (R-FL-27), Madeleine Dean (D-PA-4), Nathaniel Moran (R-TX-1), and Becca Balint (D-VT-At Large) and bipartisan colleagues.

The legislation sets a strong federal baseline protecting all Americans from invasive AI-generated deepfakes flooding digital platforms today. From young students bullied by non-consensual sexually explicit deepfakes to families scammed by voice clones to recording artists and performers replicated to sing or perform in ways they never did, the NO FAKES Act provides powerful remedies requiring platforms to quickly take down unconsented deepfakes and voice clones and allowing rightsholders to seek damages from creators and distributors of AI models designed specifically to create harmful digital replicas.

The legislation’s thoughtful, measured approach preserves existing state causes of action and rights of publicity, including Tennessee’s groundbreaking ELVIS Act. It also contains carefully calibrated exceptions to protect free speech, open discourse and creative storytelling – without trampling the underlying need for real, enforceable protection against the vast range of invasive and harmful deepfakes and voice clones.

Human Artistry Campaign Senior Advisor Dr. Moiya McTier released the following statement in support of the legislation:

“The Human Artistry Campaign stands for preserving essential qualities of all individuals – beginning with a right to their own voice and image. The NO FAKES Act is an important step towards necessary protections that also support free speech and AI development. The Human Artistry Campaign commends Senators Blackburn, Coons, Tillis, and Klobuchar and Representatives Salazar, Dean, Moran, Balint, and their colleagues for shepherding bipartisan support for this landmark legislation, a necessity for every American to have a right to their own identity as highly realistic voice clones and deepfakes become more pervasive.”

Dr. Moiya McTier, Human Artistry Campaign Senior Advisor

By establishing clear rules for the new federal voice and image right, the NO FAKES Act will power innovation and responsible, pro-human uses of powerful AI technologies while providing strong protections for artists, minors and others. This important bill has cross-sector support from Human Artistry Campaign members and companies such as OpenAI, Google, Amazon, Adobe and IBM. The NO FAKES Act is a strong step forward for American leadership that erects clear guardrails for AI and real accountability for those who reject the path of responsibility and consent.

Learn more & let your representatives know Congress should pass the NO FAKES Act here.

# # #

ABOUT THE HUMAN ARTISTRY CAMPAIGN: The Human Artistry Campaign is the global initiative for the advancement of responsible AI – working to ensure it develops in ways that strengthen the creative ecosystem, while also respecting and furthering the indispensable value of human artistry to culture. Across 34 countries, more than 180 organizations have united to protect every form of human expression and creative endeavor they represent – journalists, recording artists, photographers, actors, songwriters, composers, publishers, independent record labels, athletes and more. The growing coalition champions seven core principles for keeping human creativity at the center of technological innovation. For further information, please visit humanartistrycampaign.com

TikTok Sale Extended…Again

By Chris Castle

Imagine if the original Napster had received TikTok-level attention from POTUS. Forget I said that. The ongoing divestment of TikTok from its parent company ByteDance has reached yet another critical point with yet another bandaid. Congress originally set a January 19, 2025 deadline for ByteDance to either sell TikTok’s U.S. operations or face a potential ban in the United States as part of the Protecting Americans from Foreign Adversary Controlled Applications Act or “PAFACA” (I guess “covfefe” was taken). The US Supreme Court upheld that law in TikTok v. Garland.

When January 20 came around, President Trump gave ByteDance an extension to April 5, 2025 by executive order. When that deadline came, President Trump granted an extension of the extension by another executive order, providing additional time for ByteDance to finalize a deal to divest. The extended deadline now pushes the timeline for divestment negotiations to July 1, 2025.

This new extension is designed to allow for further negotiation time among ByteDance, potential buyers, and regulatory authorities, while addressing the ongoing trade issues and concerns raised by both the U.S. and Chinese governments. 

It’s getting mushy, but I’ll take a stab at the status of the divestment process. I might miss someone as they’re all getting into the act. 

I would point out that all these bids anticipate a major overhaul in how TikTok operates which—just sayin’—means it likely would no longer be TikTok as its hundreds of millions of users now know it. I went down this path with Napster, and I would just say that it’s a very big deal to change a platform that has inherent legal issues into one that satisfies a standard that does not yet exist. I always used the rule of thumb that changing old Napster to new Napster (neither of which had anything to do with the service that eventually launched under the “Napster” brand, which bore no resemblance to the original Napster or its DNA) would result in an initial loss of 90% of the users. Just sayin’.

Offers and Terms

Multiple parties have expressed interest in acquiring TikTok’s U.S. operations, but the terms of these offers remain fluid due to ongoing negotiations and the complexity of the deal. Key bidders include:

ByteDance Investors: According to Reuters, one proposal calls for “the biggest non-Chinese investors in parent company ByteDance to up their stakes and acquire the short video app’s U.S. operations.” This would involve Susquehanna International Group, General Atlantic, and KKR. ByteDance looks like it retains a minority ownership position of less than 20%, which I would bet probably means 19.99999999% or something like that. Reuters describes this as the front-runner bid, and I tend to buy into that characterization. From a cap table point of view, this would be the cleanest with the least hocus pocus. However, the Reuters story is based on anonymous sources and doesn’t say how the deal would address the data privacy issues (other than that Oracle would continue to hold the data), or the algorithm. Remember, Oracle has been holding the data and that evidently has been unsatisfactory to Congress, which is how we got here. Nothing against Oracle, but I suspect this significant wrinkle will have to get fleshed out.

Lawsuit by Bidder Company Led by Former Myspace Executive: In a lawsuit filed April 3 in Florida federal court, TikTok Global LLC accuses ByteDance, TikTok Inc., and founder Yiming Zhang of sabotaging a $33 billion U.S.-based TikTok acquisition deal by engaging in fraud, antitrust violations, and breach of contract. TikTok Global LLC is led by Brad Greenberg, the former MySpace executive and Internet entrepreneur. The factual allegations in the complaint start in 2020 with the executive order in Trump I, and allege that:

This set the stage for what should have been a straightforward process of acquisition and divestment, but instead, it became a twisted tale of corporate intrigue, conspiracy, and antitrust violations….Plaintiff would soon discover, the game was rigged from the start because ByteDance had other plans, plans that circumvented proper procedures, stifled competition, and maintained ByteDance’s control over TikTok’s U.S. operations – all under the guise of compliance with the executive order.

The fact-heavy complaint alleges ByteDance misled regulators, misappropriated the “TikTok Global” brand, and conspired to maintain control of TikTok in violation of U.S. government directives. The suit brings six causes of action, including tortious interference and unjust enrichment, underscoring a complex clash over corporate deception and national security compliance. Emphasis on “alleged,” as the case is pretty fact-dependent and the plaintiff will have to prove its case, but the well-drafted complaint makes some extensive claims that may give a window into the behind-the-scenes world of Mr. Tok. Watch this space; it could be a sleeper that eventually wakes up to bite, no pun intended.

Oracle and Walmart: This proposal, which nearly closed in 2024 (I guess), involved a sale of TikTok’s U.S. business to a consortium of U.S.-based companies, with Oracle managing data security and infrastructure. ByteDance was to retain a minority stake in the new entity. However, this deal has not closed; who knows why, aside from competition, those trade tariffs, and the need for approval from both U.S. and Chinese regulators, who have to be just so chummy right at the moment.

AppLovin: A preliminary bid has been submitted by AppLovin, an adtech company, to acquire TikTok’s U.S. operations. It appears that AppLovin’s offer includes managing TikTok’s user base and revenue model, with a focus on ad-driven strategies, although further negotiations are still required.  According to Pitchbook, “AppLovin is a vertically integrated advertising technology company that acts as a demand-side platform for advertisers, a supply-side platform for publishers, and an exchange facilitating transactions between the two. About 80% of AppLovin’s revenue comes from the DSP, AppDiscovery, while the remainder comes from the SSP, Max, and gaming studios, which develop mobile games. AppLovin announced in February 2025 its plans to divest from the lower-margin gaming studios to focus exclusively on the ad tech platform.”  It’s a public company trading as APP and seems to be worth about $100 billion.   Call me crazy, but I’m a bit suspicious of a public company with “lovin” in its name.  A bit groovy for the complexity of this negotiation, but you watch, they’ll get the deal.

Amazon and Blackstone: Amazon and Blackstone have also expressed interest, Amazon in acquiring TikTok and Blackstone in taking a stake in a TikTok spinoff. These offers would likely involve ByteDance retaining a minority interest in TikTok’s U.S. operations, though specifics of the terms remain unclear. Remember, Blackstone owns HFA through SESAC. So there’s that.

Frank McCourt/Project Liberty:  The “People’s Bid” for TikTok is spearheaded by Project Liberty, founded by Frank McCourt. This initiative aims to acquire TikTok and change its platform to prioritize user privacy, data control, and digital empowerment. The consortium includes notable figures such as Tim Berners-Lee, Kevin O’Leary, and Jonathan Haidt, alongside technologists and academics like Lawrence Lessig.  This one gives me the creeps as readers can imagine; anything with Lessig in it is DOA for me.

The bid proposes migrating TikTok to a new open-source protocol to address concerns raised by Congress while preserving its creative essence. As of now, the consortium has raised approximately $20 billion to support this ambitious vision.  Again, these people act like you can just put hundreds of millions of users on hold while this changeover happens.  I don’t think so, but I’m not as smart as these city fellers.

PRC’s Reaction

The People’s Republic of China (PRC) has strongly opposed the forced sale of TikTok’s U.S. operations, so there’s that. PRC officials argue that such a divestment would set a dangerous precedent, potentially harming Chinese tech companies’ international expansion. And they’re not wrong about that; it’s kind of the idea. Furthermore, the PRC’s position seems to be that any divestment agreement that involves the transfer of TikTok’s algorithm to a foreign entity requires Chinese regulatory approval. Which I suspect would be DOA.

They didn’t just make that up: the PRC, through the Cyberspace Administration of China (CAC), owns a “golden share” in ByteDance’s main Chinese subsidiary. This 1% stake, acquired in 2021, grants the PRC significant influence over ByteDance, including the ability to influence content and business strategies.

Unsurprisingly, ByteDance must ensure that the PRC government (i.e., the Chinese Communist Party) maintains control over TikTok’s core algorithm, a key asset for the company. PRC authorities have been clear that they will not approve any sale that results in ByteDance losing full control over TikTok’s proprietary technology, complicating the negotiations with prospective buyers.  

So a pressing question is whether TikTok without the algorithm is really TikTok from the users’ experience. And then there’s that pesky issue of valuation—is TikTok with an unknown algo worth as much as TikTok with the proven, albeit awful, current algo?

Algorithm Lease Proposal

In an attempt to address both U.S. security concerns and the PRC’s objections, a novel solution has been proposed: leasing TikTok’s algorithm. Under this arrangement, ByteDance would retain ownership of the algorithm, while a U.S.-based company, most likely Oracle, would manage the operational side of TikTok’s U.S. business.

ByteDance would maintain control over its technology, while allowing a U.S. entity to oversee the platform’s operation within the U.S. The U.S. company would be responsible for ensuring compliance with U.S. data privacy laws and national security regulations, while ByteDance would continue to control its proprietary algorithm and intellectual property.

Under this leasing proposal, Oracle would be in charge of managing TikTok’s data security and ensuring that sensitive user data is handled according to U.S. regulations. This arrangement would allow ByteDance to retain its technological edge while addressing American security concerns regarding data privacy.

The primary concern is safeguarding user data rather than the algorithm itself. The proposal aims to address these concerns while avoiding the need for China’s approval of a full sale.

Now remember, the reason we are in this situation at all is that Chinese law requires TikTok to turn over on demand any data it gathers on TikTok users which I discussed on MTP back in 2020. The “National Intelligence Law” even requires TikTok to allow the PRC’s State Security police to take over the operation of TikTok for intelligence gathering purposes on any aspect of the users’ lives.  And if you wonder what that really means to the CCP, I have a name for you:  Jimmy Lai. You could ask that Hong Konger, but he’s in prison. 

This leasing proposal has sparked debate because it doesn’t seem to truly remove ByteDance’s influence over TikTok (and therefore the PRC’s influence). It’s being compared to “Project Texas 2.0,” a previous plan to secure TikTok’s data and operations.  I’m not sure how the leasing proposal solves this problem. Or said another way, if the idea is to get the PRC’s hands off of Americans’ user data, what the hell are we doing?

Next Steps

As the revised deadline approaches, I’d expect a few steps, each of which has its own steps within steps:

Finalization of a Deal: This is the biggest one–easy to say, nearly impossible to accomplish.  ByteDance will likely continue negotiating with interested parties while they snarf down user data, working to secure an agreement that satisfies both U.S. regulatory requirements and Chinese legal constraints. The latest extension provides runway for both sides to close key issues that are closable, particularly concerning the algorithm lease and ByteDance’s continued role in the business.

Operational Contingency:  I suppose at some point the buyer is going to be asked if whatever their proposal is will actually function and whether the fans will actually stick around to justify whatever the valuation is.  One of the problems with rich people getting ego involved in a fight over something they think is valuable is that they project all kinds of ideas on it that show how smart they are, only to find that once they get the thing they can’t actually do what they thought they would do.  By the time they figure out that it doesn’t work, they’ve moved on to the next episode in Short Attention Span Theater and it’s called Myspace.

China’s Approval: ByteDance will need to secure approval from PRC regulatory authorities for any deal involving the algorithm lease or a full divestment. So why introduce the complexity of the algo lease when you have to go through that step anyway?  Without PRC approval, any sale or lease of TikTok’s technology is likely dead, or at best could face significant legal and diplomatic hurdles.

Legal Action: If an agreement is not reached by the new deadline of July 1, 2025, further legal action could be pursued, either by ByteDance to contest the divestment order or by the U.S. government to enforce a ban on TikTok’s operations.  I doubt that President Trump is going to keep extending the deadline if there’s no significant progress.

If I were a betting man, I’d bet on the whole thing collapsing into a shut down and litigation, but watch this space.

[This post first appeared on MusicTech.Solutions]

Keynote and Speaker Update for Nov. 20 @ArtistRights Symposium

We’re pleased to announce the speakers for the 4th annual Artist Rights Symposium on November 20, this year hosted in Washington, DC, by American University’s Kogod School of Business at American University’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016. The symposium is also supported by the Artist Rights Institute and was founded by Dr. David Lowery, Lecturer at the University of Georgia Terry College of Business.

The four panels will begin at 8:30 am and end by 5 pm, with lunch and refreshments. More details to follow. Contact the Artist Rights Institute for any questions.

Admission is free, but please reserve a spot on Eventbrite; seating is limited! (Eventbrite works best with Firefox)

Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.  Graham will speak around lunchtime.

The confirmed symposium panel topics and speakers are:

THE TROUBLE WITH TICKETS:  The Economics and Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

SHOW ME THE CREATOR – Transparency Requirements for AI Technology, moderated by Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It, moderated by Chris Castle

NAME, IMAGE AND LIKENESS RIGHTS IN THE AGE OF AI:  Current initiatives to protect creator rights and attribution, moderated by John Simson, Program Director Emeritus, Business & Entertainment, Kogod School of Business, American University

Additional confirmed speakers to be announced soon.

Fired for Cause:  @RepFitzgerald Asks for Conditional Redesignation of the MLC

By Chris Castle

U.S. Representative Scott Fitzgerald joined in the MLC review currently underway and sent a letter to Register of Copyrights Shira Perlmutter on August 29 regarding operational and performance issues relating to the MLC. The letter was in the context of the five-year review for “redesignation” of The MLC, Inc. as the mechanical licensing collective. (That may be confusing because of the choice of “The MLC” as the name of the operational entity that the government permits to run the mechanical licensing collective. The main difference is that The MLC, Inc. is an entity that is “designated,” or appointed, to operationalize the statutory body, and The MLC, Inc. can be replaced. The mechanical licensing collective (lower case) is the statutory body created by Title I of the Music Modernization Act, and it lasts as long as the MMA is not repealed or modified. Unlikely, but we live in hope.)

I would say that songwriters probably don’t have anything more important to do today in their business beyond reading and understanding Rep. Fitzgerald’s excellent letter.

Rep. Fitzgerald’s letter is important because he proposes that the MLC, Inc. be given a conditional redesignation, not an outright redesignation.  In a nutshell, that is because Rep. Fitzgerald raises many…let’s just say “issues”…that he would like to see fixed before committing to another five years for The MLC, Inc.  As a member of the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet, Rep. Fitzgerald’s point of view on this subject must be given added gravitas.

In case you’re not following along at home, the Copyright Office is currently conducting an operational and performance review of The MLC, Inc. to determine if it is deserving of being given another five years to operate the mechanical licensing collective.  (See Periodic Review of the Mechanical Licensing Collective and the Digital Licensee Coordinator (Docket 2024-1), available at https://www.copyright.gov/rulemaking/mma-designations/2024/.)

The redesignation process may not be quickly resolved.  It is important to realize that the Copyright Office is not obligated to redesignate The MLC, Inc. by any particular deadline or at all.  It is easy to understand that any redesignation might be contingent on The MLC, Inc. fixing certain…issues…because the redesignation rulemaking is itself an operational and performance review.  It is also easy to understand that the Copyright Office might need to bring in some technical and operational assistance in order to diligence its statutory review obligations.  This could take a while.

Let’s consider the broad strokes of Rep. Fitzgerald’s letter.

Budget Transparency

Rep. Fitzgerald is concerned with a lack of candor and transparency in The MLC, Inc.’s annual report among other things. If you’ve read the MLC’s annual reports, you may agree with me that the reports are long on cheerleading and short on financial facts.  It’s like The MLC, Inc. thought they were answering the question “How can you tolerate your own awesomeness?”   That question is not on the list.  Rep. Fitzgerald says “Unfortunately, the current annual report lacks key data necessary to examine the MLC’s ability to execute these authorities and functions.”  He then goes on to make recommendations for greater transparency in future annual reports.

I agree with Rep. Fitzgerald that these are all important points. I disagree with him slightly about the timing of this disclosure. These important disclosures need not be prospective only–they could be both prospective and retroactive. I see no reason at all why The MLC, Inc. cannot be required to revise all four of its annual reports filed to date (https://www.themlc.com/governance) in line with these expanded criteria. I am just guessing, but the kind of detail that Rep. Fitzgerald is focused on is really just data that any business would accumulate or require in the normal course of prudently operating its business. That suggests to me that there is no additional work required in bringing The MLC, Inc. into compliance; it’s just a matter of disclosure.

There is nothing proprietary about that disclosure and there is no reason to keep secrets about how you handle other people’s money.  It is important to recognize that The MLC, Inc. only handles other people’s money.  It has no revenue because all of the money under its management comes from either royalties that belong to copyright owners or operating capital paid by the services that use the blanket license.  It should not be overlooked that the services rely on the MLC and it has a duty to everyone to properly handle the funds. The MLC, Inc. also operates at the pleasure of the government, so it should not be heard to be too precious about information flow, particularly information related to its own operational performance. Those duties flow in many directions.

Board Neutrality

The board composition of the mechanical licensing collective (and therefore The MLC, Inc.) is set by Congress in Title I.  It should come as no surprise to anyone that the major publishers and their lobbyists who created Title I wrote themselves a winning hand directly into the statute itself.  (And FYI, there is gambling at Rick’s American Café, too.)  As Rep. Fitzgerald says:  

Of the 14 voting members, ten are comprised of music publishers and four are songwriters. Publishers were given a majority of seats in order to assist with the collective’s primary task of matching and distributing royalties. However, the MMA did not provide this allocation in order to convert the MLC into an extension of the music publishers.

I would argue with him about that, too, because I believe that’s exactly what the MMA was intended to do by those who drafted it–and who also dictated who held the pen.  This is a rotten system, and it was obviously on its way to putrefaction before the ink was dry.

For context, Section 8 of the Clayton Act, one of our principal antitrust laws, prohibits interlocking directorates among competing corporations.  I’m not saying that The MLC, Inc. has a Section 8 problem–yet–but rather that interlocking boards are a disfavored arrangement, which helps explain Rep. Fitzgerald’s issue with The MLC, Inc.’s form of governance:

Per the MMA, the MLC is required to maintain an independent board of directors. However, what we’ve seen since establishing the collective is anything but independent. For example, in both 2023 and 2024, all ten publishers represented by the voting members on the MLC Board of Directors were also members of the NMPA’s board.  This not only raises questions about the MLC’s ability to act as a “fair” administrator of the blanket license but, more importantly, raises concerns that the MLC is using its expenditures to advance arguments indistinguishable from those of the music publishers–including, at times, arguments contrary to the positions of songwriters and the digital streamers.

Said another way, Rep. Fitzgerald is concerned that The MLC, Inc. is acting very much like HFA (the Harry Fox Agency) did when it was owned by the NMPA.  That would be the same HFA that is the principal vendor of The MLC, Inc. (and that dividing line is blurry, too).

It is important to realize that the gravamen of Rep. Fitzgerald’s complaint (as I understand it) is not solely with the statute; it is with the decisions about how to interpret the statute taken by The MLC, Inc. and not so far countermanded by the Copyright Office in its oversight role.  That’s the best news I’ve had all day.  This conflict and competition issue is easily solved by voluntary action that could be taken immediately (with or without changing the board composition).  In fact, given the sensitivity that large or dominant corporations have about such things, I’m kind of surprised that they walked right into that one.  The devil may be in the details, but God is in the little things.

Investment Policy

Rep. Fitzgerald is also concerned about The MLC, Inc.’s “investment policy.”  Readers will recall that I have been questioning both the provenance and wisdom of The MLC, Inc. unilaterally deciding that it can invest the hundreds of millions in the black box in the open market.  I personally cannot find any authority for such a momentous action in the statute or any regulation.  Rep. Fitzgerald also raises questions about the “investment policy”:

Further, questions remain regarding the MLC’s investment policy by which it may invest royalty and assessment funds. The MLC’s Investment Policy Statement provides little insight into how those funds are invested, their market risk, the revenue generated from those investments, and the percentage of revenue (minus fees) transferred to the copyright owner upon distribution of royalties. I would urge the Copyright Office to require more transparency into these investments as a condition of redesignation.

It should be obvious that The MLC, Inc.’s “investment policy” has taken on a renewed seriousness and can no longer be dodged.

Black Box

It should go without saying that fair distribution of unmatched funds starts with paying the right people.  Not “connect to collect” or “play your part” or any other sloganeering.  Tracking them down. As with orphan works, The MLC, Inc. needs to take active measures to find the people to whom it owes money, not wait for people who don’t know they are owed to discover that they haven’t been paid.

Although there are some reasonable boundaries on a cost/benefit analysis of just how much to spend on tracking down people owed small sums, it is important to realize that the extraordinary benefits conferred on digital services by the Music Modernization Act, safe harbors and all, justify higher expectations of those same services in finding the people they owe.  The MLC, Inc. is unlike its counterparts in other countries for this reason.

I tried to raise the need for increased vigilance at the MLC during a Copyright Office roundtable on the MMA. I was startled that the then-head of DiMA (since moved on) had the brass to condescend to me as if he had ever paid a royalty or rendered a royalty statement.  I was pointing out that the MLC is different from any other collecting society in the world because the licensees pay the operating costs and receive significant legal benefits in return. Those legal benefits took away songwriters’ fundamental right to protect their interests by bringing justifiable infringement actions, which is not the case in other countries.

In countries where the collecting society’s operating costs are deducted from royalties, it is far more appropriate for that society to apply a more restrictive cost/benefit analysis when expending resources to track down the songwriters it owes. This is particularly true where no black box writer is granting nonmonetary consideration like a safe harbor, knowingly or not.

I got an earful from this person about how the services weren’t an open checkbook for tracking down people they owed money to (try that argument when failing to comply with Know Your Customer laws).  Grocers know more about ham sandwiches than digital services know about copyright owners. The general tone was that I should be grateful to Big Daddy and be more careful how I spend my lunch money. And yes, I do resent this paternalistic response, which I’m sorry to say was not challenged by the presiding Copyright Office lawyer, who shortly thereafter went to work for Spotify.  Nobody ever asked for an open checkbook.  I just asked that they make a greater effort than the one that got Spotify sued a number of times, resulting in over $50 million in settlements, a generous accommodation in my view. If anyone should be grateful, it is the services, not the songwriters.

And yet here we are again in the same place.  Except this time the services have a safe harbor against the entire world, which I believe has value greater than the operating costs of the MLC.  I’d be perfectly happy to go back to the way it was before the services got everything they wanted and then some in Title I of the MMA, but I bet I won’t get any takers on that idea.

Instead, I have to congratulate Rep. Fitzgerald for truly excellent work product in his letter and for framing the issue exactly as it should be posed.  Failing to fix these major problems should result in no redesignation—fired for cause.

[This post first appeared in MusicTech.Solutions]

Weekly Recap & News Sunday Dec 2, 2012

Grab the coffee!

Recent Posts:
* Lars Was First And Lars Was Right
* Zoë Keating’s Request for Internet Transparency met w/ usual Hypocrisy
* The Most Important Fact Academics and The Copyleft Neglect to Mention: Copyright is Optional.
* Giving Thanks for Creators Rights and Copyright
* Congressional Research Service Memo on Constitutionality of IRFA Section 5
* Other Than That Mr Westergren, How Was The Play? IRFA Gets An Ass Whupping
* Or Pandora Could Add Another Minute Of Advertising And Raise Their Revenue 50%
* Video of the “Radio Active” panel at The Future of Music Summit 2012.
* The Internet Radio Fairness Act’s Attack on Free Speech
* This photo says it all
* Google’s Serial Obfuscation: Music Canada, BPI, Billboard Question Whether Google Has Really Lowered Pirate Sites Search Rankings
* IRFA is the Broadcast Industry’s SOPA. Censors Free Speech
* IRFA and the Future of Music Policy Summit: Why Would FOMC Miss An Opportunity to Defend Artist Rights?

IRFA-APLOOZA:

Seeking Alpha:
* The Internet Radio Fairness Act Will Fail

Ars Technica:
* Pandora’s Internet radio bill hits a wall of opposition in Congress

CNET:
* Pandora’s Web radio bill is doomed — well, for now

House Judiciary Committee – Video of the Hearing:
* Music Licensing Part One: Legislation in the 112th Congress

WELL, THIS IS EMBARRASSING – OOOPSIES! THE RSC’s FICTIONAL LOOK AT COPYRIGHT IS RECALLED IN LESS THAN 24 HRS:

Techdirt:
* House Republicans: Copyright Law Destroys Markets; It’s Time For Real Reform
* That Was Fast: Hollywood Already Browbeat The Republicans Into Retracting Report On Copyright Reform

Precursor Blog:
* The Copyright Education of Mr. Khanna — Part 2 Defending First Principles Series

Copyhype:
* Republican Study Committee Policy Brief on Copyright: Part 1
* Republican Study Committee Policy Brief on Copyright: Part 2

Music Tech Policy:
* Critiquing The “Free Culture” Book Report or “The Copyright Education of Mr. Khanna”

FROM AROUND THE WEB:

Mercury News:
* German lawmakers call Google campaign ‘cheap propaganda’

“The campaign initiated by Google is cheap propaganda,” said conservative lawmakers Guenter Krings and Ansgar Heveling.

“Under the guise of a supposed project for the freedom of the internet, an attempt is being made to coopt its users for its own lobbying,” the two said in a statement.

Stereogum:
* Deconstructing: Pandora, Spotify, Piracy, And Getting Artists Paid

Pitchfork:
* Making Cents – Damon Krukowski of Galaxie 500 and Damon & Naomi breaks down the meager royalties currently being paid out to bands by streaming services and explains what the music business’ headlong quest for capital means for artists today.

The Cynical Musician:
* Reco’nize: The Original Cynical Musician (Lars Ulrich)

Billboard:
* Songwriters Are Left Out of Pandora’s Royalty Plan: Guest Post by Downtown Music’s Justin Kalifowitz

The National Review Online:
* Myths and Facts about Copyright

VoxIndie:
* How Are Google’s Anti-Piracy Search Policies Working?

Digital Music News:
* We’ve Written Some of the Biggest Songs In History. And This Is What Pandora Pays Us…
* If You Stream a Song Once a Day, When Does It Pay the Same As a Download?
* My Song Was Played 3.1 Million Times on Pandora. My Check Was $39…
* Finally: A Solution for Pandora’s Financial Problems…

Torrent Freak:
* IMAGiNE BitTorrent Piracy Group “Sysop” Jailed 40 months
* BitTorrent Site Owners Fear European Domain Name Seizures
* Canada Set For Mass BitTorrent Lawsuits, Anti-Piracy Company Warns

Music Tech Policy:
* The Artists, United, Can Never Be Defeated
* Too Big to Fix Part 1: YouTube’s Thimblerig, or What’s Inside Your Black Box Today Mr. Schmidt?

Copyhype:
* Friday’s Endnotes – 11/30/12
* A Brief History of Webcaster Royalties
* The Purposes of Copyright Law and “Anti-Copyright” Arguments

Worth an encore: Lars Ulrich predicting the demise of Artists’ Rights at the hands of the Internet Robber Barons in 2000 on The Charlie Rose Show.