Mýa Backs AMFA as Momentum Builds for Fair Pay for Radio Play

L-R: SoundExchange CEO Mike Huppe, Mýa, House Minority Leader Rep. Hakeem Jeffries

Momentum around the American Music Fairness Act is building, and that’s a good thing. When Michael Huppe says artists not being paid for terrestrial radio airplay is “flat out wrong,” he’s right. The American Music Fairness Act (AMFA) closes the loop on Congress’s work beginning in 1995 to create a digital performance right in sound recordings. It extends that framework to terrestrial radio, ensuring artists and sound recording owners are paid consistently across platforms while preserving protections for small and local broadcasters.

The U.S. remains an outlier globally, denying performers a basic neighboring right recognized nearly everywhere else. Mýa’s presence underscores what’s at stake: real artists, real livelihoods. AMFA is about correcting a structural imbalance—one that has allowed broadcasters to monetize recordings without compensating those who made them. We appreciate the growing number of leaders in Congress working to get this right.

For more information on the American Music Fairness Act and the broader policy effort to align U.S. law with global norms, see the musicFIRST Coalition. They track the legislation, outline the issues, and provide a way to stay informed or engage if you choose.

@RonanFarrow and @AndrewMarantz: Sam Altman May Control Our Future—Can He Be Trusted?

Ronan Farrow and Andrew Marantz investigate Sam Altman’s leadership of OpenAI, based on internal documents and more than 100 interviews. They center on a core tension: Altman has positioned himself as a steward of humanity’s most powerful technology, yet many colleagues and insiders question whether he can be trusted with that responsibility. Internal memos compiled by senior figures—including chief scientist Ilya Sutskever—allege a pattern of misleading statements and evasiveness, particularly around AI safety and governance.  Shocking, ain’t it?

The piece traces OpenAI’s evolution from a nonprofit founded to prioritize safety over profit into a commercially driven company pursuing massive scale and valuation. Along the way, Altman is portrayed as highly ambitious, politically savvy, and willing to push boundaries—sometimes at the expense of transparency or institutional safeguards. 

It also situates these concerns within the broader stakes of artificial general intelligence: if such systems emerge, the individuals controlling them could wield unprecedented global power. The article ultimately raises an unresolved question—whether the rapid centralization of technological authority in a single leader and company is compatible with the level of trust and accountability that such power demands.

Read it in The New Yorker.

Victory in the Vetter v. Resnik Lawsuit: Artist Rights, Songwriter Advocacy, and the Power of Termination

At the center of Vetter v. Resnik are songwriters reclaiming what Congress promised them in a detailed and lengthy legislative negotiation over the 1976 revision to the Copyright Act—a meaningful second chance to terminate what the courts call “unremunerative transfers,” aka crappy deals. That principle comes into sharp focus through Cyril Vetter, whose perseverance brought this case to the Fifth Circuit, and Cyril’s attorney Tim Kappel, whose decades-long advocacy for songwriter rights helped frame the issues not as abstractions, but as lived realities.

Cyril won his case against his publisher at trial in a landmark judicial ruling by Chief Judge Shelly Dick. His publisher appealed Judge Dick’s ruling to the Fifth Circuit. As readers will remember, oral arguments in the case were held earlier this year. A bunch of songwriter and author groups, including the Artist Rights Institute, filed “friend of the court” briefs in favor of Cyril.

In a unanimous opinion, the United States Court of Appeals for the Fifth Circuit affirmed Judge Shelly Dick’s carefully reasoned trial-court ruling, holding that when an author terminates a worldwide grant, the recapture is also worldwide. It is not artificially limited to U.S. territory, which had been the industry practice. The court understood that anything less would hollow out Congress’s intent.

It is often said that the whole point of the termination law is to give authors (including songwriters) a “second bite at the apple,” which is why the Artist Rights Institute wrote (and was quoted by the 5th Circuit) that limiting the reversion to US rights only would be a “second bite at half the apple,” the opposite of Congressional intent.

Download: 25-30108-2026-01-12



What made this 5th Circuit decision especially meaningful for the creative community is that the Fifth Circuit did not reach it in a vacuum. Writing for the panel, Judge Carl Stewart expressly quoted the Artist Rights Institute amicus brief, observing:

“Denying terminating authors the full return of a worldwide grant leaves them with only half of the apple—the opposite of congressional intent.”

That sentence—simple, vivid, and unmistakably human—captured what this case has always been about.

Download: ARI.Amicus.Vetter.Final R



The Artist Rights Institute’s amicus brief did not appear overnight. It grew out of a longstanding relationship between songwriter advocate Tim Kappel and Chris Castle, a collaboration shaped over many years by shared concern for how statutory rights actually function—or fail to function—for creators in the real world.

When the Vetter appeal crystallized the stakes, that history mattered. It allowed ARI to move quickly, confidently, and with credibility—translating dense statutory language into a narrative to help courts understand that termination rights are supposed to restore leverage, not preserve a publisher’s foreign control veto through technicalities.

Crucially, the brief was inspired and strengthened by the voices of songwriter advocates and heirs, including Abby North (heir of composer Alex North), Blake Morgan (godson of songwriter Lesley Gore), and Angela Rose White (heir of legendary music director David Rose), and of course David Lowery and Nikki Rowling. The involvement of these heirs ensured the court understood context—termination is not merely about renegotiating deals for living authors. It is often about families, estates, and heirs—people for whom Congress explicitly preserved termination rights as a matter of intergenerational fairness.

The Fifth Circuit’s opinion reflects that understanding. By rejecting a cramped territorial reading of termination, the court avoided a result that would have undermined heirs’ rights just as surely as authors’ rights.

Vetter v. Resnik represents a rare and welcome alignment: an author willing to press his statutory rights all the way, advocates who understood the lived experience behind those rights, a district judge who took Congress at its word, and an appellate court willing to say plainly that “half of the apple” is not enough.

For the Artist Rights Institute, it was an honor to participate—to stand alongside Cyril Vetter, Tim Kappel, and the community of songwriter advocates and heirs whose experiences shaped a brief that helped the court see the full picture.

And for artists, songwriters, and their families, the decision stands as a reminder that termination rights mean what Congress said they mean—a real chance to reclaim ownership, not an illusion bounded by geography.

2026 Music Predictions: The Legal and Policy Fault Lines Ahead

By Chris Castle

I was grateful to Hypebot for publishing my 2026 music‑industry predictions, which focused on the legal and structural pressures already reshaping the business. For regular readers, I’m reposting those predictions here—and adding a few more that follow directly from the policy work, regulatory engagement, and royalty‑system scrutiny we’ve been immersed in over the past year with the Artist Rights Institute. These additional observations are less about trend‑spotting and more about where the underlying legal and institutional logic appears to be heading next.

1. AI Copyright Litigation Will Move From Abstract Theory to Operational Discovery

In 2026, the center of gravity in AI‑copyright cases will shift toward discovery that exposes how models are trained, weighted, filtered, and monetized. Courts will increasingly treat AI systems as commercial products rather than research experiments, and the action will move to discovery fights…for the good of humanity, ahem…rather than summary judgment rhetoric. The result will be pressure on platforms to settle, license, or restructure before full disclosure occurs, particularly since it’s becoming increasingly likely that every frontier AI lab has ripped off the world’s culture the old-fashioned way—they stole it off the Internet.

The next round of AI copyright litigation will come from fans: as more deals like the Disney/Sora deal are done, fans who use Sora or other AI to create separable rights (like new characters or new story lines), or even new universes built on old story lines (maybe new versions of the Luke/Darth/Han/Leia arc in the Old West), will start to get the idea that their IP is…well…their IP. If it’s used without compensating them or getting their permission, that whole copyright thing is going to start to get real for them.

2. Streaming Platforms Will Face Structural Payola Scrutiny, Not Just Royalty Complaints

Minimum‑payment thresholds, bundled offerings, and “greater‑of” formulas will no longer be treated as isolated business choices. Regulators and courts will begin to examine how these mechanisms function together to shift risk onto artists while preserving platform margins. Antitrust, consumer‑protection, and unfair‑competition theories will increasingly converge around the same conduct. Due to Spotify’s market dominance and its intimidation factor for the majors and for large and medium‑sized independent labels, these cases will have to come from independent artists.

3. The Copyright Office Will Approve a Conditional Redesignation of the MLC

Rather than granting an unconditional redesignation of the Mechanical Licensing Collective, the Copyright Office is likely to impose conditions tied to governance, transparency, and financial stewardship. This approach allows continuity for licensees while asserting supervisory authority grounded in the statute. The message will be clear: designation is provisional, not permanent.

Download: Digital-Licensing-Coordinator-to-USCO-2-Sept-22-2025

4. The MLC’s Gundecked Investment Policy Will Be Unwound or Materially Rewritten

The practice of investing unmatched royalties as a pooled asset is becoming legally and politically indefensible. In 2026, expect the investment policy to be unwound or rewritten under new regulations requiring pass‑through of gains or strict capital‑preservation limits. Once framed as a fiduciary issue rather than a finance strategy, the current model cannot survive intact.

It’s also worth noting that the MLC’s investment portfolio has grown so large ($1.212 billion) that its investment income reported on its 2023 tax return has also grown to an amount in excess of its operating costs as measured by the administrative assessment paid by licensees.

5. An MLC Independent Royalty‑Accounting and Systems Review Will Become Inevitable

As part of a conditional redesignation, the Copyright Office may require an end‑to‑end operational review of the MLC by a top‑tier royalty‑accounting firm. Unlike a SOC report, such a review would examine whether matching, data logic, and distributions actually produce correct outcomes. Once completed, that analysis would shape litigation, policy reform, and future oversight.

6. Foreign CMOs Will Push Toward Licensee‑Pays Models

Outside the U.S., collective management organizations face rising technology costs and political scrutiny over compensation. In response, many will explore shifting more costs to licensees rather than members, reframing CMOs as infrastructure providers. Ironically, the U.S. MLC experiment may accelerate this trend abroad given the MLC’s rich salaries and vast resources for developing poorly implemented tech.

These developments are not speculative in the abstract. They follow from incentives already in motion, records already being built, and institutions increasingly unable to rely on deference alone.

7.  Environmental Harms of AI Become a Core Climate Issue

We will start to see the AI labs normalize the concept of private energy generation on a massive scale to support data centers built in current green spaces, building or buying electric plants they do not intend to share. This whole idea that they will build small nuclear reactors and sell the excess back to the local grid is crazy—there won’t be any excess, and what about their behavior over the last 25 years makes you think they’ll share a thing?

So some time after Los Angeles rezones Griffith Park commercial and sells the Greek Theater to Google for a new data center and private nuclear reactor and Facebook buys the Diablo Canyon reactor, the Music Industry Climate Collective will formally integrate AI’s ecological footprint into their national and international policy agendas. After mounting evidence of data‑center water depletion, aquifer stress, and grid destabilization — particularly in drought‑prone regions — climate coalitions will conceptually reclassify AI infrastructure as a high‑impact industrial activity.

This will become acute after people realize they cannot expect the state or federal government to require new state permitting regimes because of the overwhelming political influence of Big Tech in the form of AI Viceroy-for-Life David Sacks. (He’s not going anywhere in a post-Trump era.) This will lead to environmental‑justice litigation over siting decisions and pressure to require reporting of AI‑related energy, water, and land use.

8.  Criminal RICO Case Against StubHub and Affiliated Resale Networks

By late 2026, the Department of Justice brings a landmark criminal RICO indictment targeting StubHub‑linked reseller networks and individual reseller financiers for systemic ticketing fraud and money laundering. The enterprise theory alleges that major resellers, platform intermediaries, lenders, and bot‑operators coordinated to engage in wire fraud, market manipulation, speculative ticketing, and deceptive consumer practices at international scale. Prosecutors present evidence of an organized structure that used bots, fabricated scarcity, misrepresentation of seat availability, and price‑fixing algorithms to inflate profits.

This becomes the first major criminal RICO prosecution in the secondary‑ticketing economy and triggers parallel state‑level investigations and civil RICO suits. Public resellers like StubHub will face shareholder lawsuits and securities fraud allegations.

Just another bright sunshiny day.

[A version of this post first appeared on MusicTechPolicy]




NYT: Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends


The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.

But for all its detail, the Times buried the lede.

The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.

Of all the corrupt anecdotes in the must-read New York Times article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest that are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out.

There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.


If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.

Without state enforcement, A.I. companies will face no meaningful accountability for:

  • child suicide induced by unregulated synthetic content
  • mass copyright theft embedded into permanent model weights
  • biometric and voiceprint extraction without consent
  • data-center sprawl that overwhelms local water, energy, and zoning systems
  • surveillance architectures exported globally
  • algorithmic harms that cannot be litigated under preempted state laws

These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes impose duties possibly stricter than federal law. The A.I. moratorium’s preemption would vaporize these claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.

The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.

The public will be living inside the system he designed.

There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. The party labels swap and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.


So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — that the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.

The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.

Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”

The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.

There Is No ‘Right to Train’: How AI Labs Are Trying to Manufacture a Safe Harbor for Theft

Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines spin up a new doctrine: the “right to train” AI on copyrighted works. But let’s be clear — no such right exists and probably never will. That doesn’t mean they won’t keep trying.

A “right to train” is not found anywhere in the Copyright Act or any other law. It’s also not found in the fair-use court cases that the AI lobby leans on. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity. Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” had been there all along.

How the “Right to Train” Narrative Took Shape

The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.

That analogy collapses under scrutiny. First of all, humans typically bought the book they read or checked it out from a library.  Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.

But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because they lose on training on stolen goods, as in the Anthropic and Meta cases), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t, no court has ever ruled that it is, and likely none ever will.

Fair Use Is Not a Safe Harbor

Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand new safe harbor: a default assumption that all training is fair use unless proven otherwise. They love a safe harbor in Silicon Valley and routinely abuse the ones they have: Section 230, the DMCA, and Title I of the Music Modernization Act.

That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material (like Anthropic, Meta, and probably all of them to one degree or another) can’t launder infringement through the word “training.”

Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”

Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement like Spotify got in the Music Modernization Act and preempt accountability for the sources they used.  

And not only do they want a safe harbor — they want it for free.  No licenses, no royalties, no dataset audits, no compensation. What do they want?  FREE STUFF.  When do they want it?  NOW!  Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.

The Real Motive Behind the Push

The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.

This is particularly important for books.  Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style. 

Without books, AI labs have no business.  That’s why they steal books.  Very simple, really.

Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially-manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and would never agree to.

But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased.  The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.

The Myth of the Inevitable Machine

AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.

If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember—property rights are innovation, too, just ask Hernando de Soto (Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As the Artist Rights Institute and many others have demonstrated, “robots.txt” disclaimers buried in obscure places are not enough. Artists need a legally enforceable system that platforms must respect and that regulators can audit—not another worthless DMCA-style notice and notice and notice and maybe takedown system.

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing—pirate libraries are clearly front and center in AI litigation, which has largely turned into piracy litigation because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. On top of it, the truth of the case is that the man Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

@DanMilmo: Top UK artists urge Starmer to protect their work on eve of Trump visit

UK artists including Paul McCartney, Kate Bush and Elton John have urged Prime Minister Keir Starmer to protect creators ahead of a UK-US tech pact tied to President Donald Trump’s visit. In a letter, they accuse Labour of blocking transparency rules that would force AI firms to disclose their training data, and warn that proposals enabling training on copyrighted works without permission could allow an artist’s life’s work to be stolen. Citing the International Covenant on Economic, Social and Cultural Rights, the Berne Convention and the European Convention on Human Rights, they frame the issue as a human-rights breach. Peer Beeban Kidron criticised the US-heavy composition of the government’s working groups. The government says no decision has been made and has promised a report by March.

Read the post on The Guardian

Senator Josh @HawleyMO Throws Down on Big Tech’s Copyright Theft

I believe Americans should have the ability to defend their human data, and their rights to that data, against the largest copyright theft in the history of the world.

Millions of Americans have spent the past two decades speaking and engaging online. Many of you here today have online profiles and writings and creative productions that you care deeply about. And rightly so. It’s your work. It’s you.

What if I told you that AI models have already been trained on enough copyrighted works to fill the Library of Congress 22 times over? For me, that makes it very simple: We need a legal mechanism that allows Americans to freely defend those creations. I say let’s empower human beings by protecting the very human data they create. Assign property rights to specific forms of data, create legal liability for the companies who use that data and, finally, fully repeal Section 230. Open the courtroom doors. Let the people sue those who take their rights, including those who do it using AI.

Third, we must add sensible guardrails to the emergent AI economy and hold concentrated economic power to account. These giant companies have made no secret of their ambitions to radically reshape our economic life. So, we ought to require transparency and reporting each time they replace a working man with a machine.

And the government should inspect all of these frontier AI systems, so we can better understand what the tech titans plan to build and deploy. 

Ultimately, when it comes to guardrails, protecting our children should be our lodestar. You may have seen recently how Meta green-lit its own chatbots to have sensual conversations with children—yes, you heard me right. Meta’s own internal documents permitted lurid conversations that no parent would ever contemplate. And most tragically, ChatGPT recently encouraged a troubled teenager to commit suicide—even providing detailed instructions on how to do it.

We absolutely must require and enforce rigorous technical standards to bar inappropriate or harmful interactions with minors. And we should think seriously about age verification for chatbots and agents. We don’t let kids drive or drink or do a thousand other harmful things. The same standards should apply to AI.

Fourth and finally, while Congress gets its act together to do all of this, we can’t kneecap our state governments from moving first. Some of you may have seen that there was a major effort in Congress to ban states from regulating AI for 10 years—and a whole decade is an eternity when it comes to AI development and deployment. This terrible policy was nearly adopted in the reconciliation bill this summer, and it could have thrown out strong anti-porn and child online safety laws, to name a few. Think about that: conservatives out to destroy the very concept of federalism that they cherish … all in the name of Big Tech. Well, we killed it on the Senate floor. And we ought to make sure that bad idea stays dead.

We’ve faced technological disruption before—and we’ve acted to make technology serve us, the people. Powered flight changed travel forever, but you can’t land a plane on your driveway. Splitting the atom fundamentally changed our view of physics, but nobody expects to run a personal reactor in their basement. The internet completely recast communication and media, but YouTube will still take down your video if you violate a copyright. By the same token, we can—and we should—demand that AI empower Americans, not destroy their rights . . . or their jobs . . . or their lives.