“You don’t need to train on novels and pop songs to get the benefits of AI in science” @ednewtonrex


You Don’t Need to Steal Art to Cure Cancer: Why Ed Newton-Rex Is Right About AI and Copyright

Ed Newton-Rex said the quiet truth out loud: you don’t need to scrape the world’s creative works to build AI that saves lives, or even to beat the Chinese Communist Party.

It’s a myth that AI “has to” ingest novels and pop lyrics to learn language. Models acquire syntax, semantics, and pragmatics from any large, diverse corpus of natural language. That includes transcribed speech, forums, technical manuals, government documents, Wikipedia, scientific papers, and licensed conversational data. Speech systems learn from audio–text pairs, not necessarily fiction; text models learn distributional patterns wherever language appears. Of course, literary works can enrich style, but they’re not necessary for competence: instruction tuning, dialogue data, and domain corpora yield fluent models without raiding copyrighted art. In short, creative literature is optional seasoning, not the core ingredient for teaching machines to “speak.”

Google’s new cancer-therapy paper proves the point. Their model wasn’t trained on novels, lyrics, or paintings. It was trained responsibly on scientific data. And yet it achieved real, measurable progress in biomedical research. That simple fact dismantles one of Silicon Valley’s most persistent myths: that copyright is somehow an obstacle to innovation.

You don’t need to train on Joni Mitchell to discover a new gene pathway. You don’t need to ingest John Coltrane to find a drug target. AI used for science can thrive within the guardrails of copyright because science itself already has its own open-data ecosystems—peer-reviewed, licensed, and transparent.

Companies like Anthropic and Meta that insist “fair use” covers mass ingestion of stolen creative works aren’t curing diseases; they’re training entertainment engines. They’re ripping off artists’ livelihoods to make commercial chatbots, story generators, and synthetic-voice platforms designed to compete against the very creators whose works they exploited. That’s not innovation—it’s market capture through appropriation.

They do it for a reason as old as time—they do it for the money.

The ethical divide is clear:

  • AI for discovery builds on licensed scientific data.
  • AI for mimicry plunders culture to sell imitation.

We should celebrate the first and regulate the second. Upholding copyright and requiring provenance disclosures doesn’t hinder progress—it restores integrity. The same society that applauds AI in medical breakthroughs can also insist that creative industries remain human-centered and law-abiding. Civil-military fusion doesn’t imply that there are only two ingredients in the gumbo of life.

If Google can advance cancer research without stealing art, so can everyone else, and so can Google keep different rules for the entertainment side of its business and investment portfolio. The choice isn’t between curing cancer and protecting artists—it’s between honesty and opportunism. The repeated whinging of AI labs about “because China” would be a lot more believable if they used their political influence to get the CCP to release Hong Kong activist Jimmy Lai from stir. We can join Jimmy and his amazingly brave son Sebastien and say “because China,” too. #FreeJimmyLai

9/18/25: Save the Date! @ArtistRights Institute and American University Kogod School to host Artist Rights Roundtable on AI and Copyright Sept. 18 in Washington, DC

🎙️ Artist Rights Roundtable on AI and Copyright: Coffee with Humans and the Machines

📍 Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016 | 🗓️ September 18, 2025 | 🕗 8:00 a.m. – 12:00 noon

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

Join the Artist Rights Institute (ARI) and Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing

Speakers:
Dr. Moiya McTier, Human Artistry Campaign
Ryan Lehnning, Assistant General Counsel, International at SoundExchange
The Chatbot
Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation, Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:
Erin McAnally, Executive Director, Songwriters of North America
Dr. Richard James Burgess, CEO A2IM
Dr. David C. Lowery, Terry College of Business, University of Georgia

Moderator: Linda Bloss Baum, Director Business and Entertainment Program, Kogod School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch Eventbrite and this space, and visit ArtistRightsInstitute.org for updates and speaker announcements.