@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
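For the technically inclined, the loop is easy to formalize. Below is a minimal Python sketch that models the paragraph’s money flows as a directed graph and walks it looking for cycles. The company names and deal labels come from the paragraph above; the dollar figure on the Oracle edge is the one cited here, and everything else is illustrative, not anyone’s actual deal terms.

```python
# The article's money flows as a directed graph. Labels are illustrative
# placeholders except the $300B OpenAI -> Oracle deal cited above.
flows = {
    "Microsoft": [("OpenAI", "equity investment")],
    "OpenAI": [("Microsoft", "cloud spend"),
               ("Nvidia", "chip purchases"),
               ("Oracle", "$300B cloud deal")],
    "Nvidia": [("OpenAI", "equity investment")],
    "Oracle": [("Nvidia", "chip purchases")],
}

def find_cycles(graph):
    """Return each simple cycle once, anchored at its alphabetically first node."""
    cycles = []
    def dfs(node, path):
        for neighbor, _label in graph.get(node, []):
            if neighbor == path[0] and path[0] == min(path):
                cycles.append(path + [neighbor])   # closed a loop
            elif neighbor not in path:
                dfs(neighbor, path + [neighbor])   # keep walking
    for start in graph:
        dfs(start, [start])
    return cycles

for cycle in find_cycles(flows):
    print(" -> ".join(cycle))
# Prints three loops, e.g. Microsoft -> OpenAI -> Microsoft:
# money that returns to its source gets booked as "growth" on the way.
```

Every loop the search prints is money that comes back to its source while getting booked as somebody’s “growth” along the way.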

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

There Is No ‘Right to Train’: How AI Labs Are Trying to Manufacture a Safe Harbor for Theft

Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about “transformative fair use.” Within hours, the headlines spin a new doctrine: the right to train AI on copyrighted works. But let’s be clear: no such right exists and probably never will. That doesn’t mean they won’t keep trying.

A “right to train” is not found anywhere in the Copyright Act or any other law. It’s not found in the fair-use cases the AI lobby leans on, either. It’s a slogan and it’s spin, not a statute. What we’re watching is a coordinated effort by the major AI labs to manufacture a safe harbor through litigation — using every favorable fair-use ruling to carve out what looks like a precedent for blanket immunity. Then they’ll get one of their shills in Congress or a state legislature to introduce legislation as though a “right to train” had been there all along.

How the “Right to Train” Narrative Took Shape

The phrase first appeared in tech-industry briefs and policy papers describing model training as a kind of “machine learning fair use.” The logic goes like this: since humans can read a book and learn from it, a machine should be able to “learn” from the same book without permission.

That analogy collapses under scrutiny. First of all, humans typically bought the book they read or checked it out from a library.  Humans don’t make bit-for-bit copies of everything they read, and they don’t reproduce or monetize those copies at global scale. AI training does exactly that — storing expressive works inside model weights, then re-deploying them to generate derivative material.
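If “storing expressive works inside model weights” sounds abstract, here is a deliberately tiny Python sketch of the mechanical point. It is a toy character-level lookup model, nothing like a frontier transformer, but it shows how trained parameters can amount to a verbatim copy of the training text; the sentence is a stand-in for any protected work.

```python
# Toy "training": a character-level lookup model. Nothing like a real
# transformer, but it makes the point that trained parameters can
# encode a work verbatim.
from collections import defaultdict

text = "We hold these truths to be self-evident."
ORDER = 12  # context window, in characters

weights = defaultdict(lambda: defaultdict(int))
for i in range(len(text) - ORDER):
    context, nxt = text[i:i + ORDER], text[i + ORDER]
    weights[context][nxt] += 1  # "learning" = counting continuations

# "Generation": greedy decoding from the learned counts.
out = text[:ORDER]
while out[-ORDER:] in weights:
    counts = weights[out[-ORDER:]]
    out += max(counts, key=counts.get)  # most likely next character

assert out == text  # the model's "weights" contain the entire work
print(out)
```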

But the repetitive chant of the term “right to train” serves a purpose: to normalize the idea that AI companies are entitled to scrape, store, and replicate human creativity without consent. Each time a court finds a narrow fair-use defense in a context that doesn’t involve piracy or derivative outputs (because they lose on training on stolen goods, as in the Anthropic and Meta cases), the labs and their shills trumpet it as proof that training itself is categorically protected. It isn’t; no court has ever ruled that it is, and likely none ever will.

Fair Use Is Not a Safe Harbor

Fair use is a case-by-case defense to copyright infringement, not a standing permission slip. It weighs purpose, amount, transformation, and market effect — all of which vary depending on the facts. But AI companies are trying to convert that flexible doctrine into a brand-new safe harbor: a default assumption that all training is fair use unless proven otherwise. Silicon Valley loves a safe harbor and routinely abuses the ones it has: Section 230, the DMCA, and Title I of the Music Modernization Act.

That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material (as Anthropic and Meta did, and probably all of them to one degree or another) can’t launder infringement through the word “training.”

Even if courts were to recognize limited fair use for truly lawful training, that protection would never extend to datasets built from pirate websites, torrent mirrors, or unlicensed repositories like Sci-Hub, Z-Library, or Common Crawl’s scraped paywalls—more on the scummy Common Crawl another time. The DMCA’s safe harbors don’t protect platforms that knowingly host stolen goods — and neither would any hypothetical “right to train.”

Yet a safe harbor is precisely what the labs are seeking: a doctrine that would retroactively bless mass infringement, like the one Spotify got in the Music Modernization Act, and preempt accountability for the sources they used.

And not only do they want a safe harbor — they want it for free.  No licenses, no royalties, no dataset audits, no compensation. What do they want?  FREE STUFF.  When do they want it?  NOW!  Just blanket immunity, subsidized by every artist, author, and journalist whose work they ingested without consent or payment.

The Real Motive Behind the Push

The reason AI companies need a “right to train” is simple: without it, they have no reliable legal basis for the data that powers their models, and they are too cheap to pay and too careless to take the time to license. Most of their “training corpora” were built years before any licenses were contemplated — scraped from the open web, archives, and pirate libraries under the assumption that no one would notice.

This is particularly important for books.  Training on books is vital for AI models because books provide structured, high-quality language, complex reasoning, and deep cultural context. They teach models coherence, logic, and creativity that short-form internet text lacks. Without books, AI systems lose depth, nuance, and the ability to understand sustained argument, narrative, and style. 

Without books, AI labs have no business.  That’s why they steal books.  Very simple, really.

Now that creators are suing, the labs are trying to reverse-engineer legitimacy. They want to turn each court ruling that nudges fair use in their direction into a brick in the wall of a judicially manufactured safe harbor — one that Congress never passed and rights-holders never agreed to and never would.

But safe harbors are meant to protect good-faith intermediaries who act responsibly once notified of infringement. AI labs are not intermediaries; they are direct beneficiaries. Their entire business model depends on retaining the stolen data permanently in model weights that cannot be erased.  The “right to train” is not a right — it’s a rhetorical weapon to make theft sound inevitable and a demand from the richest corporations in commercial history for yet another government-sponsored subsidy of infringement by bad actors.

The Myth of the Inevitable Machine

AI’s defenders claim that training on copyrighted works is as natural as human learning. But there’s nothing natural about hoarding other people’s labor at planetary scale and calling it innovation. The truth is simpler: the “right to train” is a marketing term invented to launder unlawful data practices into respectability.

If courts and lawmakers don’t call it what it is — a manufactured safe harbor for piracy to benefit some of the biggest free riders who ever snarfed down corporate welfare — then history will repeat itself. What Grokster tried to do with distribution, AI is trying to do with cognition: privatize the world’s creative output and claim immunity for the theft.

“You don’t need to train on novels and pop songs to get the benefits of AI in science” @ednewtonrex


You Don’t Need to Steal Art to Cure Cancer: Why Ed Newton-Rex Is Right About AI and Copyright

Ed Newton-Rex said the quiet truth out loud: you don’t need to scrape the world’s creative works to build AI that saves lives. Or even beat the Chinese Communist Party.

It’s a myth that AI “has to” ingest novels and pop lyrics to learn language. Models acquire syntax, semantics, and pragmatics from any large, diverse corpus of natural language. That includes transcribed speech, forums, technical manuals, government documents, Wikipedia, scientific papers, and licensed conversational data. Speech systems learn from audio–text pairs, not necessarily fiction; text models learn distributional patterns wherever language appears. Of course, literary works can enrich style, but they’re not necessary for competence: instruction tuning, dialogue data, and domain corpora yield fluent models without raiding copyrighted art. In short, creative literature is optional seasoning, not the core ingredient for teaching machines to “speak.”
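None of this requires exotic engineering, either. A corpus pipeline can gate admission on license metadata, as in the minimal sketch below; the field names, allowlist, and sample records are hypothetical illustrations, not any real pipeline’s schema.

```python
# Hypothetical license gate for a training corpus. The field names,
# allowlist, and sample records are illustrations, not a real schema.
ALLOWED_LICENSES = {"public-domain", "CC0-1.0", "CC-BY-4.0", "licensed-by-contract"}

def is_trainable(doc: dict) -> bool:
    """Admit a document only on an affirmative license record;
    unknown provenance defaults to excluded."""
    return doc.get("license") in ALLOWED_LICENSES

corpus = [
    {"id": 1, "source": "gov-report",    "license": "public-domain"},
    {"id": 2, "source": "scraped-blog",  "license": None},          # unknown
    {"id": 3, "source": "pirate-mirror", "license": "unlicensed"},  # excluded
    {"id": 4, "source": "news-archive",  "license": "licensed-by-contract"},
]

training_set = [d for d in corpus if is_trainable(d)]
print([d["id"] for d in training_set])  # [1, 4]
```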

Google’s new cancer-therapy paper proves the point. Their model wasn’t trained on novels, lyrics, or paintings. It was trained responsibly on scientific data. And yet it achieved real, measurable progress in biomedical research. That simple fact dismantles one of Silicon Valley’s most persistent myths: that copyright is somehow an obstacle to innovation.

You don’t need to train on Joni Mitchell to discover a new gene pathway. You don’t need to ingest John Coltrane to find a drug target. AI used for science can thrive within the guardrails of copyright because science itself already has its own open-data ecosystems—peer-reviewed, licensed, and transparent.

The companies like Anthropic and Meta insisting that “fair use” covers mass ingestion of stolen creative works aren’t curing diseases; they’re training entertainment engines. They’re ripping off artists’ livelihoods to make commercial chatbots, story generators, and synthetic-voice platforms designed to compete against the very creators whose works they exploited. That’s not innovation—it’s market capture through appropriation.

They do it for reasons as old as time—they do it for the money.

The ethical divide is clear:

  • AI for discovery builds on licensed scientific data.
  • AI for mimicry plunders culture to sell imitation.

We should celebrate the first and regulate the second. Upholding copyright and requiring provenance disclosures doesn’t hinder progress—it restores integrity. The same society that applauds AI in medical breakthroughs can also insist that creative industries remain human-centered and law-abiding. Civil-military fusion doesn’t mean there are only two ingredients in the gumbo of life.

If Google can advance cancer research without stealing art, so can everyone else, and Google can hardly justify keeping different rules for the entertainment side of its business or investment portfolio. The choice isn’t between curing cancer and protecting artists—it’s between honesty and opportunism. The repeated whinging of AI labs about “because China” would be a lot more believable if they used their political influence to get the CCP to release Hong Kong activist Jimmy Lai from stir. We can join Jimmy and his amazingly brave son Sebastian and say “because China,” too. #FreeJimmyLai

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember—property rights are innovation, too; just ask Hernando de Soto (The Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.
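To make “provenance” concrete: a usable record needs nothing more exotic than a content hash, a source, a license, and a timestamp. Here is a minimal Python sketch; the schema and field names are hypothetical illustrations, not a proposed regulatory standard.

```python
# Sketch of a per-work provenance record: what was used, when, and how.
# The schema is a hypothetical illustration, not a regulatory standard.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(work_bytes: bytes, source_url: str,
                      license_terms: str, use: str) -> dict:
    return {
        "content_sha256": hashlib.sha256(work_bytes).hexdigest(),  # the exact copy ingested
        "source_url": source_url,
        "license": license_terms,   # e.g. "licensed", "public-domain", "unknown"
        "use": use,                 # e.g. "pretraining", "fine-tuning"
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(b"<bytes of the ingested work>",
                           "https://example.com/song-lyrics",
                           "unknown", "pretraining")
print(json.dumps(record, indent=2))
```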


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As the Artist Rights Institute and many others have demonstrated, robots.txt disclaimers buried in obscure places are not enough. Artists need a legally enforceable system — not another worthless DMCA-style notice and notice and notice and notice and notice and maybe-takedown regime — that platforms must respect and that regulators can audit.
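To see why robots.txt fails as an opt-out, look at how it actually works. The sketch below uses Python’s standard-library robots.txt parser; the bot name and URLs are placeholders. The key point is structural: the crawler, not the creator, decides whether this check ever runs.

```python
# Why robots.txt is not a legally meaningful opt-out: compliance is
# entirely voluntary on the crawler's side. A "polite" scraper checks
# the file; an impolite one simply skips the check and fetches anyway.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the site's stated preferences

url = "https://example.com/lyrics/song.html"  # placeholder URL
if rp.can_fetch("MyTrainingBot", url):
    print("robots.txt permits this fetch")
else:
    print("robots.txt asks bots not to fetch this URL")
    # ...but nothing in the protocol stops a crawler from ignoring
    # this branch entirely; it is an honor system, not enforcement.
```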

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing — pirate libraries are front and center in AI litigation, much of which has turned into piracy litigation because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. On top of it, the truth of the case is that the man Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

@johnpgatta Interviews @davidclowery in Jambands

David Lowery sits down with John Patrick Gatta at Jambands for a wide-ranging conversation that threads 40 years of Camper Van Beethoven and Cracker through the stories behind David’s three-disc release Fathers, Sons and Brothers and how artists survive the modern music economy. Songwriter rights, road-tested bands, why records still matter — it’s all there. Read it here.

David Lowery toured this year with a mix of shows celebrating the 40th anniversary of Camper Van Beethoven’s debut, Telephone Free Landslide Victory, duo and band gigs with Cracker, as well as solo dates promoting his recently-released Fathers, Sons and Brothers.

Fathers, Sons and Brothers, the 28-track musical memoir of Lowery’s personal life, explores childhood memories, drugs at Disneyland, and broken relationships. Of course, it also tackles his lengthy career as an indie and major-label artist whose catalog highlights include the alt-rock classic “Take the Skinheads Bowling” and the commercial breakthroughs “Teen Angst” and “Low.” The album works as a selection of songs that encapsulates much of his musical history—folk, country, and rock—as well as an illuminating narrative that relates the ups, downs, tenacity, reflection, and resolve of more than four decades as a musician.

9/18/25: Save the Date! @ArtistRights Institute and American University Kogod School to host Artist Rights Roundtable on AI and Copyright Sept. 18 in Washington, DC

🎙️ Artist Rights Roundtable on AI and Copyright:  Coffee with Humans and the Machines            

📍 Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016 | 🗓️ September 18, 2025 | 🕗 8:00 a.m. – 12:00 noon

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

Join the Artist Rights Institute (ARI) and Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing

Speakers:
Dr. Moiya McTier, Human Artistry Campaign
Ryan Lehnning, Assistant General Counsel, International at SoundExchange
The Chatbot
Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation, Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:
Erin McAnally, Executive Director, Songwriters of North America
Dr. Richard James Burgess, CEO A2IM
Dr. David C. Lowery, Terry College of Business, University of Georgia

Moderator: Linda Bloss Baum, Director, Business and Entertainment Program, Kogod School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch Eventbrite and this space, and visit ArtistRightsInstitute.org for updates and speaker announcements.

@ArtistRights Newsletter 8/18/25: From Jimmy Lai’s show trial in Hong Kong to the redesignation fight over the Mechanical Licensing Collective, this week’s stories spotlight artist rights, ticketing reform, AI scraping, and SoundExchange’s battle with SiriusXM.

Save the Date! September 18 Artist Rights Roundtable in Washington produced by Artist Rights Institute/American University Kogod Business & Entertainment Program. Details at this link!

Artist Rights

JIMMY LAI’S ORDEAL: A SHOW TRIAL THAT SHOULD SHAME THE WORLD (MusicTechPolicy/Chris Castle)

Redesignation of the Mechanical Licensing Collective

Ex Parte Review of the MLC by the Digital Licensee Coordinator

Ticketing

StubHub Updates IPO Filing Showing Growing Losses Despite Revenue Gain (MusicBusinessWorldwide/Mandy Dalugdug)

Lewis Capaldi Concert Becomes Latest Ground Zero for Ticket Scalpers (Digital Music News/Ashley King)

Who’s Really Fighting for Fans? Chris Castle’s Comment in the DOJ/FTC Ticketing Consultation (Artist Rights Watch)

Artificial Intelligence

MUSIC PUBLISHERS ALLEGE ANTHROPIC USED BITTORRENT TO PIRATE COPYRIGHTED LYRICS (MusicBusinessWorldwide/Daniel Tencer)

AI Weather Image Piracy Puts Storm Chasers, All Americans at Risk (Washington Times/Brandon Clemen)

TikTok After Xi’s Qiushi Article: Why China’s Security Laws Are the Whole Ballgame (MusicTechSolutions/Chris Castle)

Reddit Will Block the Internet Archive (to stop AI scraping) (The Verge/Jay Peters) 

SHILLING LIKE IT’S 1999: ARS, ANTHROPIC, AND THE INTERNET OF OTHER PEOPLE’S THINGS (MusicTechPolicy/Chris Castle)

SoundExchange v. SiriusXM

SOUNDEXCHANGE SLAMS JUDGE’S RULING IN SIRIUSXM CASE AS ‘ENTIRELY WRONG ON THE LAW’ (MusicBusinessWorldwide/Mandy Dalugdug)

PINKERTONS REDUX: ANTI-LABOR NEW YORK COURT ATTEMPTS TO CUT OFF LITIGATION BY SOUNDEXCHANGE AGAINST SIRIUS/PANDORA (MusicTechPolicy/Chris Castle)

@RickBeato on AI Artists

Is it a thing or is it disco? Our fave Rick Beato has a cautionary tale in this must-watch video: AI can mimic but not truly create art. As generative tools become more prevalent, he urges thoughtful curation, artist-centered policies, and an emphasis on emotionally rich, human-driven creativity–also known as creativity. h/t Your Morning Coffee, our favorite podcast.

United for Artists’ Rights: Amicus Briefs Filed in Vetter v. Resnik Support Global Copyright Termination for Songwriters and Authors: Brief by Music Artists Coalition, Black Music Action Coalition, Artists Rights Alliance, Songwriters Of North America, and Screen Actors Guild-American Federation Of Television And Radio Artists

In Vetter v. Resnik, songwriter Cyril Vetter won at trial in Baton Rouge, allowing him to recover worldwide rights in his song “Double Shot of My Baby’s Love” after serving his 35-year termination notice on his former publisher, Resnik Music Group. The publisher appealed. The Fifth Circuit Court of Appeals will hear the case and is currently weighing whether U.S. copyright termination rights include “foreign” territories—a question that strikes at the heart of artists’ ability to reclaim their work worldwide (whatever “foreign” means).

Cyril’s attorney Tim Kappel explains the case if you need an explainer:

An astonishing number of friend-of-the-court briefs were filed by songwriter and artist groups. We’re going to post them all, and today’s brief is by Music Artists Coalition, Black Music Action Coalition, Artists Rights Alliance, Songwriters of North America, and Screen Actors Guild-American Federation of Television and Radio Artists–that’s right, the SAG-AFTRA union is with us.

We believe the answer must be yes. Congress gave creators and their heirs the right to a “second bite at the apple” to regain control of their work after decades, and that promise means little if global rights are excluded. The outcome of this case could either reaffirm that promise—or open the door for multinational publishers to sidestep it entirely.

That’s why we’re sharing friend-of-the-court briefs from across the creative communities. Each one brings a different perspective—but all defend the principle that artists deserve a real, global right to take back what’s theirs, because as Chris said, Congress did not give authors a second bite at half the apple.

Read the latest amicus brief below, watch this space for more.

Hey Budweiser, You Give Beer a Bad Name

In a world where zero royalties becomes a brag, and one second of music is one second too far.

Let me set the stage: Cannes Lions is the annual eurotrash…to coin a phrase…circular self-congratulatory hype fest at which the biggest brands and ad agencies in the world if not the Solar System spend unreal amounts of money telling each other how wonderful they are. Kind of like HITS Magazine goes to Cannes but with a real budget. And of course the world’s biggest ad platform–guess who–has a major presence there among the bling and yachts of the elites tied up in Yachtville by the Sea. And of course they give each other prizes, and long-time readers know how much we love a good prize, Nyan Cat wise.

Enter the King of Swill, the mind-numbingly stupid Budweiser marketing department. Or as they say in Cannes, Le roi de la bibine.

Credit where it’s due: British Bud-hater and our friend Chris Cooke at CMU flagged this jaw-dropper from Cannes Lions, where Budweiser took home the Grand Prix for its “One‑Second Ad” campaign—a series of ultra-short TikTok clips, each featuring one second of a hook from an iconic song. The gimmick? Tease the audience just long enough to trigger nostalgia, then let the internet do the rest. The beer is offensive enough to any right-thinking Englishman, but the theft? Ooh la la.

Cannes Clown

Budweiser’s award-winning brag? “Zero ads were skipped. $0 spent on music right$.” Yes, that’s correct: “right$.”

That quote should hang in a museum of creative disinformation.

There’s an old copyright myth known as the “7‑second rule”—the idea that using a short snippet of a song (usually under 7 seconds) doesn’t require a license. It’s pure urban legend. No court has ever upheld such a rule, but it sticks around because music users desperately want it to be true. Budweiser didn’t just flirt with the myth—it took the myth on a date to Short Attention Span Theater, built an ad campaign around it, and walked away with the biggest prize in advertising to the cheers of Googlers everywhere.

When Theft from Artists Becomes a Business Model—Again

But maybe this kind of stunt shouldn’t come as a surprise. When the richest corporations in commercial history are openly scraping, mimicking, and monetizing millions of copyrighted works to train AI models—without permission and without payment—and so far getting away with it, it sends a signal. A signal that says: “This isn’t theft, it’s innovation.” Yeah, that’s the ticket. Give them a prize.

So of course Budweiser’s corporate brethren start thinking: “Me too.”

As Austin songwriter Guy Forsyth wrote in “Long Long Time,” “Americans are freedom-loving people, and nothing says freedom like getting away with it.” That lyric, in this context, resonates like a manifesto for scumbags.

The Immorality of Virality

For artists and the musicians and vocalists who created the value that Budweiser is extracting, the campaign’s success is a masterclass in bad precedent. It’s one thing to misunderstand copyright; it’s another to market that misunderstanding as a feature. When global brands publicly celebrate not paying for music—in Cannes, of all places—even though that music is the very foundation of their ad’s emotional resonance, they send a corrosive signal to the entire creative economy. And, frankly, to fans.

Oops!… I Did It Again, bragged Budweiser, proudly skipping royalties like it’s Free Fallin’, hoping no one notices they’re just Smooth Criminals playing Cheap Thrills with other people’s work. It’s not Without Me—it’s without paying anyone—because apparently Money for Nothing is still the vibe, and The Sound of Silence is what they expect from artists they’ve ghosted.

Because make no mistake: even one second of a recording can be legally actionable, particularly when the intentional infringing conspiracy gets a freaking award for doing it. That’s not just law—it’s basic respect, which is kind of the same thing. That makes Budweiser’s campaign less of a legal grey area and more of a cultural red flag with a bunch of zeros attached, meaning the ultimate jury award from a real jury, not a Cannes jury.

This is the immorality of virality: weaponizing cultural shorthand to score branding points, while erasing the very artists who make those moments recognizable. When the applause dies down in Yachtville, what’s left is a case study in how to win by stealing — not creating.