Phonorecords V and the “39 Steps” Problem: Time for the CRB to Fix Streaming Mechanicals

Everybody knows that the boat is leaking, everybody knows that the captain lied….
Everybody Knows by Leonard Cohen

We are now well into the next Phonorecords proceeding at the Copyright Royalty Board (CRB), where the government sets mechanical royalty rates for songwriters. Readers may remember the last rate-setting, Phonorecords IV, where Trichordist helped spread the word about the attempted end run around songwriters to freeze physical rates (vinyl & downloads) at 9.1¢ for another five years, an effort that failed and instead resulted in an increase to 12¢ plus a cost of living adjustment that has now brought the rate to 13.1¢. (In a demonstration of humility and lack of pomposity, these proceedings are given Roman numerals like the Super Bowl, so the current example of gladiatorial combat is titled Phonorecords V.)

Inside the years-long litigation-like proceeding, there is an issue hiding in plain sight inside the existing and ancient streaming mechanical royalty rate structure that we fondly call “the 39 steps” in honor of John Buchan, Alfred Hitchcock and Richard Hannay. Despite the blood lust for complexity from the ancien régime that clings to its one-sided royalty pool, there is one part of this unfair business practice that the CRB can and should address this time around.

Start with the basics. The streaming mechanical formula—the so-called “39 steps”—is built on a simple premise: we are calculating royalties for the use of musical works protected by the Copyright Act. The inputs and deductions in that formula are not abstract accounting categories. They are supposed to reflect real payments for real statutory rights.

That premise is now under pressure because of…wait for it…artificial intelligence and the AI slop that is flooding the market.

The rise of generative AI has introduced a new category of output that does not fit neatly within the Copyright Act. The U.S. Copyright Office has made clear that works generated entirely by AI are not copyrightable, and that protection extends only to the extent of meaningful human authorship, in a proportion yet to be determined. (Courts have moved in the same direction, and the Supreme Court’s denial of cert in Thaler v. Perlmutter leaves that framework intact.)

Yet the streaming mechanical formula has no explicit mechanism to deal with AI slop. That creates a risk on two fronts.

We have to consider the royalty pool itself. The compulsory mechanical license applies when the exclusive rights of a copyright owner in a musical work are implicated. If a so-called “AI track” is not a protected musical work, then there is a serious question whether it belongs in the section 115 system at all. Treating non-copyrightable output as if it were a statutory musical work risks diluting the pool for actual rightsholders.

And then, of course, we have the Step 2 deduction for performance royalties. The regulation allows services to subtract payments for the public performance of musical works before calculating the payable pool. But what happens if a service characterizes payments to a platform like AIMPRO as “performance royalties”? If those payments are not, in fact, for the public performance of a copyrightable musical work, they should not reduce the pool. Otherwise, the 39 steps formula starts to leak money, and eventually leaks in a big way.
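To make the leak concrete, here is a deliberately simplified sketch in Python. This is not the actual 37 C.F.R. Part 385 formula (the real “39 steps” involves multiple greater-of and lesser-of prongs); the revenue figure, the headline percentage, and the single flat deduction step are all illustrative assumptions.

```python
# Simplified illustration of how a performance-royalty deduction shrinks the
# mechanical pool. NOT the actual 37 C.F.R. Part 385 formula; the inputs and
# the single deduction step are hypothetical.

def mechanical_pool(service_revenue: float,
                    headline_rate: float,
                    performance_payments: float) -> float:
    """All-in pool minus amounts paid for public performance of musical works."""
    all_in_pool = service_revenue * headline_rate
    # The pool can be deducted down, but never below zero.
    return max(all_in_pool - performance_payments, 0.0)

# Legitimate case: the deduction reflects genuine performance royalties.
honest = mechanical_pool(1_000_000, 0.152, 60_000)

# If payments for non-copyrightable AI output are labeled "performance
# royalties," the deduction grows and the pool paid to songwriters shrinks.
padded = mechanical_pool(1_000_000, 0.152, 60_000 + 25_000)

print(honest, padded)  # every dollar of mislabeled deduction leaks out of the pool
```

The point of the sketch is only that the deduction is a straight subtraction from the songwriter pool: whatever a service can plausibly characterize as a performance payment comes directly out of what songwriters are paid.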

Not only that, but if the U.S. Copyright Office ultimately articulates a workable “human authorship” framework for AI-assisted works during the Phonorecords V rate period, the downstream impact on the Copyright Act section 115 system could be profound: for the first time, the “39 steps” calculation may have to accommodate fractional copyrightability within a single work. Instead of treating a musical work as an either/or, services and the MLC could be forced to parse which portions of a track are attributable to human authorship and therefore eligible for royalties, and which are not. That would introduce a new layer of allocation on top of an already complex formula—effectively embedding micro-level authorship determinations into macro-level royalty calculations—and raise the administrative, evidentiary, and dispute-resolution burdens across the entire system.
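What fractional copyrightability accounting might look like can be sketched as follows. To be clear, nothing like this exists in the current regulations; the pro-rata approach, the track names, and the per-track royalty figure are all assumptions for illustration only.

```python
# Hypothetical sketch of "fractional copyrightability" accounting. Nothing
# like this exists in the current Part 385 regulations; the pro-rata method
# and all figures below are assumptions for illustration only.

def payable_share(total_royalty: float, human_fraction: float) -> float:
    """Pay mechanicals only on the portion attributable to human authorship."""
    if not 0.0 <= human_fraction <= 1.0:
        raise ValueError("human_fraction must be between 0 and 1")
    return total_royalty * human_fraction

# Every track would need an authorship determination before it could be paid:
# a wholly human work, a part-AI work, and pure AI output (no protected work).
tracks = {"human_ballad": 1.0, "ai_assisted_demo": 0.25, "pure_ai_output": 0.0}
per_track_royalty = 30.0

payouts = {name: payable_share(per_track_royalty, frac)
           for name, frac in tracks.items()}
print(payouts)  # {'human_ballad': 30.0, 'ai_assisted_demo': 7.5, 'pure_ai_output': 0.0}
```

Even this toy version shows the administrative problem: someone has to determine `human_fraction` for every track, defend that determination, and resolve disputes about it, at the scale of millions of works.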

The key point is that the CRB does not need to resolve all questions of AI copyrightability to act here for purposes of the 39 Steps. It can simply clarify what is already in the statute and the regulation: The formula applies only to payments that correspond to rights in nondramatic musical works, and deductions are limited to payments that genuinely compensate the public performance of such works. That is not a policy innovation outside the scope of the CRB’s mandate from Congress. It is a classification rule.

If there is doubt about whether a category of material such as purely generative AI output qualifies as a “musical work” for these purposes, that is a question the CRB can refer to the Register of Copyrights in a pinch. But the CRB should not leave the door open for the mechanical royalty pool to be diluted by payments for things that fall outside the Copyright Act altogether. If you get a paycheck every week this may not be that important to you, but if you live off of royalties it damn sure is.

This may also be the moment to ask a more fundamental question: whether the industry should abandon the “39 steps” construct altogether. Whatever its historical justification—particularly in Phonorecords I back in 2009, where publishers were trying to shield early services like MusicNet from crushing retroactive exposure—the current formula has outlived its usefulness. Today, it functions less as a fair pricing mechanism and more as a constraint, allowing services to use their complementary oligopoly market power to effectively cap mechanical royalties by anchoring them to a royalty pool determined in part based on what labels get paid. The result is a structurally odd feedback loop in which sound recording deals influence the value of adjacent musical works. A cleaner alternative would be a flat, escalating penny-rate framework, like what the Judges adopted for both physical and downloads as well as webcasting royalties—simpler, more transparent, and far less susceptible to strategic manipulation.

We have been here before. The history of section 115 is, in many ways, the history of closing gaps between statutory language and market behavior.

Phonorecords V presents another such moment.

The CRB should take it.

@hypebot: Jay Gilbert, Ryan Vaughn, & Benji Stein Share Expert Tips for Artist Growth in 2026

Check out a great discussion from our friends at Hypebot: The latest panel from MusicPro ’26 offers a useful snapshot of where “artist growth” advice stands heading into 2026—and where it may still be missing the mark.

In this Hypebot discussion, Jay Gilbert, Ryan Vaughn, and Benji Stein walk through the evolving toolkit for independent artists: data, audience development, and the growing skepticism around social media metrics. The throughline is clear—streams and followers don’t build careers; real fans do. The panel repeatedly returns to the importance of identifying and nurturing “actionable” fans over vanity metrics. 

But the more interesting takeaway may be what sits beneath that advice. As platforms flood artists with data, the real advantage increasingly lies in owning the relationship through email lists, direct engagement, and signals that actually convert into tickets, merch, and sustained attention. (And in our experience, owning the relationship is the one thing Spotify doesn’t want you to do.)

The result is a subtle but important shift: away from platform-defined success, and toward artist-controlled audience infrastructure.

The question, of course, is whether the current system actually rewards that shift—or quietly undermines it.

Read the post on Hypebot

Chris Cooke: SoundExchange boss says all EU countries must change copyright rules so European radio royalties flow to American performers #IRespectMusic

Ireland Leads the Way: A Step Toward Fair Radio Royalties for American Artists in Europe

For years, American artists have been told that the global royalty system is just “complicated”—a patchwork of treaties, local rules, and reciprocal deals that somehow always seem to leave U.S. performers on the short end of the stick. But as this new report highlighted by CMU makes clear, what’s really at issue isn’t complexity. It’s discrimination dressed up as policy.

At the center of the debate is a simple principle: national treatment—the idea that countries should pay foreign creators the same royalties they pay their own. That principle is already embedded in international law and reinforced by recent European court decisions. And yet, across much of Europe, American performers still don’t get paid when their recordings are played on terrestrial radio, even while European artists are paid at home and abroad.

Now, SoundExchange is turning up the pressure, arguing that every EU member state must finally align its laws with that principle and unlock hundreds of millions in unpaid royalties.

This is exactly what our friend Blake Morgan and the #IRespectMusic campaign have been fighting for over the past decade—fair pay for performers wherever their music is used. And it’s another reminder that we join with the MusicFirst Coalition in demanding that the U.S. lead by example: passing the American Music Fairness Act would strengthen the hand of America’s creators globally and help ensure U.S. artists are paid both at home and abroad.

This isn’t just a technical copyright dispute. It’s a global trade and fairness issue—one that goes directly to how countries value music as an export, and whether creators are treated as partners in that economy or just inputs to be exploited.

Read Chris Cooke’s excellent explainer in Complete Music Update

The boss of US collecting society SoundExchange has welcomed a change to Irish copyright law which means radio royalties collected in Ireland can now flow to American performers when their music gets airplay in the country. Even though no radio royalties flow in the other direction to European performers, because radio stations in the US don’t have to pay any money to any artists or labels. 

That change to Irish law was the result of a ruling in the European Union courts which, SoundExchange CEO Michael Huppe insists, also obligates other EU countries to implement similar changes, so that more radio royalties flow to the US. “Implementation isn’t optional – it’s a legal obligation”, Huppe says, adding, “creators everywhere deserve to be paid when their music is used, no matter their nationality”. 

California Takes a Step Toward Ending Speculative Ticketing

One of the most frustrating tricks in the ticket resale business is something called speculative ticketing. That’s when someone lists a ticket for sale before they actually have the ticket. We’ve discussed the problem many times, but Kid Rock brought it to a head recently during a hearing on Capitol Hill.

If you haven’t run across spec ticketing before, here it is: The seller is essentially betting they will be able to obtain the ticket later. If they succeed, they deliver the ticket to the buyer. If they don’t, the buyer often ends up with a refund—or a replacement ticket of uncertain quality—instead of the seat they thought they purchased.

For fans and artists, the bigger problem is what speculative listings do to the market before the onsale even begins.

When fans check resale marketplaces and see hundreds of tickets already listed—often at inflated prices—it creates the impression that tickets are already scarce or sold out. That perception alone can push fans to panic-buy at higher prices, even when the actual ticket inventory hasn’t been released yet.

In other words, speculative listings can make the market look hotter and tighter than it really is.

Ironically, most of the major resale platforms already say this practice is prohibited on their service. Their terms of service typically ban selling tickets that the seller does not actually possess.

Yet those same marketplaces often display large numbers of listings that appear to be exactly that: tickets offered for sale before the seller could reasonably have them in hand.

California is now attempting to address this problem directly. A new proposal would make it clear that selling tickets you do not possess—or do not have the legal right to sell—is a deceptive practice under consumer protection law. It would also allow state and local authorities to enforce those rules, rather than leaving fans to fight the battle on their own.

That proposal is California Assembly Bill 1349 (AB 1349).

AB 1349 aims to close the gap between what resale platforms claim to prohibit and what actually happens in the marketplace. The basic principle is simple: if a ticket is listed for sale, it should be a real ticket controlled by the seller, not a speculative promise that may or may not be fulfilled later.

The bill will not fix every problem in the ticketing ecosystem. But it represents an important step toward restoring a basic level of honesty to the resale market. After all, if the platforms themselves say you shouldn’t sell a ticket you don’t have, putting that rule into law should not be controversial.

For artists and fans alike, the idea behind AB 1349 comes down to something pretty straightforward:

You shouldn’t be able to sell a ticket you don’t actually own.

Synthetic Emotion from The Music Department: Suno’s Unsettling Ad Campaign and the Return of Orwell’s Machine-Made Culture from 1984

In George Orwell’s 1984, the “versificator” was a machine designed to produce poetry, songs, and sentimental verse synthetically, without human thought or feeling. Its purpose was not artistic expression but industrial-scale cultural production—filling the air with endless, disposable content to occupy attention and shape perception. Nearly a century later, the comparison to modern generative music systems such as Suno is difficult to ignore. While the technologies differ dramatically, the underlying question is strikingly similar: what happens when music is produced by machines at scale rather than by human experience?

Orwell’s versificator was built for scale, not meaning (reminding you of anyone?). It generated formulaic songs for the masses, optimized for emotional familiarity rather than originality. Suno, by contrast, uses sophisticated machine learning trained on vast corpora of human-created music to generate complete recordings on demand that would be the envy of Big Brother’s Music Department. Suno can reportedly generate millions of tracks per day, a level of output impossible in any human-centered musical economy. When music becomes infinitely reproducible, the limiting factor shifts from creation to distribution and attention—precisely the dynamic Orwell imagined.

Nothing captures the versificator analogy more vividly than Suno’s own dystopian-style “first kiss” advertising campaign. In one widely circulated spot, the product is promoted through a stylized, synthetic emotional narrative that emphasizes instant, machine-generated musical cliché creation untethered from human musicians, vocalists, or composers. The message is not about artistic struggle, collaboration, or lived expression—it is about mediocre frictionless production. The ad unintentionally echoes Orwell’s warning: when culture can be manufactured instantly, expression becomes simulation. And on top of it, those ads are just downright creepy.

The versificator also blurred authorship. In 1984, no individual poet existed behind the machine’s output; creativity was subsumed into a system. Suno raises a comparable question. If a system trained on thousands or millions of human performances produces a new track, where does authorship reside? With the user who typed a prompt? With the engineers who built the model? With the countless musicians whose expressive choices shaped the training data? Or nowhere at all? This diffusion of authorship challenges long-standing cultural and legal assumptions about what it means to “create” music.

Another parallel lies in standardization. The versificator produced content that was emotionally predictable—pleasant, familiar, subservient and safe. Generative music systems often display a similar gravitational pull toward stylistic averages embedded in their training data that has been averaged into pablum. The result can be competent, even polished output that nevertheless lacks the unpredictability, risk, and individual voice associated with human artistry. Orwell’s concern was not that machine-generated culture would be bad, but that it would be flattened—replacing lived expression with algorithmic imitation. Substitutional, not substantial.

There is also a structural similarity in scale and economics. The versificator’s value to The Party lay in its ability to replace human labor in cultural production and to force the creation of projects that humans would find too creepy. Suno and similar systems raise analogous questions for modern musicians, particularly session players and composers whose work historically formed the backbone of recorded music. When a single system can generate instrumental tracks, arrangements, and stylistic variations instantly, the economic pressure on human contributors becomes obvious. Orwell imagined machines replacing poets; today the substitution pressure may fall first on instrumental performance, arrangement, sound design, and production roles.

Yet the comparison has limits, and those limits matter. The versificator was a tool of centralized control in a dystopian state, designed to narrow human thought. Suno operates in a pluralistic technological environment where many artists themselves experiment with AI as a creative instrument. Unlike Orwell’s machine, generative music systems can be used collaboratively, interactively, and sometimes in ways that expand rather than suppress creative exploration. The technology is not inherently dystopian; its impact depends on how institutions, markets, and creators choose to shape it.

A deeper difference lies in intention. Orwell’s versificator was never meant to create art; it was meant to simulate it. Modern generative music systems are often framed as tools that can assist, augment, or inspire human creativity. Some artists use AI to prototype ideas, explore unfamiliar styles, or generate textures that would be difficult to produce otherwise. In these contexts, the machine functions less like a replacement and more like a new instrument—one whose cultural role is still evolving.

Still, Orwell’s versificator is highly relevant to understanding Suno’s corporate direction. When cultural production becomes industrialized, quantity can overwhelm meaning. The risk is not merely that machine-generated music exists, but that its scale reshapes attention, value, and recognition. If millions of synthetic tracks flood listening environments as is happening with some large DSPs, the signal of individual human expression may become harder to perceive—even if human creativity continues to exist beneath the surface.

The comparison between Suno and the versificator symbolizes the moment when technology challenges the boundaries of authorship, creativity, and cultural labor. Orwell warned of a world where machines produced endless culture without human voice. Today’s question is subtler: can society integrate generative systems in ways that preserve the distinctiveness of human expression rather than dissolving it into algorithmic slop?

The answer will not come from technology alone. It will depend on choices—legal, cultural, and economic—about how machine-generated music is labeled, valued, and integrated into the broader creative ecosystem. Orwell imagined a future where the machine replaced the poet. The task now is to ensure that, even in an age of generative AI, the human voice remains audible.

Stealing Isn’t Innovation!

Don’t let the so-called “AI czar” sell you the idea that changing the law to legalize taking artists’ work without consent is innovation. It isn’t.

Innovation creates new value. The AI boondoggle takes existing value from creators and communities and hands it to a small number of tech companies—without permission, without payment, and without accountability but with a nuclear reactor next to your house.

Artists aren’t raw material. They’re rights-holders under U.S. law. Rewriting those rights to subsidize AI business models isn’t progress—it’s a policy choice to reward theft at scale.

AI can thrive without gutting creative rights. But that requires consent, licensing, and fair compensation—not retroactive immunity dressed up as innovation.

Stealing isn’t innovation. It’s just stealing, with a press strategy.

Find out more at Stealing Isn’t Innovation and @human_artistry

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI. AI is a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait…oh, no, sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, that means Silicon Valley shills will be all over it to try to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill will do a lot of good things, including protecting copyright. But the first substantive section of Senator Blackburn’s summary is a game changer. She would establish an obligation on AI platforms to be responsible for known or predictable harm that can befall users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect would happen if you did the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test side effects. Tobacco companies….well, you know the rest. The law doesn’t demand perfection—but it does demand reasonable care and imposes a “duty of care” on companies that put dangerous products into the public.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products with design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered. At tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, etc. On a certain level of reality, this is very likely not guesswork or predictability. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment—it means you can anticipate the type of harm that flows to users you target from how the system is built.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research that the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world—not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct”—that drivers chose the Pinto to make a statement, and therefore safety regulation would violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions—how content is ranked, how users are nudged, how systems optimize for engagement—that’s not speech. That’s product design. You can measure it, test it, and audit it (which they do), and make it safer (which they don’t).

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.

Trump’s Historic Kowtow to Special Interests: Why Trump’s AI Executive Order Is a Threat to Musicians, States, and Democracy

There’s a new dance in Washington—it’s called the KowTow

Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, or your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests led by an unelected bureaucrat, Silicon Valley Viceroy and billionaire investor David Sacks who the New York Times recently called out as a walking conflict of interest.

You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West

Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood 

None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws and threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption,” which is when federal law overrides conflicting state laws. When Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. There is no federal legislation (EOs don’t count), so there can be no “preemption.”

Who Really Wrote This? The Sacks–Thierer Pipeline

This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI.  It sounds that way because it was—Trump himself gave all the credit to David Sacks in his signing ceremony.

– Adam Thierer works at Google’s R Street Institute and pushes “permissionless innovation,” meaning companies should be allowed to harm the public before regulation is allowed. 
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, documented by The New York Times, and stands to profit from deregulation.

Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.

The Trump AI Czar—VICEROY Without Senate Confirmation

Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.

He is not Senate‑confirmed. 
He is not subject to conflict‑of‑interest vetting. 
He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation. 

Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.

Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong

Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.

Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.

And Then There’s Ted Cruz

On top of everything else, the one state official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.

Why It Matters for Musicians

AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:

– AI systems train on our recordings without consent or compensation. 
– Major tech companies use federal power to avoid accountability. 
– The EO protects Silicon Valley elites, not artists, fans or consumers. 

This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.

It’s Not a National Standard — It’s a Power Grab

What’s happening isn’t leadership. It’s *regulatory capture dressed as patriotism*. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.

What David Sacks and his band of thieves are teaching the world is the lesson they learned from Dot Bomb 1.0: the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.


Gene Simmons and the American Music Fairness Act

Gene Simmons is receiving Kennedy Center Honors with KISS this Sunday, and is also bringing his voice to the fair pay for radio play campaign to pass the American Music Fairness Act (AMFA).

Gene will testify on AMFA next week before the Senate Judiciary Committee. He won’t just be speaking as a member of KISS or as one of the most recognizable performers in American music. He’ll be showing up as a witness to something far more universal: the decades-long exploitation of recording artists whose work powers an entire broadcast industry that has never paid them a dime. Watch the hearing on December 9th at 3pm ET at this link, when Gene testifies alongside SoundExchange CEO Mike Huppe.

As Gene argued in his Washington Post op-ed, the AM/FM radio loophole is not a quirky relic, it is legalized taking. Everyone else pays for music: streaming services, satellite radio, social-media platforms, retail, fitness, gaming. Everyone except big broadcast radio, which generated more than $13 billion in advertising revenue last year while paying zero to the performers whose recordings attract those audiences.

Gene is testifying not just for legacy acts, but for the “thousands of present and future American recording artists” who, like KISS in the early days, were told to work hard, build a fan base, and just be grateful for airplay. As he might put it, artists were expected to “rock and roll all night” — but never expect to be paid for it on the radio.

And when artists asked for change, they were told to wait. They “keep on shoutin’,” decade after decade, but Congress never listened.

That’s why this hearing matters. It’s the first Senate-level engagement with the issue since 2009. The ground is shifting. Gene Simmons’ presence signals something bigger: artists are done pretending that “exposure” is a form of compensation.

AMFA would finally require AM/FM broadcasters to pay for the sound recordings they exploit, the same way every other democratic nation already does. It would give session musicians, backup vocalists, and countless independent artists a revenue stream they should have had all along. It would even unlock international royalties currently withheld from American performers because the U.S. refuses reciprocity.

And let’s be honest: Gene Simmons is an ideal messenger. He built KISS from nothing, understands the grind, and knows exactly how many hands touch a recording before it reaches the airwaves. His testimony exposes the truth: radio isn’t “free promotion” — it’s a commercial business built on someone else’s work.

Simmons once paraphrased the music economy as a game where artists are expected to give endlessly while massive corporations act like the only “god of thunder,” taking everything and returning nothing. AMFA is an overdue correction to that imbalance.

When Gene sits down before the Senate Judiciary Committee, he won’t be wearing the makeup. He won’t need to. He’ll be carrying something far more powerful: the voices of artists who’ve waited 80 years for Congress to finally turn the volume up on fairness.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. So Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue, which inflates the value of its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.
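To see why “closed liquidity loop” is the right description, the deals named above can be sketched as a tiny directed graph of who pays whom; a simple cycle check confirms that the money paths loop back on themselves. This is purely an illustrative sketch: the `flows` mapping and the `find_cycle` helper are our own hypothetical simplifications, not financial data or anyone’s actual model.

```python
# Each edge is "payer -> payee" for one of the deals described above.
# Company names come from the text; treating each deal as a single
# directed edge is a deliberate, illustrative simplification.
flows = {
    "Microsoft": ["OpenAI"],                      # equity investment
    "OpenAI": ["Microsoft", "Nvidia", "Oracle"],  # cloud spend, chip buys, $300B cloud deal
    "Nvidia": ["OpenAI"],                         # equity investment
    "Oracle": ["Nvidia"],                         # chip purchases
}

def find_cycle(graph, start):
    """Depth-first search for one payment path that returns to `start`."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == start:
                return path + [start]   # closed loop back to the start
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None  # no loop from this starting point

cycle = find_cycle(flows, "Microsoft")
print(" -> ".join(cycle))  # prints "Microsoft -> OpenAI -> Microsoft"
```

Starting the search from any company in the mapping finds a loop, which is the point: in this structure there is no node whose “revenue” originates outside the circle.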

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.