Don’t Sell What You Don’t Have: Why AB 1349’s Crackdown on Speculative Event Tickets Matters to Touring Artists and Fans

Update: AB 1349 passed the California Assembly and is on to the Senate.

I rely on ticket revenue to pay my band and crew, and I depend on trust—between me and my fans—for my career to work at all. That’s why I support California’s AB 1349. At its core, this bill confronts one of the most corrosive practices in touring: speculative ticketing.

Speculative ticketing isn’t normal resale. It’s when sellers list tickets they don’t actually own and may never acquire. These listings often appear at inflated prices on reseller markets before tickets even go on sale, with no guarantee the seller can deliver the seat. In other words, it’s selling a promise, not a ticket. Fans may think they bought a ticket, but what they’ve really bought is a gamble that the reseller can later obtain the seat—usually at a lower price—and flip it to them while the reseller marketplace looks the other way.

Here’s how it works in practice. A reseller posts a listing, sometimes even a specific section, row, and seat, before they possess anything. The marketplace presents that listing like real inventory: seat maps, countdown timers, “only a few left” banners. That creates artificial scarcity before a single legitimate ticket has even been sold. Once tickets go on sale, the reseller tries to “cover” the sale—buying tickets during the onsale (often using bots or multiple accounts), buying from other resellers who did secure inventory, or substituting some “comparable” seat if the promised one doesn’t exist. If they can source tickets for less than what they sold to the fan, they pocket the arbitrage.
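To make the economics concrete, here is a minimal sketch of the speculative seller's payoff, assuming purely hypothetical prices (the list price, cover costs, and refund behavior below are illustrative assumptions, not data from any marketplace):

```python
# Hypothetical payoff for a speculative listing. All prices are invented
# for illustration; real listings, marketplace fees, and refund policies vary.

LIST_PRICE = 400.00  # what the fan pays for the phantom ticket

def speculative_payoff(cover_cost: float | None) -> float:
    """Seller's profit if they later source a ticket at cover_cost.

    cover_cost=None models the failure case: the seller cancels and
    refunds, ending up roughly flat while the fan absorbs the risk.
    """
    if cover_cost is None:
        return 0.0  # refund issued; the fan, not the seller, eats the downside
    return LIST_PRICE - cover_cost

print(speculative_payoff(150.00))  # covered cheaply at the onsale: +250.0
print(speculative_payoff(425.00))  # prices jumped: -25.0, so cancel instead
print(speculative_payoff(None))    # couldn't cover at all: 0.0
```

In this toy model the seller's downside is capped near zero by cancelling; the fan's downside is a lost seat at the worst possible time.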

When that gamble fails, the risk gets dumped on the fan. Prices jump. Inventory really sells out. The reseller can’t deliver. What follows is a last-minute cancellation, a refund that arrives too late to help, a downgrade to worse seats, or a customer-service maze between the seller and the platform. Fans blame artists even if the artists had nothing to do with the arbitrage. I’ve seen fans get priced out because listings appeared online that had nothing to do with the actual onsale. The reseller and the marketplace profit while the fan, the artist, and the venue suffer.

AB 1349 draws a bright-line rule that should have existed years ago: if you don’t actually have the ticket—or a contractual right to sell it—you can’t list it. That single principle collapses the speculative model. You can’t post phantom seats or inflate prices using imaginary inventory. It doesn’t ban resale. It doesn’t cap prices. It does stop a major source of fraud.

The bill also tackles the deception that makes speculative ticketing profitable. Fake “sold out” claims, copycat websites that look like official artist or venue pages, and listings that bury or hide face value all push fans into rushed, fear-based decisions. AB 1349 requires transparency about whether a ticket is a resale, what the original face price was, and what seat is actually being offered. That information lets fans make rational choices—and it reduces the backlash that inevitably lands on performers and venues when fans feel tricked.

Bots and circumvention tools are another part of the speculative pipeline. Artists and venues spend time and money designing fair onsales, presales for fan clubs, and purchase limits meant to spread tickets across real people. Automated systems that evade those limits defeat the entire purpose, feeding inventory into speculative listings within seconds. AB 1349 doesn’t outlaw resale; it targets the deliberate technological abuse that turns live music into a high-speed extraction game.

I also support the bill’s enforcement structure. This isn’t about turning fans into litigants or flooding courts. It’s about giving public enforcers real tools to police a market that has repeatedly shown it won’t self-regulate.

AB 1349 won’t fix everything overnight. But by stopping people from selling what they don’t have, it moves ticketing back toward a system built on possession, truth, and accountability. If every state prohibited speculative ticketing, it would largely disappear because resale would finally be backed by real inventory. For fans who just want to see the music they love—that’s not radical. It’s essential.

[This post first appeared on Hypebot]

Victory in the Vetter v. Resnik Lawsuit: Artist Rights, Songwriter Advocacy, and the Power of Termination

At the center of Vetter v. Resnik are songwriters reclaiming what Congress promised them in a detailed and lengthy legislative negotiation over the 1976 revision to the Copyright Act—a meaningful second chance to terminate what the courts call “unremunerative transfers,” aka crappy deals. That principle comes into sharp focus through Cyril Vetter, whose perseverance brought this case to the Fifth Circuit, and Cyril’s attorney Tim Kappel, whose decades-long advocacy for songwriter rights helped frame the issues not as abstractions, but as lived realities.

Cyril won his case against his publisher at trial in a landmark ruling by Chief Judge Shelly Dick. His publisher appealed Judge Dick’s ruling to the Fifth Circuit. As readers will remember, oral arguments in the case were heard earlier this year. A bunch of songwriter and author groups, including the Artist Rights Institute, filed “friend of the court” briefs in the case in favor of Cyril.

In a unanimous opinion, the United States Court of Appeals for the Fifth Circuit affirmed Judge Shelly Dick’s carefully reasoned trial-court ruling, holding that when an author terminates a worldwide grant, the recapture is also worldwide. It is not artificially limited to U.S. territory, which had been the industry practice. The court understood that anything less would hollow out Congress’s intent.

It is often said that the whole point of the termination law is to give authors (including songwriters) a “second bite at the apple,” which is why the Artist Rights Institute wrote (and was quoted by the Fifth Circuit) that limiting the reversion to U.S. rights only is a “second bite at half the apple,” the opposite of Congressional intent.

25-30108-2026-01-12 [Download]



What made this decision especially meaningful for the creative community is that the Fifth Circuit did not reach it in a vacuum. Writing for the panel, Judge Carl Stewart expressly quoted the Artist Rights Institute amicus brief, observing:

“Denying terminating authors the full return of a worldwide grant leaves them with only half of the apple—the opposite of congressional intent.”

That sentence—simple, vivid, and unmistakably human—captured what this case has always been about.

ARI.Amicus.Vetter.Final R [Download]



The Artist Rights Institute’s amicus brief did not appear overnight. It grew out of a longstanding relationship between songwriter advocate Tim Kappel and Chris Castle, a collaboration shaped over many years by shared concern for how statutory rights actually function—or fail to function—for creators in the real world.

When the Vetter appeal crystallized the stakes, that history mattered. It allowed ARI to move quickly, confidently, and with credibility—translating dense statutory language into a narrative to help courts understand that termination rights are supposed to restore leverage, not preserve a publisher’s foreign control veto through technicalities.

Crucially, the brief was inspired and strengthened by the voices of songwriter advocates and heirs, including Abby North (heir of composer Alex North), Blake Morgan (godson of songwriter Lesley Gore), Angela Rose White (heir of legendary music director David Rose), and of course David Lowery and Nikki Rowling. The involvement of these heirs ensured the court understood the context: termination is not merely about renegotiating deals for living authors. It is often about families, estates, and heirs—people for whom Congress explicitly preserved termination rights as a matter of intergenerational fairness.

The Fifth Circuit’s opinion reflects that understanding. By rejecting a cramped territorial reading of termination, the court avoided a result that would have undermined heirs’ rights just as surely as authors’ rights.

Vetter v. Resnik represents a rare and welcome alignment: an author willing to press his statutory rights all the way, advocates who understood the lived experience behind those rights, a district judge who took Congress at its word, and an appellate court willing to say plainly that “half of the apple” is not enough.

For the Artist Rights Institute, it was an honor to participate—to stand alongside Cyril Vetter, Tim Kappel, and the community of songwriter advocates and heirs whose experiences shaped a brief that helped the court see the full picture.

And for artists, songwriters, and their families, the decision stands as a reminder that termination rights mean what Congress said they mean—a real chance to reclaim ownership, not an illusion bounded by geography.

@ArtistRights Institute Newsletter 01/05/26: Grok Can’t Control Itself, CRB V Starts, Data Center Rebellion, Sarah Wynn-Williams Senate Testimony, Copyright Review


Phonorecords V Commencement Notice: Government setting song mechanical royalty rates

The Copyright Royalty Judges announce the commencement of a proceeding to determine reasonable rates and terms for making and distributing phonorecords for the period beginning January 1, 2028, and ending December 31, 2032. Parties wishing to participate in the rate determination proceeding must file their Petition to Participate and the accompanying $150 filing fee no later than 11:59 p.m. eastern time on January 30, 2026. Deets here.

US Mechanical Rate Increase

Songwriters Will Get Paid More for Streaming Royalties Starting Today (Erinn Callahan/AmericanSongwriter)

CRB Sets 2026 Mechanical Rate at 13.1¢ (Chris Castle/MusicTechPolicy)

Spotify’s Hack by Anna’s Archive

No news. Biggest music hack in history still stolen.

MLC Redesignation

The MMA’s Unconstitutional Unclaimed Property Preemption: How Congress Handed Protections to Privatize Escheatment (Chris Castle/MusicTechPolicy)

Under the Radar: Data Center Grass Roots Rebellion

Data Center Rebellion (Chris Castle/MusicTechSolutions)

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (Alejandra Martinez and Paul Cobler/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Record)

AI Policy

Meet the New AI Boss, Worse Than the Old Internet Boss (Chris Castle/MusicTechPolicy)

Deloitte’s AI Nightmare: Top Global Firm Caught Using AI-Fabricated Sources to Support its Policy Recommendations (Hugh Stephens/Hugh Stephens Blog)

Grok Can’t Stop AI Exploitation of Women

Facebook/Meta Whistleblower Testifies at US Senate

Copyright Case 2025 Review

Year in Review: The U.S. Copyright Office (George Thuronyi/Library of Congress)

Copyright Cases: 2025 Year in Review (Rachel Kim/Copyright Alliance)

AI copyright battles enter pivotal year as US courts weigh fair use (Blake Brittain/Reuters)

What Don Draper Knew That AI Forgot: Authorship, Ownership, and Advertising

David is pointing to a quiet but serious problem hiding behind the rush to use generative AI in advertising, film, and television: copyright law protects authorship, not outputs. AI muddies or even erases authorship altogether in some cases.

Under current U.S. Copyright Office guidance, works generated primarily by AI are often not registrable because they lack a human author exercising creative control. That means a brand that relies on AI to generate a commercial may not actually own exclusive rights in the finished work. If someone copies, remixes, or repurposes that ad, even in a way that damages the brand, the company may have little or no legal recourse under copyright law.

The Copyright Office guidance says:

In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans. The Office’s registration policies and regulations reflect statutory and judicial guidance on this issue…. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.

The risk David identifies is not theoretical. Copyright is the backbone of brand control in media. It’s what allows companies to stop misuse, dilution, parody-turned-weapon, or hostile appropriation. In the US, a copyright registration is required to enforce those rights. Remove that protection, and brands are left relying on weaker tools like trademark or unfair competition law, which are narrower, slower, and often ill-suited to digital remix culture.

David’s warning extends beyond ads. Film and TV studios experimenting with AI-generated scripts, scenes, music, or visuals may be undermining their own ability to control, license, or defend those works. In trying to save money upfront, they may be giving up the legal leverage that protects their brand, reputation, and long-term value.

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI. AI is a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait… oh no, sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, that means Silicon Valley shills will be all over it to try to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill will do a lot of good things, including protecting copyright. But the first substantive section of Senator Blackburn’s summary is a game changer. She would establish an obligation on AI platforms to be responsible for known or predictable harm that can befall users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect to happen if you did the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test side effects. Tobacco companies… well, you know the rest. The law doesn’t demand perfection—but it does demand reasonable care, and it imposes a “duty of care” on companies that put dangerous products before the public.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products with design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered. At tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, etc. On a certain level of reality, this is very likely not guesswork or mere prediction. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment—it means you can anticipate the type of harm that flows from how the system is built to the users you target.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research showing that the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world—not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct”—that drivers chose the Pinto to make a statement, and therefore safety regulation would violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions—how content is ranked, how users are nudged, how systems optimize for engagement—that’s not speech. That’s product design. You can measure it, test it, and audit it (which they do), and make it safer (which they don’t).

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.

Gene Simmons and the American Music Fairness Act

Gene Simmons is receiving Kennedy Center Honors with KISS this Sunday, and is also bringing his voice to the fair pay for radio play campaign to pass the American Music Fairness Act (AMFA).

Gene will testify on AMFA next week before the Senate Judiciary Committee. He won’t just be speaking as a member of KISS or as one of the most recognizable performers in American music. He’ll be showing up as a witness to something far more universal: the decades-long exploitation of recording artists whose work powers an entire broadcast industry that has never paid them a dime. Watch Gene’s hearing on December 9th at 3pm ET at this link, when Gene testifies alongside SoundExchange CEO Mike Huppe.

As Gene argued in his Washington Post op-ed, the AM/FM radio loophole is not a quirky relic; it is legalized taking. Everyone else pays for music: streaming services, satellite radio, social-media platforms, retail, fitness, gaming. Everyone except big broadcast radio, which generated more than $13 billion in advertising revenue last year while paying zero to the performers whose recordings attract those audiences.

Gene is testifying not just for legacy acts, but for the “thousands of present and future American recording artists” who, like KISS in the early days, were told to work hard, build a fan base, and just be grateful for airplay. As he might put it, artists were expected to “rock and roll all night” — but never expect to be paid for it on the radio.

And when artists asked for change, they were told to wait. They “keep on shoutin’,” decade after decade, but Congress never listened.

That’s why this hearing matters. It’s the first Senate-level engagement with the issue since 2009. The ground is shifting. Gene Simmons’ presence signals something bigger: artists are done pretending that “exposure” is a form of compensation.

AMFA would finally require AM/FM broadcasters to pay for the sound recordings they exploit, the same way every other democratic nation already does. It would give session musicians, backup vocalists, and countless independent artists a revenue stream they should have had all along. It would even unlock international royalties currently withheld from American performers because the U.S. refuses reciprocity.

And let’s be honest: Gene Simmons is an ideal messenger. He built KISS from nothing, understands the grind, and knows exactly how many hands touch a recording before it reaches the airwaves. His testimony exposes the truth: radio isn’t “free promotion” — it’s a commercial business built on someone else’s work.

Simmons once paraphrased the music economy as a game where artists are expected to give endlessly while massive corporations act like the only “god of thunder,” taking everything and returning nothing. AMFA is an overdue correction to that imbalance.

When Gene sits down before the Senate Judiciary Committee, he won’t be wearing the makeup. He won’t need to. He’ll be carrying something far more powerful: the voices of artists who’ve waited 80 years for Congress to finally turn the volume up on fairness.

It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Whether it is ten years or five, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—remember the AI moratorium that was defeated 99-1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 Presidential hopeful Ted Cruz (no doubt fundraising off the Moratorium for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and the Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times the first time around, the concept is probably unconstitutional for a host of reasons and will no doubt be immediately challenged.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

| Lobby Shop / Organization | Supporters / Funders | Role in Pushing Moratorium | Notes |
|---|---|---|---|
| INCOMPAS / AI Competition Center (AICC) | Amazon, Google, Meta, Microsoft, telecom/cloud companies | Leads push for 10-year state-law preemption; argues moratorium prevents “patchwork” laws | Identified as central industry driver |
| Consumer Technology Association (CTA) | Big Tech, electronics & platform economy firms | Lobbying for federal preemption; opposed aggressive state AI laws | High influence with Commerce/Appropriations staff |
| American Edge Project | Meta-backed advocacy org | Frames preemption as necessary for U.S. competitiveness vs. China; backed moratorium | Used as indirect political vehicle for Meta |
| Abundance Institute | Tech investors, deregulatory donors | Argues moratorium necessary for innovation; publicly predicts return of moratorium | Messaging aligns with Silicon Valley VCs |
| R Street Institute | Market-oriented donors; tech-aligned funders | Originated “learning period” moratorium concept in 2024 papers by Adam Thierer | Not a lobby shop but provides intellectual framework |
| Corporate Lobbyists (Amazon/Google/Microsoft/Meta/OpenAI/etc.) | Internal lobbying shops + outside firms | Promote “uniform national standards” in Congressional meetings | Operate through and alongside trade groups |

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
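To see the cost-spreading mechanics, here is a minimal sketch of the cost-of-service arithmetic, assuming entirely hypothetical numbers (the upgrade cost, the data center's allocated share, and the meter count are illustrative assumptions, not figures from any actual rate case):

```python
# Hypothetical cost-of-service allocation for a grid upgrade driven by a
# new AI data-center load. All numbers are invented for illustration.

UPGRADE_COST = 500_000_000      # transmission/substation upgrade cost in dollars
DATA_CENTER_SHARE = 0.60        # share of the new load allocated to the data center
RESIDENTIAL_METERS = 1_200_000  # ratepayers who split whatever is left

# Cost-of-service rules cap the data center's bill at its allocated share;
# the utility cannot charge it more than that.
data_center_pays = UPGRADE_COST * DATA_CENTER_SHARE

# The remainder goes into the general rate base, spread across all meters.
spread_to_ratepayers = UPGRADE_COST - data_center_pays
per_meter = spread_to_ratepayers / RESIDENTIAL_METERS

print(f"Data center pays:      ${data_center_pays:,.0f}")
print(f"Spread to ratepayers:  ${spread_to_ratepayers:,.0f}")
print(f"Added cost per meter:  ${per_meter:,.2f}")
```

Even in this toy version, where the data center pays its full allocated share, residents still pick up $200 million of infrastructure they did not ask for; cap the allocation lower and the residential number only grows.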

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

What We Know—and Don’t Know—About Spotify and NMPA’s “Opt-In” Audiovisual Deal

When Spotify and the National Music Publishers’ Association (NMPA) announced an “opt-in” audiovisual licensing portal this month, the headlines made it sound like a breakthrough for independent songwriters. In reality, what we have is a bare-bones description of a direct-license program whose key financial and legal terms remain hidden from view.

Here’s what we do know. The portal (likely an HFA extravaganza) opened on November 11, 2025 and will accept opt-ins through December 19. Participation is limited to NMPA member publishers, and the license covers U.S. audiovisual uses—that is, music videos and other visual elements Spotify is beginning to integrate into its platform. It smacks of the side deal on pending and unmatched tied to frozen mechanicals that the CRB rejected in Phonorecords IV.

Indeed, one explanation for the gun-decked opt-in period appears in The Desk:

Spotify is preparing to launch music videos in the United States, expanding a feature that has been in beta in nearly 100 international markets since January, the company quietly confirmed this week.

The new feature, rolling out to Spotify subscribers in the next few weeks, will allow streaming audio fans to watch official music videos directly within the Spotify app, setting the streaming platform in more direct competition with YouTube.

The company calls it a way for indies to share in “higher royalties,” but no rates, formulas, or minimum guarantees have been disclosed, so it’s hard to know: “higher” compared to what? Yes, it’s true that if you even made another 1¢ that would be “higher”—and in streaming-speak, 1¢ is big progress, but remember that it’s still a positive number to the right of the decimal place preceded by a zero.

The deal sits alongside Spotify’s major-publisher audiovisual agreements, which are widely believed to include large advances and broader protections—none of which apply here. There’s also an open question of whether the majors granted public performance rights as an end run around PROs, which I fully expect. There’s no MFN clause, no public schedule, and no audit details. I would be surprised if Spotify agreed to be audited by an independent publisher and even more surprised if the announced publishers with direct deals did not have an audit right. So there’s one way we can be pretty confident this is not anything like MFN terms aside from the scrupulous avoidance of mentioning the dirty word: MONEY.

But it would be a good guess that Spotify is interested in this arrangement because it signs up some of the most likely plaintiffs, protecting Spotify when it launches its product with unlicensed songs or user-generated videos and no Content ID clone (which is kind of Schrödinger’s UGC—not expressly included in the deal but not expressly excluded either, and would be competitive with TikTok or Spotify nemesis YouTube).

But here’s what else we don’t know: how much these rights are worth, how royalties will be calculated, whether they include public performances to block PRO licensing of Spotify A/V (which could trigger MFN problems with YouTube or other UGC services), and whether the December 19 date marks the end of onboarding—or the eve of a US product launch. And perhaps most importantly, how is it that NMPA is involved, the same NMPA that has trashed Spotify far and wide over finally taking advantage of the bundling rates negotiated in the CRB (indeed, in some version since 2009). Shocked, shocked that there’s bundling going on.

It’s one thing to talk about audiovisual covering “official” music videos while expressly stating that the same license will not be used to cover UGC, no way, no how. But given Spotify’s repeated hints that full-length music videos are coming to the U.S., the test marketing reported by The Desk and disclosed by Spotify itself, the absolute silence of the public statements about royalty rates and UGC, and the rush to get publishers to opt in before year-end, everything suggests that rollout is imminent. Until Spotify and the NMPA release the actual deal terms, though, we’re all flying blind—sheep being herded toward an agreement cliff we can’t fully see.

[A version of this post first appeared on MusicTechPolicy]

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X-thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. So Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
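One way to see why this is a loop rather than a market is to model the announced flows as a directed graph and walk it until the money returns to its source. This is a toy sketch, and the edges simply restate the deals described above (the labels, including the reported $300 billion Oracle figure, are taken from the text as illustrations, not from audited disclosures):

```python
# Model the cross-investment and cross-revenue flows described above as a
# directed graph, then find the paths where money cycles back to its source.

flows = {
    "Microsoft": [("OpenAI", "equity investment")],
    "OpenAI":    [("Microsoft", "cloud spend"),
                  ("Nvidia", "chip purchases"),
                  ("Oracle", "$300B cloud deal")],
    "Nvidia":    [("OpenAI", "equity investment")],
    "Oracle":    [("Nvidia", "chip purchases")],
}

def find_cycle(start: str) -> list[str] | None:
    """Depth-first walk; returns the first path that arrives back at start."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt, _deal in flows.get(node, []):
            if nxt == start:
                return path + [nxt]
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

for firm in flows:
    print(firm, "->", find_cycle(firm))
# Every firm sits on at least one cycle, e.g.:
#   Microsoft -> ['Microsoft', 'OpenAI', 'Microsoft']
#   Oracle    -> ['Oracle', 'Nvidia', 'OpenAI', 'Oracle']
```

In an ordinary market the graph would be mostly acyclic, with revenue arriving from customers outside the cluster.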

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.