Trump’s Historic Kowtow to Special Interests: Why Trump’s AI Executive Order Is a Threat to Musicians, States, and Democracy

There’s a new dance in Washington—it’s called the KowTow

Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests led by an unelected bureaucrat, Silicon Valley Viceroy and billionaire investor David Sacks, whom the New York Times recently called out as a walking conflict of interest.

You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West

Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood 

None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws, and threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption,” which is when federal law overrides conflicting state laws: when Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. But there is no federal legislation here (EOs don’t count), so there can be no “preemption.”

Who Really Wrote This? The Sacks–Thierer Pipeline

This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI. It sounds that way because it was—Trump himself gave all the credit to David Sacks at the signing ceremony.

– Adam Thierer works at Google’s R Street Institute and pushes “permissionless innovation,” meaning companies should be free to harm the public before regulators are allowed to step in. 
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, documented by The New York Times, and stands to profit from deregulation.

Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.

The Trump AI Czar—VICEROY Without Senate Confirmation

Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.

He is not Senate‑confirmed. 
He is not subject to conflict‑of‑interest vetting. 
He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation. 

Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.

Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong

Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.

Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.

And Then There’s Ted Cruz

On top of everything else, the one state official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.

Why It Matters for Musicians

AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:

– AI systems train on our recordings without consent or compensation. 
– Major tech companies use federal power to avoid accountability. 
– The EO protects Silicon Valley elites, not artists, fans or consumers. 

This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.

It’s Not a National Standard — It’s a Power Grab

What’s happening isn’t leadership. It’s *regulatory capture dressed as patriotism*. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.

The lesson David Sacks and his band of thieves took from Dot Bomb 1.0—and are now teaching the world—is that the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.


NYT: Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends


The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.

But for all its detail, the Times buried the lede.

The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.

Of all the corrupt anecdotes in the New York Times’ must-read article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest, which are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out. Here are a couple of examples:


There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.

[Image: David Sacks with an American flag]

If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.

Without state enforcement, A.I. companies will face no meaningful accountability for:

  • child suicide induced by unregulated synthetic content
  • mass copyright theft embedded into permanent model weights
  • biometric and voiceprint extraction without consent
  • data-center sprawl that overwhelms local water, energy, and zoning systems
  • surveillance architectures exported globally
  • algorithmic harms that cannot be litigated under preempted state laws

These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes can impose duties stricter than federal law. The A.I. moratorium’s preemption would vaporize these claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.

The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.

The public will be living inside the system he designed.

There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. Swap the party labels and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.

[Image: “Of all the CEOs Google interviewed, Eric Schmidt was the only one that had been to Burning Man, which was a major plus.”]

So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — that the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.

The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.

Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”

The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.

It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Whether it is ten years or five years, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—Remember the AI moratorium that was defeated 99–1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 Presidential hopeful Ted Cruz (fundraising off the Moratorium, no doubt, for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and the Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times the first time around, the concept is probably unconstitutional for a host of reasons and will no doubt be challenged immediately.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

– INCOMPAS / AI Competition Center (AICC). Supporters/funders: Amazon, Google, Meta, Microsoft, telecom/cloud companies. Role: leads the push for 10-year state-law preemption; argues the moratorium prevents a “patchwork” of laws. Notes: identified as the central industry driver.
– Consumer Technology Association (CTA). Supporters/funders: Big Tech, electronics and platform-economy firms. Role: lobbying for federal preemption; opposed aggressive state AI laws. Notes: high influence with Commerce/Appropriations staff.
– American Edge Project. Supporters/funders: Meta-backed advocacy organization. Role: frames preemption as necessary for U.S. competitiveness vs. China; backed the moratorium. Notes: used as an indirect political vehicle for Meta.
– Abundance Institute. Supporters/funders: tech investors, deregulatory donors. Role: argues the moratorium is necessary for innovation; publicly predicts its return. Notes: messaging aligns with Silicon Valley VCs.
– R Street Institute. Supporters/funders: market-oriented donors; tech-aligned funders. Role: originated the “learning period” moratorium concept in 2024 papers by Adam Thierer. Notes: not a lobby shop, but provides the intellectual framework.
– Corporate lobbyists (Amazon, Google, Microsoft, Meta, OpenAI, etc.). Supporters/funders: internal lobbying shops plus outside firms. Role: promote “uniform national standards” in Congressional meetings. Notes: operate through and alongside the trade groups.

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
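A back-of-the-envelope sketch makes that cost-shifting arithmetic concrete. Every figure below is a hypothetical round number chosen for illustration, not data from any actual utility, tariff filing, or rate case:

```python
# Toy illustration of cost-of-service rate allocation.
# All figures are hypothetical; nothing here comes from an
# actual utility, rate case, or tariff filing.

upgrade_cost = 120_000_000   # hypothetical transmission upgrade, in dollars
datacenter_share_pct = 40    # share of the upgrade allocable to the new AI load
households = 300_000         # hypothetical residential ratepayers

# Cost-of-service rules cap the data center at its allocated share...
datacenter_pays = upgrade_cost * datacenter_share_pct // 100

# ...so the rest of the revenue requirement is recovered from everyone else.
spread_to_ratepayers = upgrade_cost - datacenter_pays
per_household = spread_to_ratepayers / households

print(f"Data center pays:     ${datacenter_pays:,}")
print(f"Spread to ratepayers: ${spread_to_ratepayers:,}")
print(f"Per household:        ${per_household:,.2f}")
```

On these made-up numbers, a single data-center buildout quietly loads $240 onto each household's long-run bill, the kind of arithmetic ratepayers rarely see itemized.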

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee to the federal government raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X-thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. So Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates the value of its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember—property rights are innovation, too, just ask Hernando de Soto (Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As the Artist Rights Institute and many others have demonstrated, “robots.txt” disclaimers buried in obscure places are not enough. Artists need a legally enforceable system that platforms must respect and that regulators can audit—not another worthless DMCA-style notice and notice and notice and notice and notice and maybe-takedown regime.
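For context, the voluntary status quo amounts to a few lines in a site’s robots.txt file. The crawler tokens shown here (GPTBot, Google-Extended, CCBot) are user agents those companies have published, but nothing compels any crawler to honor them; this is a sketch of the convention, not a binding control:

```text
# robots.txt at the root of an artist's website.
# Each block asks one AI-related crawler to stay away entirely.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's token for AI-training uses of crawled content
User-agent: Google-Extended
Disallow: /

# Common Crawl, a major source of training datasets
User-agent: CCBot
Disallow: /
```

Compliance is purely honor-system, which is exactly the point: a meaningful opt-out would attach legal consequences to ignoring these kinds of signals.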

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing—pirate libraries are front and center in AI litigation, which has largely turned into piracy litigation because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. On top of it, the truth of the case is that the man Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

Senator Cruz Joins the States on AI Safe Harbor Collapse—And the Moratorium Quietly Slinks Away

Silicon Valley Loses Bigly

In a symbolic vote that spoke volumes, the U.S. Senate decisively voted 99–1 to strike the toxic AI safe harbor moratorium from the vote-a-rama for the One Big Beautiful Bill Act (HR 1) according to the AP. Senator Ted Cruz, who had previously actively supported the measure, actually joined the bipartisan chorus in stripping it — an acknowledgment that the proposal had become politically radioactive.

To recap, the AI moratorium would have barred states from regulating artificial intelligence for up to 10 years, tying access to broadband and infrastructure funds to compliance. It triggered an immediate backlash: Republican governors, state attorneys general, parents’ groups, civil liberties organizations, and even independent artists condemned it as a blatant handout to Big Tech with yet another rent-seeking safe harbor.

Marsha Blackburn and Maria Cantwell to the Rescue

Credit where it’s due: Senator Marsha Blackburn (R–TN) was the linchpin in the Senate, working across the aisle with Sen. Maria Cantwell to introduce the amendment that finally killed the provision. Blackburn’s credibility with conservative and tech-wary voters gave other Republicans room to move — and once the tide turned, it became a rout. Her leadership was key to sending the signal to her Republican colleagues–including Senator Cruz–that this wasn’t a hill to die on.

Top Cover from President Trump?

But stripping the moratorium wasn’t just a Senate rebellion. This kind of reversal in must-pass, triple-whip legislation doesn’t happen without top cover from the White House, and in all likelihood, Donald Trump himself. The provision was never a “last stand” issue in the art of the deal. Trump can plausibly say he gave industry players like Masayoshi Son, Meta, and Google a shot, but the resistance from the states made it politically untenable. It was frankly a poorly handled provision from the start, and there’s little evidence Trump was ever personally invested in it. He certainly made no public statements about it, which is why I always felt it was such an improbable deal point—more likely a bargaining chip all along, whether the staff knew it or not.

One thing is for damn sure–it ain’t coming back in the House which is another way you know you can stick a fork in it despite the churlish shillery types who are sulking off the pitch.

One final note on the process: it’s unfortunate that the Senate Parliamentarian made such a questionable call when she let the AI moratorium survive the Byrd Bath, despite it being so obviously not germane to reconciliation. The provision never should have made it this far in the first place — but oh well. Fortunately, the Senate stepped in and did what the process should have done from the outset.

Now what?

It ain’t over til it’s over. The battle with Silicon Valley may be over on this issue today, but that’s not to say the war is over. The AI moratorium may reappear, reshaped and rebranded, in future bills. But its defeat in the Senate is important. It proves that state-level resistance can still shape federal tech policy, even when it’s buried in omnibus legislation and wrapped in national security rhetoric.

Cruz’s shift wasn’t a betrayal of party leadership — it was a recognition that even in Washington, federalism still matters. And this time, the states — and our champion Marsha — held the line. 

Brava, madam. Well played.

This post first appeared on MusicTechPolicy

@Unite4Copyright: Say “No” to Unlicensed AI Training

The biggest of Big Tech are scraping everything they can snarf down to train their AI—that means your Facebook, Instagram, YouTube, websites, Reddit, the works. Congress has to stop this—if you are as freaked out about this as we are, join the Copyright Alliance letter campaign here. It just takes a minute to send a personalized letter to Congress and the White House urging policymakers to protect creators’ rights and ensure fair compensation in the AI era.