The Growing Backlash Against AI Data Centers: Local Resistance and the Infrastructure Crunch

As we’ve reported many times, communities across the US are increasingly pushing back against the explosive growth of AI-driven data centers. Major concerns include skyrocketing electricity demand, massive water consumption for cooling, noise pollution from giant fans, loss of prime agricultural and residential land, and rising utility bills passed on to local residents. As of May 2026, independent trackers report approximately 69–78 U.S. jurisdictions that have enacted bans, restrictions, or moratoriums on new data centers. Many of these measures also target the new high-voltage transmission lines required to power them.

This wave of resistance highlights a deepening tension between the rapid expansion of AI infrastructure and local priorities around quality of life, sustainability, and community control.

1. Michigan: The Epicenter of Local Moratoriums

I think you could safely say that Michigan currently leads the nation in local opposition to data center construction, largely triggered by the controversial $16+ billion OpenAI-Oracle Stargate AI data center project in Saline Township, Washtenaw County. Despite a 4-1 township planning commission vote against rezoning and strong resident protests, the Stargate construction project advanced through legal channels, igniting widespread defensive actions across the state.

  • More than 50 communities (cities and townships) have enacted temporary moratoriums, covering roughly 1,500 square miles — an area comparable to the size of Rhode Island.
  • Depending on the tracker, between 25 and 51 local moratoriums remained active as of early 2026.
  • State lawmakers have introduced bills (HB 5594–5596) calling for a one-year statewide pause on new hyperscale data centers, along with stricter rules on water and electricity connections.
  • Some local utilities, such as Ypsilanti’s, have imposed their own 12-month bans on water hookups for large AI facilities—though those bans will eventually expire.

Key issues in Michigan should sound familiar: massive water usage, strain on the electrical grid, and the loss of local zoning authority.

2. Virginia: Transmission Line Battles in “Data Center Alley”

Virginia is home to the highest concentration of data centers in the United States (over 550 facilities), particularly in Northern Virginia. Opposition here focuses heavily on both the data centers and the extensive transmission lines needed to support them.

  • Strong protests in Loudoun, Prince William, Hanover, and other counties against new projects and expansions.
  • Major conflicts over high-voltage lines such as the Valley Link and Joshua Falls projects, which cross multiple counties and impact neighborhoods, historic sites, and conserved rural land.
  • Dominion Energy has faced repeated legal and community challenges regarding route selections.
  • Legislative debates continue over ending billions in tax incentives and studies projecting residential electricity rate increases of up to $37 per month by 2040.

3. Georgia: Statewide Pause Efforts Amid High Project Volume

Georgia has seen hundreds of announced data center projects, prompting both local and statewide responses.

  • Bills such as HB 1059 and HB 1012 propose temporary statewide pauses on new permitting (potentially until 2027–2028) to allow time for impact studies.
  • Several counties, including DeKalb and Camden, have passed moratoriums ranging from several months to a year while updating zoning ordinances.
  • Residents voice concerns about energy costs, water consumption, loss of land, and whether tax incentives truly benefit local communities.

Georgia’s combination of legislative proposals and county-level actions reflects growing resistance in a rapidly developing market.

4. North Carolina: Rising Local and Policy Pushback

North Carolina ranks among the top states for new moratorium activity as data center developers expand beyond traditional East Coast hubs.

  • Multiple counties and municipalities have passed restrictions or temporary moratoriums citing infrastructure strain, zoning issues, and community impacts.
  • Policy proposals such as HB 1063 seek to require hyperscale developers to fully cover the costs of power, water, and grid upgrades rather than passing them to ratepayers.
  • Growing focus on the environmental and visual effects of both data centers and supporting transmission lines.

North Carolina represents an emerging hotspot where early local actions may shape future statewide policy.

5. Indiana: County-Level Resistance and High-Stakes Conflicts

Indiana has seen intense localized opposition, particularly in rural counties.

  • Counties such as White and Fulton have enacted 6-to-12-month moratoriums to study impacts and strengthen local ordinances.
  • Trackers show at least 6 formal actions, with several others in discussion.
  • Primary concerns include the conversion of prime agricultural land, rising utility rates, and the industrialization of rural communities.

Indiana illustrates how even mid-sized proposals can trigger strong community responses and political tension.

Broader Implications and the Path Forward

The five most active states — Michigan, Virginia, Georgia, North Carolina, and Indiana — capture the national picture. Resistance is bipartisan, spans urban and rural areas, and increasingly includes opposition to the massive transmission lines that accompany data center projects.

Common themes include fears that data centers consume disproportionate amounts of power and water while shifting costs onto existing residents. Proponents argue these facilities bring jobs, tax revenue, and are essential for America’s AI competitiveness. Critics insist that growth must be responsible, with full cost recovery, better siting practices, efficiency standards, and genuine community input.

As AI demand continues to surge, this local “revolt” tests whether the physical infrastructure can scale fast enough without compromising quality of life and environmental goals. So far, I think the answer from communities is a big no.

Expect more moratoriums, ballot initiatives, legal battles, and negotiations in the coming months. The outcome will significantly influence not only the future of AI but also national energy policy and land-use planning for years to come.

A Subtle Shift in US AI Policy and Why Artists Should Pay Attention

Something is moving in Washington.

A recent report suggests that the Trump administration is considering a new executive order on artificial intelligence. On its face, that might sound like more of the same—another round of AI policy chatter promoted by David Sacks, the Silicon Valley lobbyist and billionaire investor who has been pushing a “don’t slow it down” approach.

But this time feels different.

Sacks appears to have been pushed aside, at least a bit. Don’t count your chickens just yet. But the shift in tone matters. And the timing matters even more. The order is reportedly being weighed ahead of Trump’s visit to China, where AI development has become a central axis of geopolitical competition.

That context changes the story. For the past several years, the dominant policy posture around AI has been simple: don’t slow down innovation. Because China.

That argument has been doing a lot of work. It has been used to wave away concerns about training data, to discourage state and local oversight of data center buildouts, and to greenlight massive infrastructure commitments—including dedicated nuclear power for AI campuses run by Google, Microsoft, Meta, and Amazon.

In other words: build the machine first. Deal with the consequences later.

Artists have been the raw material for that strategy.

Musicians, book authors, visual artists—these are not just inputs. They are the training ground for systems that are now capable of producing substitutive outputs that overwhelm creators and flood markets. And until now, the White House policy conversation has largely treated that massive theft as an acceptable cost of staying ahead, a posture championed by David Sacks, the R Street Institute, and the hyperscalers.

What makes this potential executive order interesting is that it suggests a shift away from that posture. If the administration is preparing to meet with China on AI, it has an incentive to show that the United States takes control, governance, and strategic resources seriously. And in this context, creative works start to look less like “free fuel” and more like national assets.

That may matter for artists.

Because once you recognize that AI systems derive value from the signals embedded in creative works—voice, tone, style, expression—you start to see those works differently. They are not just content. They are repositories of identity and cultural value.

And they are being extracted at scale.

A more protective policy framework—whether it focuses on model review, training data standards, or provenance—creates an opening. It creates space for the idea that artists are not just upstream contributors, but stakeholders whose work underpins the entire system.

This doesn’t mean the executive order, if it comes, will solve the problem. It won’t.

But it could mark an inflection point.

If policymakers begin to treat AI not just as a technology race but as a resource competition, then the role of creators becomes harder to ignore. You can’t claim to lead in AI while simultaneously disregarding the human material that makes those systems work.

That contradiction is starting to surface. The industry allowed artists, and even copyright itself, to be lumped in with zoning boards as “bureaucracy,” which in turn allowed David Sacks and his ilk to try to create an alternate universe where “innovation” ran wild to “beat China” while also selling chips to China out the back door.

For artists, the takeaway is simple: pay attention to the shift in tone. Policy signals often precede legal ones. What gets framed as a national priority today can become a regulatory framework tomorrow.

For the first time in a while, there are signs that the conversation may be moving—however slightly—toward recognizing the value that artists bring to the AI ecosystem.

Sacks may not be gone. Silicon Valley rarely loses outright, just look at the MLC. But even a partial shift away from the “move fast and ingest everything” playbook is meaningful.

Because for artists, the question has never been whether AI will be built.

The question is whether it will be built on you or with you.

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI. AI is a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait… Oh no, sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, that means Silicon Valley shills will be all over it to try to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill would do a lot of good things, including protections for copyright. But the first substantive section of Senator Blackburn’s summary is a game changer. She would establish an obligation on AI platforms to be responsible for known or predictable harm that can befall users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect to happen when a company does the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test side effects. Tobacco companies… well, you know the rest. The law doesn’t demand perfection—but it does demand reasonable care and imposes a “duty of care” on companies that put dangerous products into the marketplace.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products with design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered. At tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, and the rest. At a certain level, this is very likely not guesswork or mere predictability. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment—it means you can anticipate the type of harm that flows to users you target from how the system is built.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research showing the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world—not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct”—that drivers chose the Pinto to make a statement, and therefore safety regulation would violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions—how content is ranked, how users are nudged, how systems optimize for engagement—that’s not speech. That’s product design. You can measure it, test it, and audit it (which they do), and make it safer (which they don’t).

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.

Trump’s Historic Kowtow to Special Interests: Why Trump’s AI Executive Order Is a Threat to Musicians, States, and Democracy

There’s a new dance in Washington—it’s called the KowTow

Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests, led by an unelected bureaucrat, Silicon Valley Viceroy, and billionaire investor David Sacks, whom the New York Times recently called out as a walking conflict of interest.

You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West

Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood 

None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws and threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption” which is when federal law overrides conflicting state laws. When Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. There is no federal legislation (EOs don’t count), so there can be no “preemption.”

Who Really Wrote This? The Sacks–Thierer Pipeline

This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI. It sounds that way because it was—Trump himself gave all the credit to David Sacks at the signing ceremony.

– Adam Thierer works at Google’s R Street Institute and pushes “permissionless innovation,” meaning companies should be allowed to harm the public before regulation is allowed. 
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, documented by The New York Times, and stands to profit from deregulation.

Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.

The Trump AI Czar—VICEROY Without Senate Confirmation

Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.

He is not Senate‑confirmed. 
He is not subject to conflict‑of‑interest vetting. 
He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation. 

Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.

Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong

Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.

Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.

And Then There’s Ted Cruz

On top of everything else, the one state official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.

Why It Matters for Musicians

AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:

– AI systems train on our recordings without consent or compensation. 
– Major tech companies use federal power to avoid accountability. 
– The EO protects Silicon Valley elites, not artists, fans or consumers. 

This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.

It’s Not a National Standard — It’s a Power Grab

What’s happening isn’t leadership. It’s *regulatory capture dressed as patriotism*. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.

What David Sacks and his band of thieves are teaching the world is the lesson they learned from Dot Bomb 1.0—the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.


@ArtistRights Institute Newsletter 11/17/25: Highlights from a fast-moving week in music policy, AI oversight, and artist advocacy.

American Music Fairness Act

Don’t Let Congress Reward the Stations That Don’t Pay Artists (Editor Charlie/Artist Rights Watch)

Trump AI Executive Order

White House drafts order directing Justice Department to sue states that pass AI regulations (Gerrit De Vynck and Nitasha Tiku/Washington Post)

DOJ Authority and the “Because China” Trump AI Executive Order (Chris Castle/MusicTech.Solutions)

THE @DAVIDSACKS/ADAM THIERER EXECUTIVE ORDER CRUSHING PROTECTIVE STATE LAWS ON AI—AND WHY NO ONE SHOULD BE SURPRISED THAT TRUMP TOOK THE BAIT

Bartz Settlement

WHAT $1.5 BILLION GETS YOU:  AN OBJECTOR’S GUIDE TO THE BARTZ SETTLEMENT (Chris Castle/MusicTechPolicy)

Ticketing

StubHub’s First Earnings Faceplant: Why the Ticket Reseller Probably Should Have Stayed Private (Chris Castle/ArtistRightsWatch)

The UK Finally Moves to Ban Above-Face-Value Ticket Resale (Chris Castle/MusicTech.Solutions)

Ashley King: Oasis Praises Victoria’s Strict Anti-Scalping Laws While on Tour in Oz — “We Can Stop Large-Scale Scalping In Its Tracks” (Artist Rights Watch/Digital Music News)

NMPA/Spotify Video Deal

GUEST POST: SHOW US THE TERMS: IMPLICATIONS OF THE SPOTIFY/NMPA DIRECT AUDIOVISUAL LICENSE FOR INDEPENDENT SONGWRITERS (Gwen Seale/MusicTechPolicy)

WHAT WE KNOW—AND DON’T KNOW—ABOUT SPOTIFY AND NMPA’S “OPT-IN” AUDIOVISUAL DEAL (Chris Castle/MusicTechPolicy)