The AI Safe Harbor is an Unconstitutional Violation of State Protections for Families and Consumers

By Chris Castle

The AI safe harbor slathered onto President Trump’s “big beautiful bill” is layered with intended consequences. Not the least of these is the effect on TikTok.

One of the more debased aspects of TikTok (and that’s a long list) is its promotion, through its AI-driven algorithms, of clearly risky behavior to its pre-teen audience. Don’t forget: TikTok’s algorithm is not just any algorithm. The Chinese government claims it as a state secret. And when the CCP claims a state secret, they ain’t playing. So keep that in mind.

One particularly depraved trend promoted by these algorithms was called the “Blackout Challenge.” The TikTok “blackout challenge” has been linked to the deaths of at least 20 children over an 18-month period. One of the dead children was Nylah Anderson. Nylah’s mom sued TikTok for her daughter because that’s what moms do. If you’ve ever had someone you love hang themselves, you will no doubt agree that you live with that memory every day of your life. This unspeakable tragedy will haunt Nylah’s mother forever.

Even lowlifes like TikTok should have settled this case; it should never have gotten in front of a judge. But no–TikTok tried to get out of it because of Section 230. Yes, that’s right–they killed a child and tried to get out of the responsibility. The District Court ruled that the loathsome Section 230 applied and Nylah’s mom could not pursue her claims. She appealed.

The Third Circuit Court of Appeals reversed and remanded, concluding that “Section 230 immunizes only information ‘provided by another’” and that “here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”

So…a new federal proposal threatens to slam the door on these legal efforts: the 10-year artificial intelligence (AI) safe harbor recently introduced in the House Energy and Commerce Committee. If enacted, this safe harbor would preempt state regulation of AI systems—including the very algorithms and recommendation engines that Nylah’s mom and other families are trying to challenge. 

Section 43201(c) of the “Big Beautiful Bill” includes pork, Silicon Valley style, entitled the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” which states:

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

The “Initiative” also appropriates “$500,000,000, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems….” So not only did Big Tech write themselves a safe harbor for their crimes, they are also taking $500,000,000 of corporate welfare to underwrite it, courtesy of the very taxpayers they are screwing over. Step aside, Sophocles: when it comes to tragic flaws, Oedipus Rex has got nothing on these characters.

Platforms like TikTok, YouTube, and Instagram use AI-based recommendation engines to personalize and optimize content delivery. These systems decide what users see based on a combination of behavioral data, engagement metrics, and predictive algorithms. While effective for keeping users engaged, these AI systems have been implicated in promoting harmful content—ranging from pro-suicide material to dangerous ‘challenges’ that have directly resulted in injury or death.
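To make that mechanism concrete, here is a minimal sketch of an engagement-optimized recommender. The field names, weights, and sample data are hypothetical illustrations of the design pattern, not any platform’s actual code:

```python
# Minimal sketch of an engagement-optimized recommender. All field
# names, weights, and sample data are hypothetical illustrations,
# not any platform's actual code. Note what is absent: nothing here
# asks whether the content is safe for the viewer.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_time_sec: float   # behavioral data: how long users watch
    shares: int             # engagement metric
    predicted_ctr: float    # predictive model output, 0.0-1.0

def engagement_score(v: Video) -> float:
    """Rank purely on predicted engagement; safety is not a factor."""
    return (0.5 * v.predicted_ctr
            + 0.3 * (v.watch_time_sec / 600)
            + 0.2 * (v.shares / 1000))

def recommend(feed: list[Video], k: int = 3) -> list[Video]:
    """Return the top-k videos by engagement score."""
    return sorted(feed, key=engagement_score, reverse=True)[:k]

feed = [
    Video("cooking tutorial", 420, 150, 0.04),
    Video("dangerous 'challenge' clip", 580, 900, 0.12),
]
for v in recommend(feed):
    print(v.title)  # the 'challenge' clip ranks first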

Families across the country have sued these companies, alleging that the AI-driven algorithms knowingly promoted hazardous content to vulnerable users. In many cases, the claims are based on state consumer protection laws, negligence, or wrongful death statutes. Plaintiffs argue that the companies failed in their duty to design safe systems or to warn users about foreseeable dangers. These cases are not attacks on free speech or user-generated content; they focus specifically on the design and operation of proprietary AI systems. 

If you don’t think that these platforms are depraved enough to actually raise safe harbor defenses, just remember what they did to Nylah’s mom–raised the exceptionally depraved Section 230 as a defense against their responsibility for the death of a child.

The AI safe harbor would prohibit states from enacting or enforcing any law that regulates AI systems or automated decision-making technologies for the next 10 years. This sweeping language could easily be interpreted to cover civil liability statutes that hold platforms accountable for the harms their AI systems cause. This is actually even worse than the vile Section 230–the safe harbor would be expressly targeting actual state laws. Maybe after all the appeals, say 20 years from now, we’ll find out that the AI safe harbor is unconstitutional commandeering, but do we really want to wait to find out?

Because these wrongful death lawsuits rely on arguments that an AI algorithm caused harm—either through its design or its predictive content delivery—the companies could argue that the moratorium shields them from liability. They might claim that the state tort claims are an attempt to “regulate” AI in violation of the federal preemption clause. If courts agree, these lawsuits could be dismissed before ever reaching a jury.

This would create a stunning form of corporate immunity even beyond the many current safe harbors for Big Tech: tech companies would be free to deploy powerful, profit-driven AI systems with no accountability in state courts, even when those systems lead directly to preventable deaths. 

The safe harbor would be especially devastating for families who have already suffered tragic losses and are seeking justice. These families rely on state wrongful death laws to hold powerful platforms accountable. Removing that path to accountability would not only deny them closure, but also prevent public scrutiny of the algorithms at the center of these tragedies.

States have long held the authority to define standards of care and impose civil liability for harms caused by negligence or defective products. The moratorium undermines this traditional role by barring states from addressing the specific risks posed by AI systems, even in the context of established tort principles. It would represent one of the broadest federal preemptions of state law in modern history—in the absence of federal regulation of AI platforms.

• In Pennsylvania, the parents of a teenager who committed suicide alleged that Instagram’s algorithmic feed trapped their child in a cycle of depressive content.
• Multiple lawsuits filed under consumer protection and negligence statutes in states like New Jersey, Florida, and Texas seek to hold platforms accountable for designing algorithms that systematically prioritize engagement over safety.
• TikTok faced multidistrict class action litigation claiming it illegally harvested user information from its in-app browser.

All such suits could be in jeopardy if courts interpret the AI moratorium as barring state laws that impose liability on algorithm-driven systems, and you can bet that Big Tech platforms will litigate the bejeezus out of the issue. Even if the moratorium were not intended to block wrongful death and other state law claims, its language may be broad enough to do so in practice—especially when leveraged by well-funded corporate legal teams.

Even supporters of federal AI regulation should be alarmed by the breadth of this safe harbor. It is not a thoughtful national framework based on a full record, but a shoot-from-the-hip blanket prohibition on consumer protection and civil justice. By freezing all state-level responses to AI harms, the AI safe harbor would consolidate power in the hands of federal bureaucrats and corporate lobbyists, leaving ordinary Americans with fewer options for recourse, not to mention clearly violating state police powers and the 10th Amendment.

To add insult to injury, the use of reconciliation to pass this policy—without full hearings, bipartisan debate, or robust public input—only underscores the cynical nature of the strategy. It has nothing to do with the budget, aside from the fact that Big Tech is snarfing down $500 million of taxpayer money for no good reason just so they can argue their land grab is “germane” and shoehorn it into reconciliation under the Byrd Rule. It’s a maneuver designed to avoid scrutiny and silence dissent, not to foster a responsible or democratic conversation about how AI should be governed.

At its core, the AI safe harbor is not about fostering innovation—it is about shielding tech platforms from accountability just like the DMCA, Section 230 and Title I of the Music Modernization Act. By preempting state regulation, it could block families from using long-standing wrongful death statutes to seek justice for the loss of their children and laws protecting Americans from other harms. It undermines the sovereignty of states, the dignity of grieving families, and the public’s ability to scrutinize the AI systems that increasingly shape our lives. 

Congress must reject this overreach, and the American public must remain vigilant in demanding transparency, accountability, and justice. The Initiative must go.

[A version of this post first appeared on MusicTechPolicy]

Now with added retroactive acrobatics: @DamianCollins calls on UK Prime Minister to stop Google’s “Text and Data Mining” Circus

By Chris Castle

Damian Collins (former chair of the UK Parliament’s Digital Culture Media and Sport Select Committee) warns of Google’s latest artificial intelligence shenanigans in a must-read opinion piece in the Daily Mail. Mr. Collins highlights Google’s attempt to lobby its way into what is essentially a retroactive safe harbor to protect Google and its confederates in the AI land grab. (Safe harbors aka pirate utopias.)

While Mr. Collins writes about Google’s egregious bullying in trying to rewrite the laws of the UK so it can free ride in his home country, the episode he documents is instructive for all of us. If Google & Co. will do it to the Mother of Parliaments, it’s only a matter of time until Google & Co. do the same everywhere or know the reason why. Their goal is to hoover up whatever of the world’s culture the AI platforms have not scraped already and–crucially–to get away with it. And as Austin songwriter Guy Forsyth says, “…nothing says freedom like getting away with it.”

The timeline of AI’s appropriation of all the world’s culture is critical to appreciating just how depraved Big Tech’s unbridled greed really is. The important thing to remember is that AI platforms like Google have been scraping the Internet to train their AI for some time now, possibly many years. This apparently includes social media platforms they control. My theory is that Google Books was an early effort at digitization for large language models, supporting products like corpus-based machine translation as a predecessor to Gemini (“your twin”) and other Google AI products. We should ask Ray Kurzweil.

There is increasing evidence that this is exactly what these people are up to.

The New York Times Uncovers the Crimes

According to an extensive long-form report in the New York Times by a team of highly respected journalists, it turns out that Google has been planning this “Text and Data Mining” land grab for some time. At the very moment YouTube was issuing press releases about their Music AI Incubator and their “partners”–Google was stealing anything not nailed down that anyone had hosted on their massive platforms, including Google Docs, Google Maps, and…YouTube. The Times tells us:

Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators….Google said that its A.I. models “are trained on some YouTube content,” which was allowed under agreements with YouTube creators, and that the company did not use data from office apps outside of an experimental program. 

I find it hard to believe that YouTube was allowed both to transcribe and to scrape under all its content deals, or that anyone parsed through every video to find the unprotected ones that fall within Google’s interpretation of the YouTube terms of use. So as we say in Texas, that sounds like bullshit for starters.

How does this relate to the Text and Data Mining exception that Mr. Collins warns of? Note that the NYT tells us “Google transcribed YouTube videos to harvest text.” That’s a clue.

As Mr. Collins tells us: 

Google [recently] published a policy paper entitled: Unlocking The UK’s AI Potential.

What’s not to like?, you might ask. Artificial intelligence has the potential to revolutionise our economy and we don’t want to be left behind as the rest of the world embraces its benefits.

But buried in Google’s report is a call for a ‘text and data mining’ (TDM) exception to copyright. 

This TDM exception would allow Google to scrape the entire history of human creativity from the internet without permission and without payment.

And, of course, Mr. Collins is exactly correct: it’s safe to assume that’s exactly what Google have in mind.

The Conspiracy of Dunces and the YouTube Fraud

In fairness, it wasn’t just Google ripping us off, but Google didn’t do anything to stop it as far as I can tell. One thing to remember is that YouTube was, and I think still is, not very crawlable by outsiders. It is almost certainly the case that Google would know who was crawling youtube.com, such as Bingbot, DuckDuckBot, Yandex Bot, or Yahoo Slurp, if for no other reason than that those spiders were not googlebot.
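The mechanics are mundane: a crawler identifies itself in the User-Agent header of each request, so a site operator can see at a glance whose spiders are hitting its servers. A minimal sketch, with a hypothetical bot list and sample log entries for illustration:

```python
# Minimal sketch: classify crawler traffic by User-Agent substring.
# The bot list and sample log entries are hypothetical illustrations.
KNOWN_BOTS = {
    "googlebot": "Google",
    "bingbot": "Microsoft Bing",
    "duckduckbot": "DuckDuckGo",
    "yandexbot": "Yandex",
    "slurp": "Yahoo",
}

def classify_crawler(user_agent: str) -> str:
    """Return the crawler's operator, or 'unknown/other' if unrecognized."""
    ua = user_agent.lower()
    for token, operator in KNOWN_BOTS.items():
        if token in ua:
            return operator
    return "unknown/other"

# A site operator scanning access logs can immediately tell which
# requests come from someone else's spider.
sample_log = [
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)",
]
for ua in sample_log:
    print(classify_crawler(ua))
```

With that understanding, the Times also tells us: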

OpenAI researchers created a speech recognition tool called Whisper. It could transcribe the audio from YouTube videos, yielding new conversational text that would make an A.I. system smarter.

Some OpenAI employees discussed how such a move might go against YouTube’s rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are “independent” of the video platform. [Whatever “independent” means.]

Ultimately, an OpenAI team transcribed more than one million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI’s president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4, which was widely considered one of the world’s most powerful A.I. models and was the basis of the latest version of the ChatGPT chatbot….

OpenAI eventually made Whisper, the speech recognition tool, to transcribe YouTube videos and podcasts, six people said. But YouTube prohibits people from not only using its videos for “independent” applications, but also accessing its videos by “any automated means (such as robots, botnets or scrapers).” [And yet it happened…]

OpenAI employees knew they were wading into a legal gray area, the people said, but believed that training A.I. with the videos was fair use. [Or could they have paid for the privilege?]
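As an aside, it is easy to underestimate how little engineering this harvest requires once the audio is in hand. A minimal sketch using the open-source whisper package OpenAI later released (the file path is a hypothetical placeholder, and actually obtaining YouTube audio is a separate step not shown here):

```python
# Minimal sketch: harvesting text from audio with OpenAI's open-source
# Whisper model (pip install openai-whisper). The file path below is a
# hypothetical placeholder; downloading audio from YouTube is a separate
# step not shown here.
import whisper

model = whisper.load_model("base")             # small, CPU-friendly model
result = model.transcribe("example_audio.mp3")
print(result["text"])                          # the harvested text
```

Multiply that handful of lines across more than one million hours of video and you have the harvest the Times describes.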

And sure enough, many of the AI platforms sued by creators (if not all of them) raise “fair use” as a defense, which is strangely reminiscent of the kind of crap we have been hearing from these people since 1999.

Now why might Google have permitted OpenAI to crawl YouTube and transcribe videos (and who knows what else)? Probably because Google was doing the same thing. In fact, the Times tells us:

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn’t stop OpenAI because Google had also used transcripts of YouTube videos to train its A.I. models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

So Google and its confederate OpenAI may well have conspired to commit massive copyright infringement against the owners of valid copyrights, and did so willfully and for purposes of commercial advantage and private financial gain. (Attempts to infringe are prohibited to the same extent as the completed act.) The acts of these confederates vastly exceed the thresholds for criminal prosecution for both infringement and conspiracy.

But to Mr. Collins’ concern, the big AI platforms transcribed likely billions of hours of YouTube videos to manipulate text and data–you know, TDM.

The New Retroactive Safe Harbor: The Flying Googles Bring their TDM Circus Act to the Big Tent With Retroactive Acrobatics

But also realize the effect of the new TDM exception that Google and their Big Tech confederates are trying to slip past the UK government (and our own, for that matter). A lot of the discussion about AI rulemaking acts as if new rules would govern only future AI data scraping. Au contraire, mes amis–on the contrary, the bad acts have already happened, and they happened on an unimaginable scale.

So what Google is actually trying to do is get the UK to pass a retroactive safe harbor that would deprive citizens of valuable property rights–and also pass a prospective safe harbor so they can keep doing the bad acts with impunity.

Fortunately for UK citizens, the UK Parliament has not passed idiotic retroactive safe harbor legislation like the U.S. Congress has. I am, of course, thinking of the vaunted Music Modernization Act (MMA) that drooled its way to a retroactive safe harbor for copyright infringement, a shining example of the triumph of corruption that has yet to be properly challenged in the US on Constitutional grounds. 

There’s nothing like the MMA absurdity in the UK, at least not yet. However, that retroactive safe harbor was not lost on Google, who benefited directly from it. They loved it. They hung it over the mantle next to their other Big Game trophy, the DMCA. And now they’d like to do it again to complete the triptych of legislative taxidermy.

Because make no mistake–a retroactive safe harbor would be exactly the effect of Google’s TDM exception. Not to mention it would also be a form of retroactive eminent domain, or what the UK analogously calls the compulsory purchase of property. Well…“purchase” might be too strong a word, more like “transfer,” because these people don’t intend to pay for a thing.

The effect of passing Google’s TDM exception would be to take property rights and other personal rights from UK citizens without anything like the level of process or compensation required for compulsory purchase–even when the government requires the sale of private property to another private entity (such as a railroad right of way or a utility easement).

The government is on very shaky ground with a TDM exception imposed for the benefit of private companies, indeed foreign private companies who can well afford to pay for what they take. There would be no government oversight on a case-by-case basis, no proper valuation, and the taking would be for entirely commercial purposes with no public benefit. In the US, it would likely violate the Takings Clause of our Constitution, among other things.

It’s Not Just the Artists

Mr. Collins also makes a very important point that might get lost among the stars–it’s not just the stars that AI is ripping off, it is everyone. As the New York Times story points out (and it seems there are more whistleblowers on this point every day), the AI platforms are hoovering up EVERYTHING that is on the Internet, especially on their affiliated platforms. That includes baby videos, influencers, everything.

This is why it is cultural appropriation on a grand scale, indeed a scale of depravity that we haven’t seen since the Nuremberg Trials. A TDM exception would harm all Britons in one massive offshoring of British culture.

[This post first appeared on MusicTech.Solutions]

Copyright Office Regulates The MLC: Selected Public Comments on the Copyright Office Black Box Study: The DLC Spills the Beans, Part I

We once had a mechanical licensing system in the U.S. that worked well enough for songwriters for 100 years. The problem with the mechanical licensing system wasn’t so much the licensing function as the royalty rate. The government held songwriters down for 70 years to a 1909-based royalty rate that for some reason was frozen in time (more on frozen mechanicals here). But if users failed to license, songwriters could at least sue for statutory damages.

After the Music Modernization Act passed in 2018, they even managed to give away songwriters’ rights to sue. The songwriter part of the three-part MMA is called “Title I,” and that’s the part that gave away the one hammer that songwriters had to be heard when their rights were infringed. They called it the “limitation on liability,” and it was retroactive to January 1, 2018—before the bill was actually passed by Congress and signed into law.

It’s entirely possible that even if you knew about the MMA, you didn’t know about this new safe harbor created by the same uber-rich companies that wrote themselves the DMCA safe harbor that has created the value gap and plagued artists for years and the “Section 230” safe harbor in the “Communications Decency Act” that services use to profit from human trafficking and revenge porn stalkers.  And now there’s the MMA safe harbor.

Only a handful of insiders got to be at the table when they gave away your rights in Title I without your even knowing what they were up to.  Don’t get us wrong, there are great things in the other parts of MMA dealing with closing the pre-72 loophole, some important changes to the rules for ASCAP and BMI with rate courts, and the fix for producers getting a fair share of SoundExchange royalties.  These are all good things.

The part that sucks is Title I, which created this new safe harbor giveaway that will bedevil songwriters for generations to come.

So you may be asking: how do we know this? Since the so-called “negotiations” for the Title I giveaway happened behind closed doors, how do we even know what happened? The answer is that we didn’t have the proof, because anyone who tried to offer constructive criticism to the “negotiators” for songwriters was menaced, threatened and stabbed in the back. Nobody was talking about the safe harbor giveaway.

But now we do have the proof, courtesy of the music services’ representative, the “Digital Licensee Coordinator,” which opened the kimono in its recent comments to the Copyright Office about the black box. (Read the entire DLC comment here.) Their comments make for quite a read, not only about the so-called “negotiations” by the unrepresentatives of songwriters but also about the run-up to the MMA in the private settlements that nobody sees.

The first issue is that the Copyright Office has proposed some well-meaning regulations to increase the likelihood that the black box will actually get paid to the songwriters who earned the money.  The services seem to be all in a huff about rules applying retroactively when they’ve been using old rules to organize their data.  You know, they don’t like this retroactive thing unless it’s a retroactive expansion of their safe harbor.  Then they like it just fine.

“The DLC emphatically opposes the Office’s proposal to retroactively expand the required reporting of sound recording and musical work information beyond that which is required by the existing regulations in 37 C.F.R. § 210.20. Those regulations were issued in interim form in December 2018, and finalized in March 2019, and unambiguously required collection of reporting information under the existing monthly statement of account regulations in 37 C.F.R. § 210.16. The Office has now proposed, in paragraph (e) of the proposed rule, to change the required reporting elements for the individual tracks, nearly two years after the MMA’s enactment and months before cumulative statements of account are due to be served.”

Sorry, but we think that the richest companies in commercial history, with trillions and trillions of dollars in market capitalization and the most advanced data mining capability in the known universe, can manage to figure out how to pay songwriters in a way that will actually result in songwriters getting paid. The truth is that they are so used to screwing songwriters that they are not going to lift a finger to help beyond the absolute minimum they have to do.

They got their retroactive safe harbor giveaway, so don’t come whinging about retroactivity if it makes the distributions more likely to get to the right person, something the services have uniformly failed to do since their founding.

But now it gets interesting.

“It is well-known that—prior to enactment of the MMA—a number of DMPs entered into industry-wide royalty distribution agreements under the auspices of the NMPA, structured to allow all unmatched works to be claimed by their owners and all accrued royalties to be paid out, in what became the model for the MMA. These agreements were designed to, and did, put tens of millions of dollars in statutory royalties in the hands of copyright owners—money that they had been unable to access due to the broken pre-MMA statutory royalty system.”

First of all—“money that they had been unable to access due to the broken pre-MMA statutory royalty system” is utter crap. The reason that services didn’t pay out is that they didn’t clear the songs but exploited them anyway. That’s also why Spotify got sued so many times and is still getting sued. It’s not that the system was broken, it’s that the services didn’t care and handled licensing incompetently. In case you missed it, that’s what they want to keep doing: extending into the future the same sloppy practices they got sued for in the past. The only thing new and improved about it is their absurd and undeserved safe harbor.

We don’t know what these “industry-wide royalty distribution agreements” were all about, but one thing we know for sure is that they weren’t “industry-wide” and the NMPA wouldn’t have had the authority to make those deals “industry-wide” in the first place.  “Industry-wide” seems to mean “with the major publishers” or with NMPA members or just plain insiders.  The implication is that “industry-wide” means everyone, which it clearly does not and cannot if you think about it for 30 seconds.

And if the copyright owners were owed a payment of their own money, the only reason they couldn’t “access” the funds is that the services wouldn’t let them. When you owe somebody money, you should pay them because you owe them, not act like you’re doing them a favor.

But here it comes:

Congress in the MMA’s limitation on liability provision enacted a compromise among stakeholders’ interests: elimination of the uncertainty of litigation facing DMPs in exchange for the transfer of accrued royalties to the MLC.

In other words, the services sat on the money and refused to pay until they got the MMA safe harbor.  That was the “trade”—do something the services were already required to do in return for something the songwriters were never obligated to do.  The songwriters paid for the safe harbor with their own money.

“As set forth in the relevant statutory provision, in exchange for payment of accrued royalties from “unmatched” usage prior to license availability date (and related reporting), DMPs are protected from the full brunt of copyright damages in any infringement lawsuits based on alleged failures to comply with the requirements of the prior mechanical licensing regime. The provision provides a clean slate for any past failures under the prior licensing regime for those DMPs who pay those back royalties and provide associated reporting. It provides requirements for DMPs that seek to take advantage of the limitation on liability, ensuring that DMPs that pay accrued royalties to the MLC can do so without having to second-guess whether the payment was worth it—that is, whether they qualify for the limitation.

“This was the heart of the deal struck by the stakeholders in crafting the MMA: to provide legal certainty for DMPs, through a limitation on liability, in exchange for the transfer of accrued royalties.”

Which “stakeholders” were these?  Did they include any of the plaintiffs who were then suing the services?  No.  Did they include anyone who didn’t drink the Kool-Aid?  No.

So let’s be clear—the reason the services deigned to actually pay money they owed for failing to license properly is that they didn’t want to be sued for screwing up. They wanted a vig in the form of a new safe harbor, and as the DLC tells us very, very clearly, this issue was at the core of the deal you didn’t make for Title I.

More in Part II

SOUTH AFRICA PETITION AGAINST THE SIGNING INTO LAW, THE CURRENT AMENDMENTS TO THE COPYRIGHT ACT No. 98 of 1978 BY HONORABLE PRESIDENT RAMAPHOSA

More skullduggery afoot from Google, this time in South Africa–and that’s the fact.  Minister Rob Davies and the Chair of the Portfolio Committee of the National Assembly’s Department of Trade & Industry both need to be called out on exactly how this legislation came to so closely resemble Google’s marching orders on safe harbors and pirate utopias.  As we’ve seen in Europe, Google has no respect for the nation state or local creators.

Sign the petition here and stand shoulder to shoulder with artists in South Africa against Big Tech’s lobbying onslaught.

Guest Post: The TAZ, Pirate Utopias and YouTube’s Obsession with Safe Harbors

Guest post by Chris Castle

“[A]s you begin to act in harmony with nature the Law garottes & strangles you – so don’t play the blessed liberal middleclass martyr – accept the fact that you’re a criminal & be prepared to act like one.”

Hakim Bey from “T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism”

YouTube’s CEO Susan Wojcicki is frantically wheeling around Europe this week in a despairing effort to establish a US-style safe harbor and undermine Article 13 of the Copyright Directive for a Digital Single Market.

Let’s understand that the very concept of a safe harbor for YouTube has its roots deep in the pirate utopias of Internet culture–a fact that may get overlooked if you aren’t a student of the Silicon Valley groundwater.

The Value Gap really owes its origins to the anarchist Peter Lamborn Wilson who wrote the seminal text on pirate utopias under the nom de plume “Hakim Bey” entitled “The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism” (1991) or, as it is known perhaps affectionately in hacker circles, simply “TAZ.”  I for one am not quite sure what makes “poetic terrorism” different from unpoetic terrorism, utopian terrorism, anarchic terrorism, or just plain old terrorism, but it may explain why YouTube just can’t bring itself to block terrorist videos before they find an audience.

But the TAZ helps illuminate my own more truncated term for the Value Gap–the alibi. An alibi for a pirate utopia where the pirates run cults called Google and enrich themselves from the prizes they go a-raiding.

In the early days of online piracy there was a fascination with locating servers in some legal meta-dimension that would be outside the reach of any law enforcement agency. Sealand, for example, captured the imagination of many proto-pirates, but Sealand is a little too clever to put itself in a position requiring evacuation by the Royal Navy before the shelling begins. So Sealand was ruled out.

Instead, Google–largely through YouTube–created its own pirate utopia through manipulation of the DMCA safe harbor, one of the worst bills ever passed by the U.S. Congress–and that’s saying something.  Google busily set about establishing legal precedents that would shore up the moat around their precious TAZ.  None of Google’s attacks on government should be surprising–anarchy is in their DNA.  As former Obama White House aide and Internet savant Susan Crawford tells us:

I was brought up and trained in the Internet Age by people who really believed that nation states were on the verge of crumbling…and we could geek around it.  We could avoid it.  These people were irrelevant.

And “these people” were stupid enough to give a safe harbor to protect the TAZ.  Because here’s the truth–the safe harbor that has made Google one of the richest companies in the world while they hoover up the world’s culture actually is the quintessential temporary autonomous zone.  It only exists in a changeable statute and the judicial interpretations of that statute, whether the DMCA or the Copyright Directive.  And like HAL in 2001: A Space Odyssey, they’re not going to allow that disconnection without a fight.

But YouTube’s CEO Susan Wojcicki will not be singing “A Bicycle Built for Two” as she flails about in the disconnect of YouTube. Her basic argument is that “imposing copyright liability is destructive of value” for “open platforms” like YouTube. “Open platforms” bear a striking resemblance to the TAZ, yes? Ms. Wojcicki, of course, purveys a counterintuitive fantasy, because the unauthorized uses for which copyright liability accrues are what destroy the value of the infringed work. What Ms. Wojcicki is harping about is how copyright infringement destroys value for YouTube and its multinational corporate parent, Google. This is what happens when stock options invade a pirate utopia.

Not only has she got it wrong, but what she is actually whingeing about is the threat posed to her YouTube pirate utopia by the Copyright Directive and the united creative community.  And as HAL might say, the YouTube mission is too important for me to allow you artists to jeopardize it.