At This Point We Have to Ask Ourselves: Is Google Opposed to Article 13 or the Nation State Itself? Pt. 3

This is very rough. I feel an urgency to get this all out to the public. Why? Currently there are at least three major and coordinated efforts by Silicon Valley (well, mostly Google) to undermine the regulations and authority of national governments: the EU Copyright Directive, the Canadian copyright consultation and the Register of Copyrights bill in the US. I’m publishing, revising and adding additional material in real time in the hope that people will look at all these efforts from the broadest possible perspective. The first two posts are here and here. Part II has been heavily revised since its original publication.

Active Measures: Cyberturfing

This series examines the dilemma liberal democracies face as Silicon Valley companies, especially information monopolies like Google and Facebook, exert power in the political realm. I framed the choice as follows:

  • Do technology companies and their allies sit at the apex of power and determine what sort of world we live in, with the boundaries and limits of our government, our commerce and our liberties defined by their algorithms and business models, and government reduced to a janitorial service that cleans up the negative externalities?
  • Or do democratic institutions sit at that apex?

In Part I, I outlined a basic history of internet exceptionalism, and then noted that when this pernicious notion is combined with techno-determinism you end up with something I call “internet imperialism.” Fundamentally, internet imperialism challenges the legitimacy of representative governments and tries to unwind 400 years of the liberal democratic order by removing vast swaths of human social and commercial activity from the purview of institutions legitimized by the consent of the governed.

In Part II, I drilled down into the ideas contained in A Declaration of the Independence of Cyberspace, showing how the adoption of this creed by commercial internet companies served their commercial advantage (cost-shifting negative externalities) and put them on a collision course with the authority of the nation state.

Specifically, internet companies currently use both active and passive measures to diminish the authority of the nation state. I break them down into four broad categories:

  • Cynically pushing a fiction that “cyberspace” has its own geographical space that is outside national geographic boundaries;
  • Intimidating democratically elected officials by activating online mobs, sometimes real but largely artificial (cyberturfing);
  • Spreading disinformation using proxies while simultaneously denying use of such proxies (“little green men”); and
  • Pressuring non-pliant governments by openly appealing to centrifugal forces that threaten them, including opposition parties, ultra-nationalist parties, extreme voices on the right and left, and even separatist movements (“The Catalonian Candidate”).

The first method is more or less passive. Internet companies suggest cyberspace is its own geographic space outside national boundaries, and thus claim national laws don’t apply. I examined this in great detail in Part II.

The next three measures, however, are what I would term “active measures,” whereby technology companies directly (or indirectly through proxies) run information campaigns against governments in order to intimidate officials, diminishing the scope of governance and permanently damaging a government’s ability to effectively govern virtual territory. In the framework I am using for this essay (internet imperialism), these measures can be seen as offensive operations that allow internet companies to expand and hold virtual territory. Sometimes these campaigns go further (as they did in the EU) and seek to harm governments outside the venue of cyberspace by stirring up the passions of regional separatists, ultra-nationalists and extreme voices on the right and left that seek to dismantle democratic institutions. Because these last three measures bear more than a passing resemblance to hybrid information warfare, I should introduce that concept.

Active Measures: Smells Like Hybrid Information Warfare

Hybrid warfare is a military strategy that employs political warfare and blends conventional warfare, irregular warfare and cyberwarfare with other influencing methods, such as fake news, diplomacy and foreign electoral intervention. By combining kinetic operations with subversive efforts, the aggressor intends to avoid attribution or retribution. –Wikipedia Contributors.

At first brush it may seem rather strong to compare Google’s interventions in the democratic processes of sovereign nations to warfare, but this is largely because most people have a 19th-century view of warfare as purely kinetic operations. In the last 50 years the information component of war has grown in importance. In the last 10 years it has arguably become the most important and effective component of modern warfare.

I’m confident both United States and Russian military thinkers would agree that nations on the periphery of the Russian Federation have been pulled out of the Russian sphere of influence largely via hybrid/information warfare. Kinetic operations were only used as a last resort. Think of the simmering conflicts in Ukraine and Georgia. While ISIL has used stunning displays of violence to claim and hold territory, the group largely used this violence to further psychological campaigns and thus weaken opposing security forces. Those forces “melted away” with little kinetic warfare, and ISIL was able to expand its territory dramatically with a few thousand fighters. If you step back from the violence, ISIL has largely conducted an information war.

As kinetic operations become relatively less important in modern warfare, Google’s lack of kinetic elements matters little: the rest of its operations are strikingly similar to the modern techniques used by state actors and terrorist groups.

Remember that a group or nation does not need to gain territory or achieve a clear victory to benefit from hybrid war. Simply weakening “adjacent” nations or opponents may produce tangible benefits.

In the case of Google and other Silicon Valley companies, virtual territory is a desirable prize. Any weakening of a government’s will or ability to govern parts of cyberspace is commercially beneficial. Internet platforms like Facebook and Google sit at the base of these ecosystems and are thus able to monetize most traffic and activity within these virtual colonies. The “larger” these virtual spaces, the more revenue these platforms generate.

Weapons of Information Warfare and Google’s Superiority Over the EU Parliament: Google’s Campaign Against the EU Copyright Directive

The five key elements of information warfare, summarized from a broader work on information warfare by Martin Libicki (1995), are:

  • Information Collection
  • Information Transport
  • Information Protection
  • Information Manipulation
  • Information Disturbance, Degradation and Denial.

In information warfare, the side that is able to dominate in all these weapons categories is impossible to defeat, at least without resorting to kinetic operations.

Information Collection

Google by design has an extraordinary advantage over the EU Parliament. Its Android OS, Gmail, website analytics and ad networks give it crucial information on virtually everyone on the planet. The EU government? Not so much.

Information Transport 

More than 90% of all web searches are conducted through Google’s search engine, and the default mobile search engine in Apple’s iOS is Google. Android is a wholly owned Google product. Most people click on the first few results, so Google does not have to block information to suppress an opponent’s information; it simply down-ranks links. Look at the screenshots below. These searches were conducted using a private browser and a VPN to minimize the “browsing bubble” effect on search results.

Figures 1 and 2: Search results for Article 13. Because Google controls the transport of information, information that supports its commercial interests can be pushed up in the rankings, while information that damages its commercial interests can be down-ranked.

Information Protection

“By scrambling its own messages and unscrambling those of the other side, each side performs the quintessential act of information warfare, protecting its own view of reality while degrading that of the other side.” -Martin Libicki

Information warfare is about distorting reality. To win in information warfare, your reality must prevail over your opponent’s reality, even over actual, positivist reality.

It’s not enough to control the information flow. In information warfare an entity must also protect information that the other side could use to damage its capabilities. Google is expert at “Google washing,” or obscuring damaging information. There is no better example of this than Google’s own “transparency report,” which confuses opponents searching for the “Google Transparency Project” (a project generally critical of Google).

Which result would your typical MEP staffer click on? By outranking an adversary’s competing information, Google partially shields itself from damaging information. Remember, Google controls the information distribution channel. Essentially, through the power of its search engine, Google can “encrypt” damaging information while “decrypting” and disseminating information that harms its opponents.


Information Manipulation 

Information manipulation in the context of information warfare is the alteration of information with intent to distort the opponent’s picture of reality. This can be done using a number of technologies, including computer software for editing text, graphics, video, audio, and other information transport forms. Design of the manipulated data is usually done manually so those in command have control over what picture is being presented to the enemy, but the aforementioned technologies are commonly used to make the physical manipulation process faster once content has been decided. –Megan Burns, 1999 graduate student paper, Carnegie Mellon University

In this instance we see a Google proxy (see the section on proxies) conduct a three-step process that gives it enormous command and control over information directed at MEPs.

Step 1. The Google proxy creates disinformation and images to share.

Step 2. The Google proxy Open Media (see the section on proxies) creates technological tools to distribute the misinformation.

Step 3. Some real constituents use these tools, but the tools were also used en masse by unknown allied parties. Telltale signs of automation were also present: late-night activity, massive retweets of low-follower accounts, identical emails, etc. See here and here.
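The telltale automation signals above (late-night activity, amplification of low-follower accounts, identical messages) can be checked with simple heuristics. Below is a minimal sketch in Python; the field names (`hour`, `followers`, `text`) and the thresholds are my own illustrative assumptions, not any official bot-detection method.

```python
from collections import Counter

def automation_signals(tweets):
    """Score a batch of tweets for the telltale signs of automation
    described above: late-night posting, amplification by low-follower
    accounts, and near-identical message text.

    Each tweet is a dict with hypothetical keys: 'hour' (0-23, local
    time), 'followers' (author follower count), and 'text'.
    """
    n = len(tweets)
    # Share of tweets posted in the wee hours (before 6 a.m.).
    late_night = sum(1 for t in tweets if t["hour"] < 6) / n
    # Share of tweets from accounts with almost no followers.
    low_follower = sum(1 for t in tweets if t["followers"] < 50) / n
    # Share of tweets whose exact text appears more than once.
    counts = Counter(t["text"] for t in tweets)
    duplicated = sum(c for c in counts.values() if c > 1) / n
    return {
        "late_night_share": late_night,
        "low_follower_share": low_follower,
        "duplicate_text_share": duplicated,
    }

batch = [
    {"hour": 3, "followers": 2, "text": "Stop #Article13!"},
    {"hour": 4, "followers": 5, "text": "Stop #Article13!"},
    {"hour": 14, "followers": 900, "text": "I disagree with Art. 13 because..."},
    {"hour": 2, "followers": 1, "text": "Stop #Article13!"},
]
print(automation_signals(batch))
```

High values on all three scores together suggest coordination rather than an organic constituent campaign; any single score alone proves little.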

Any particular country’s MEPs can be targeted. As the images indicate, Axel Voss was being automatically targeted when the screenshot was captured. This suggests some sort of command and control directing the flow of false information (non-existent outraged mobs) at Voss. This is classic cyberturfing. MEPs on the receiving end of these tweets, emails and calls imagined they faced real opposition from their constituents. It turns out they didn’t. See the dismal Pirate Party rallies opposing Article 13.

Information Disturbance, Degradation and Denial.

“Spoofing is a technique used to degrade the quality of the information being sent to the enemy. The enemy’s flow of information is disturbed by the introduction of a ‘spoof’, or fake message, into that flow. The technique works because it allows you to provide ‘false information to the targeted competitor’s collection systems to induce this organization to make bad decisions based upon this faulty information.’” –Cramer 1996

Thousands of tweets against the Copyright Directive have been sent to MEPs from fake Twitter accounts. The United States’ FCC recently faced a similar situation when it turned out that many of the millions of comments it received on net neutrality were fake. In each case the “enemy’s flow of information is disturbed by the introduction of a ‘spoof’, or fake message, into that flow.” Zoom out and there is a grander fake message: thousands of constituents are outraged by the copyright directive.

Finally, as evidenced by the automated tweets below, the campaign targeted specific MEPs at different times, providing “false information to the targeted competitor’s collection systems to induce this organization to make bad decisions based upon this faulty information.” See the targeted robo-tweets below.

Hundreds of automated tweets an hour directed at MEP Beatriz Becerra.  Quite surprising since it’s the wee hours of the morning and the frequency of tweets seems to be increasing!?

So you get it? A Google-funded webpage using automated tools to misinform and mislead a member of the EU Parliament, using what often appear to be fake Twitter accounts. What do we call this?

“Another way to disturb the information being received by one’s opponent is to introduce noise into the frequency they are using. Background noise makes it difficult for the enemy to separate the actual message from the noise.” -Burns 1999

Again, see above. The volume of automated tweets makes it impossible for MEPs to “hear” information that Google does not want them to hear: say, the voices of artists and other creators who might be in favor of the copyright reforms.

“Finally, overloading is a technique used to deny information to the enemy in both military and civilian settings. By sending a volume of data to the enemy’s communication system that is too large for it to handle, one causes a crash or severe degradation of the system’s ability to deliver information. The system is so busy dealing with the overload, it is unable to deliver the essential information to those who need it.” –Burns 1999

By flooding MEPs with thousands of spam messages, phone calls and emails, the fake information overwhelms all other information that might be useful to MEPs as they consider this bill.

A similar thing happened to the US FCC in May 2017 as it considered rescinding so-called “Title II” authority over “net neutrality.” Late-night TV comedian John Oliver stirred considerable interest in the net neutrality debate (apparently with the help of former Google outside counsel Marvin Ammori). Reports of a DDoS attack on the FCC comment system quickly surfaced; this has since been publicly debated. However, David Bray, the then-CIO of the FCC, later noted in a Medium post:

When the events of 08 May happened, my quick analysis of the ratio of 35,000 API requests per minute we were receiving, relative to the number of 90,000 comments being filed in the first half of the day, indicated that ratio to be extraordinarily high and lopsided (the Team also relayed that the API requests were continuing to increase, so we were seeing at least 2 million API requests per hour around the middle of the day — yet not a similar number of comments being received). Separate from actual people wanting to comment, I was concerned we were also being spammed by something automated. If this continued, it might deny system resources from actual people wanting to comment on the high-profile issue.
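The lopsidedness Bray describes can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch follows; the 12-hour window is my own assumption, reading “the first half of the day” literally.

```python
def requests_per_comment(api_requests_per_min, comments_filed, hours_elapsed):
    """Compare the machine-traffic rate to the human comment rate.

    A comment system serving real commenters should see a modest number
    of API requests for each comment actually filed; a huge ratio
    suggests automated load unrelated to genuine participation.
    """
    requests_per_hour = api_requests_per_min * 60
    comments_per_hour = comments_filed / hours_elapsed
    return requests_per_hour / comments_per_hour

# Figures from Bray's quote: 35,000 API requests per minute against
# 90,000 comments filed over (an assumed) first 12 hours of the day.
ratio = requests_per_comment(35_000, 90_000, 12)
print(f"roughly {ratio:.0f} API requests per comment filed")
```

On those numbers the system was fielding on the order of hundreds of API requests for every comment actually filed, which is the lopsided ratio that prompted Bray’s spam concern.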

There is no doubt that John Oliver generated hundreds of thousands of comments. The open question, still hotly debated, is whether this overload of the comment system was automated and, if so, whether it was perpetrated by groups for or against Title II net neutrality authority. Certainly there is evidence of automated identical submissions from both camps. Regardless, the overloading of the comment system made it difficult for FCC commissioners to “hear” the true voices of their constituents. The information on the channel was “disturbed,” “degraded,” and “denied.” The true signal was not distinguishable from the noise.

This, ultimately, is the point of cyberturfing: impose a manufactured online mob “signal” over the actual signal from constituents. This doesn’t exclude the possibility that an overwhelming majority of constituents agree with the cyberturfed signal. The problem is that it’s a fake signal, and it disturbs the relationship between elected officials and their constituents. It jams the channel that carries the “consent of the governed.” And obviously this sort of fakery can be used in all sorts of nefarious ways to undermine the proper functioning of representative governments.

For instance discouraging elected officials from imposing regulations in domains that internet firms regard as their virtual territory (internet imperialism).

This is what appears to be happening in the EU as they debate the Copyright Directive.

In Part IV we will look at the use of proxies (“little green men”) to deliver disturbed and degraded information.