EU and Article 13: The Dystopia That Never Was and Never Will Be

Authors: Stefan Herwig and Lukas Schneider. This article originally appeared as “Upload Filters: The Dystopia Has Been Canceled” in the Frankfurter Allgemeine Zeitung here. German-language version here. Translated from German to English by Sarah Swift.
© Frankfurter Allgemeine Zeitung GmbH 2001 – 2019 All Rights Reserved. Reprinted with permission. 

The “Declaration of the Independence of Cyberspace” published in 1996 by John Perry Barlow begins with the words “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.” One reading of this text entirely rejects the possibility that processes of making and enforcing collectively binding decisions – political processes – apply on the Internet. Another possible reading sees the Internet as a public space governed by rules that must be established through democratic process while also holding that certain sub-spaces belong to the private rather than the public sphere. The distinction between public and private affairs, res publica and res privata, is essential for the functioning of social spaces. The concept of the “res publica” as “space concerning us all” led – and not only etymologically – to the idea of the republic as a form of statehood and, later, as a legitimate space for democratic policymaking.

On the Internet, this essential separation of private and public space has been utterly undermined, and the dividing lines between public and private spaces are becoming ever more blurred. We now have public spaces lacking in enforcement mechanisms and transparency and private spaces inadequately protected from surveillance and the misuse of data. Data protection is one obvious field on which this conflict is playing out, and copyright is another.

The new EU Directive on Copyright seeks to establish democratic rules governing the public dissemination of works. Its detractors have not only been vociferous – they have also resorted to misleading forms of framing. The concepts of upload filters, censorship machines and link taxes have been injected into the discussion. They are based on false premises.

Upload filters as cogs in a censorship machine

What campaigners against copyright reform term “upload filters” are not invariably filters with a blocking function; they can be simple identification systems. Content can be scanned at the time of uploading and compared to patterns from other known content. Such a system could, for example, recognize Aloe Blacc’s retro-soul hit “I Need a Dollar.” Such software systems can be compared to dictation software capable of identifying the spoken words in audio files. At this point in time, systems that can identify music tracks on the basis of moderately noisy audio signals can be programmed as coursework projects by fourth-semester students drawing on open-source code libraries. Stylizing such systems as prohibitively expensive or as a kind of “alien technology” underestimates both the dystopian potential of advanced pattern recognition systems (in common parlance: artificial intelligence) in surveillance software and similar use cases and the feasibility of programming legitimate and helpful systems. The music discovery app “Shazam,” to take a specific example, was created by a startup with only a handful of developers and a modest budget and is now available on millions of smartphones and tablets – for free. The myth that only tech giants can afford such systems is false, as the examples of Shazam and enterprises like Audible Magic show. Identifying works is a basic prerequisite for a reformed copyright regime, and large platforms will not be able to avoid doing so. Without an identification process in place, the use of licensed works cannot be matched to license holders. Such systems are, however, not filters.
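The identification step described above can be illustrated with a toy fingerprint matcher. This is a hedged sketch, not how Shazam or Audible Magic actually work: production systems hash constellations of spectral peaks from the audio, while this stand-in simply hashes short n-grams of a quantized sample stream. The function names and parameters are illustrative assumptions.

```python
from hashlib import blake2b

def shingles(samples, n=4):
    """Hash every n-gram of a quantized signal into a fingerprint set."""
    return {
        blake2b(bytes(samples[i:i + n]), digest_size=8).hexdigest()
        for i in range(len(samples) - n + 1)
    }

def match_score(upload, reference, n=4):
    """Fraction of the upload's shingles found in the reference fingerprint."""
    up = shingles(upload, n)
    return len(up & shingles(reference, n)) / len(up) if up else 0.0

# Toy "tracks": streams of 8-bit sample values standing in for real audio.
track = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100] * 3
clip = track[5:20]                       # an excerpt of the known track
noise = [7, 3, 9, 1, 8, 2, 6, 4, 5, 0]   # unrelated material

print(match_score(clip, track))   # → 1.0: every shingle of the clip matches
print(match_score(noise, track))  # → 0.0: no shingle matches
```

Note that the matcher only *identifies* a work and reports a score; whether a match leads to licensing, blocking, or human review is a separate policy decision, which is exactly the article's point.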

How do upload filters work?

The principal argument of critics intent on frustrating digital copyright reforms that had already appeared to be on the home stretch is their charge that the disproportionate blocking of uploads would represent a wholesale assault on freedom of speech or, indeed, a form of censorship. Here, too, it is necessary to look more closely at the feasibility and potential of available options for monitoring uploads – and especially to consider the degree of efficiency that can be achieved by linking human and automated monitoring. In a first step, identification systems could automatically block secure matches or allow them to pass by comparing them against data supplied by collecting societies. Licensed content could readily be uploaded and its use would be electronically registered. Collecting societies would distribute the license revenue raised to originators and artists. Non-licensed uses could automatically be blocked. In a second step, errors could be caught through a complaints-handling system making decisions on whether complaints are justified on the basis of human analysis – this would represent a clear improvement on the current procedures used by YouTube and Facebook. What automated pattern recognition systems cannot do is determine the meaning of content items at the semantic level. This means that they cannot identify legitimate uses of protected works – in the context of parodies or mash-ups, say, or when an image is reproduced online in a piece of art criticism. In such cases, the identification system would report a “fuzzy” match, stating for example that 40% of a given upload corresponded to a copyrighted file known from the database. To achieve a legally watertight result here, human judgment would be required. Humans can recognize parodies or incidental uses such as purely decorative uses of works in ways that do not constitute breaches of copyright.
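The two-step triage described above can be sketched as a routing function. Everything here – the work IDs, the 0.9 threshold, and the `Decision` type – is an illustrative assumption, not anything the directive itself prescribes:

```python
from dataclasses import dataclass

# Hypothetical registries fed by collecting societies and rightsholders.
LICENSED = {"track-001"}   # works the platform holds a license for
BLOCKED = {"track-002"}    # works whose rightsholders prohibit unlicensed use

@dataclass
class Decision:
    action: str   # "publish", "block", or "human_review"
    reason: str

def route_upload(match_id, score, threshold=0.9):
    """Step 1: automatic triage of secure matches; step 2: fuzzy matches
    go to a human reviewer who can judge parody, quotation, etc."""
    if match_id is None:
        return Decision("publish", "no known work detected")
    if score >= threshold:
        if match_id in LICENSED:
            return Decision("publish", "licensed use, registered for payout")
        return Decision("block", "unlicensed use of a protected work")
    # e.g. a 40% match: could be a parody, mash-up, or decorative use
    return Decision("human_review", "fuzzy match, semantic judgment needed")

print(route_upload("track-002", 0.4))  # routed to a human, not auto-blocked
```

The design point the sketch makes is that automation only handles the unambiguous ends of the spectrum; anything semantically uncertain falls through to human analysis rather than being blocked by default.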

The process of analysis could be simplified further by uploaders stating the context of use at the time works are uploaded. Notes such as “This video contains a parody and/or uses a copyrighted work for decorative purposes” could be helpful to analysts. The Network Enforcement Act (NetzDG) in Germany provides a good example of how automatic recognition and human analysis can work in tandem to analyze vast volumes of information. A few hundred people in Germany are currently tasked with deciding whether statements made on Facebook constitute incitement to hatred and violence against certain groups or are otherwise in breach of community rules. These judgments are significantly more complex than detecting impermissible uses of copyrighted works.

Being obliged to implement human monitoring will, of course, impose certain demands on platforms. But those most affected will be the platforms with the largest number of uploads. These major platforms will have the highest personnel requirements because they can host content of almost every kind: music, texts, video etc. Compliance will be much simpler for sites like a small photo forum. If only a modest number of uploads is involved, the forum operator can easily check them personally at the end of the working day. In that case, uploaders will simply have to wait for a brief period for their content to appear online. Or operators can opt to engage a service center like Acamar instead of adding these checks to their own workloads. Efficient monitoring is possible.

An additional misinterpretation propagated by campaigners against copyright reform is that platforms will have to take out licenses for all the content in the world from a near-infinite number of licensing partners. This, too, is inaccurate, since the transfer of liability to platforms only arises in cases in which rightsholders have specifically prohibited the unlicensed use of their works and had the works in question added to a database made available to platform operators through collecting societies. Visions of upload filters leading to dystopian censorship are, it follows, unfounded. This should be clear to anybody who has read the text of the directive and has even a basic working knowledge of informatics.

For a free Internet, we need copyright reform

The reform provides a basis for ensuring artists are fairly remunerated for their work and forces rightsholders to assist in the identification of works by registering their content in databases. Both effects are highly advantageous for users. Under the proposed regime, somebody who wants to use the Rage Against The Machine track “Killing in the Name” on the soundtrack of a protest video will no longer have to worry about copyright and can simply upload the video to a platform. Works used will be identified, and the relevant collecting societies will distribute the licensing revenue they receive from the platform. If Rage Against The Machine has objections to the transformative use of their work, they can communicate them to the database. Once the directive has been transposed into national law, this procedure will become standard practice.

It will become possible to publish more content and easier to comply with the law. All of this will contribute to more freedom on the Internet – the kind of freedom that stems from having democratic rules rather than allowing tech giants with their community rules and automated decision-making processes to determine what content is permitted on their platforms and what they are prepared to pay for it.

Barlow overlooked what the author William Gibson had already recognized – that enterprises, when they make the rules, can become more powerful than states. The rejection of state-guaranteed democratic rules creates the power vacuum required for this to happen. This is the wider context that explains why copyright reform is but one battlefield in the struggle for political power on the Internet. YouTube should not be allowed to become the Internet for videos just as Google has practically already become the only filter for web searches. Amazon should not be allowed to evolve from a vendor in the market to the provider of the market and to dictate the earnings of parcel couriers. Rules in digital space must be created and weighed against each other by democratic means and not in an arbitrary fashion dependent solely on who wields the most market power. Artists and net activists should fight this battle together, because the Internet is not some abstract parallel dimension: its data flows determine our creditworthiness just as they supply us with holiday pictures and pervade every aspect of modern life. If we relinquish democratic control over this public space, we will become subject to the despotic rule of neoliberal tech giants, and not merely on the Internet.

Barlow’s manifesto ends with the words: “We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.” Copyright reform will take our society one step closer to this aim. The quasi-governments that must now be opposed are called Google, Facebook and Amazon. Those who take the side of these giants in this controversy are opposed to the free Internet in the true meaning of the word.


Authors

Stefan Herwig and Lukas Schneider jointly run a think tank, Mindbase, that tackles questions of Internet policy with academic rigor. Stefan Herwig works in the music industry and advises politicians and enterprises on digital policy issues. Lukas Schneider is an information science expert and a musician and is active in Germany’s Green party (Alliance ’90/The Greens).