What would Lars say? Artificial Intelligence: Nobel or RICO?

None of the true promise of AI requires violating the copyrights and rights of publicity of writers, artists, photographers, voice actors, and so on. You know, stuff like reading MRIs and X-rays, developing pharmaceuticals, advanced compounds, new industrial processes, etc.

All the shitty aspects of AI DO require intentional mass copyright infringement (a RICO predicate, BTW). You know, stuff like bots, deep fakes, autogenerated “yoga mat” music, SEO manipulation, autogenerated sports coverage, commercial chat bots, fake student papers, graphic artist knockoffs, robot voice actors, etc. But that’s where the no-value-add-parasitic-free-rider-easy-money is to be made. That’s why the parasitic free-riding VCs and private equity want a “fair use” copyright exemption.

Policy makers should understand that if they want to reduce the potential harms of AI, they need to protect and reinforce the intellectual property rights of individuals. IP rights are a natural (and already existing) brake on harmful AI. What we don’t need is legislative intervention that makes it easier to infringe IP rights and then tries to mitigate the (easily predictable and obvious) harms with additional regulation.

This is what happened with Napster and internet 1.0. The DMCA copyright infringement safe harbor for platforms unleashed all sorts of negative externalities that were never fairly mitigated by subsequent regulation.

Why do songwriters get $0.0009 a stream on streaming platforms? Because the platforms used the threat of DMCA-safe-harbored infringement by “bad actors” (often connected to the “good actors” via shared board members and investors*) to create a market failure that destroyed the value of songs. To “fix” the problem, federal legislation tasked the Copyright Royalty Board at the Library of Congress with setting royalty rates and forced songwriters to license to the digital platforms (songwriters cannot opt out). The rate-setting process was inevitably captured by the tech companies, and that’s how you end up with $0.0009 per stream.
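To put that rate in perspective, here is a quick back-of-the-envelope sketch. The roughly $0.0009 per-stream figure is the one cited above; the stream counts are hypothetical round numbers for illustration, not real payout data:

```python
# Back-of-the-envelope math on the roughly $0.0009-per-stream songwriter
# rate cited above. The stream counts below are hypothetical round numbers.

RATE_PER_STREAM = 0.0009  # dollars paid to the songwriter per stream


def songwriter_payout(streams: int) -> float:
    """Total songwriter royalties for a given number of streams."""
    return streams * RATE_PER_STREAM


def streams_needed(target_dollars: float) -> int:
    """Streams required to earn a target dollar amount at this rate."""
    return round(target_dollars / RATE_PER_STREAM)


# One million streams earns the songwriter about $900.
print(f"1,000,000 streams -> ${songwriter_payout(1_000_000):,.2f}")

# Earning a hypothetical $50,000 takes tens of millions of streams.
print(f"$50,000 requires {streams_needed(50_000):,} streams")
```

At that rate a song needs well over fifty million streams before the songwriter’s share reaches a typical annual salary, which is exactly the kind of market failure described above.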

TBF, the DMCA safe harbor requires the platforms to accommodate “technical measures” to prevent unlicensed use of copyrighted works, but this part of the safe harbor was never implemented, and the federal government never bothered to enforce this part of the law. This is the Napster playbook all over again:

1. Unleash a technology that you know will be exploited by bad actors**.

2. Ask for federal intervention that essentially legalizes the infringing behavior.

3. The federal legislation effectively creates a private monopoly or duopoly.

4. Trillions of dollars in wealth transferred from creators to a tiny cabal of no-value-add-parasitic-free-rider-easy-money VCs in Silicon Valley.

5. Lots of handwringing about the plight of creators.

6. Bullshit legislation that claims to help creators but actually mandates below-market rates for creators.

The funny thing is Lars Ulrich was right about Napster. [See our 2012 post Lars Was First and Lars Was Right.] At the time he was vilified by what was in reality a coordinated campaign run by a DC communications firm (working for Silicon Valley VCs) that masqueraded as a grassroots operation.

But go back and watch the Charlie Rose debate between Lars Ulrich and Chuck D: everything Lars Ulrich said was gonna happen, happened.

If Lars Ulrich hadn’t been cowed by a coordinated campaign by no-value-add-parasitic-free-rider-easy-money Silicon Valley VCs, he’d probably say the same thing about AI.

And he’d be right again.

2 thoughts on “What would Lars say? Artificial Intelligence: Nobel or RICO?”

  1. THE WEAPONIZATION OF LANGUAGE (AGAIN) BY SILICON VALLEY,
    THIS TIME REGARDING ARTIFICIAL INTELLIGENCE

    Charles J. Sanders
    Music Attorney

    While some may regard it as a triviality, the choice of a particular word to describe a process under governmental scrutiny is quite important, and the billionaires of Silicon Valley understand that reality well. Perhaps as well as the great satirist George Orwell did, except that Silicon Valley isn’t engaged in artistic irony a la Animal Farm. Its goals are purely commercial, abetted by linguistic propaganda that is constantly being popularized at the expense of American and global creators.

    The recent, initial “listening session” at the US Copyright Office in 2023 to discuss the tension between generative artificial intelligence and the rights of human creators was a disconcerting example of this technique. Throughout the discussions, our friends from Big Tech persisted over and over again in falsely labeling as “training” the loading of massive numbers of unlicensed, copyrighted works into computer systems for the purpose of allowing generative AI technology to “create” (i.e., algorithmically manufacture) derivatives. This misnomer is clearly part of a larger Silicon Valley strategy to anthropomorphize machines into entities with “rights,” so that the creators and owners of such machines may limit the ability of human creators to be credited and compensated for the use of their works.

    For those who are not regular readers of the Oxford Dictionary, the defined concept of “training” is limited to the teaching of skills to humans and higher mammals. As Casey Stengel might have reminded us, “you could look it up.” As such, one can imagine that even Orwell would be impressed with the generative AI “training” label. Analogizing the unlicensed use of databases of copyrighted works, as the backbone of commercial, derivative-making algorithmic systems, to sixth-grade violinists being taught to draw creative inspiration from the classics is no easy sleight of hand. Could his “Ministry of Truth” of 1984 have done any better?

    Of course, this is not the first time that Silicon Valley has pulled this gambit. Some of us are old enough to remember the days when literature, music, film, and visual works weren’t referred to as “content,” but rather as “art.” Taking their cue from Marshall McLuhan’s sardonic observation that “the medium is the message,” however, the technologists successfully used a linguistic substitute to turn protected cultural works, in the minds of the public, into the superfluous data that helps make our “great communications networks” (i.e., the mail room and similar delivery methodologies) thrive. McLuhan would be amused but not surprised by the number of humans duped into adopting “content” as the term of choice, a shift that helped lead to a precipitous decline in respect for the legal and ethical protections extended to actual creators rather than delivery services, and to an accompanying drop in the value of their artistic works, both to the artists personally and to national economies. McLuhan’s own cameo in Woody Allen’s “Annie Hall” foreshadowed it all.

    Are we really going to let this happen again? Even as Founder James Madison was writing the free speech principles of the First Amendment, he was contemporaneously noting in his contributions to the Federalist Papers that the societal benefit of protecting human authors and inventors was so obvious that no further discussion was necessary. Indeed, the US Constitution was written to include a directive in Article I, Section 8 that Congress has the authority, and is thereby encouraged, to extend such protections to human authors and inventors, and Congress acted immediately to do so.

    The takeaway? In discussing oversight of generative AI systems and developing ways to protect human creators in harmony with the introduction of new technologies, we would all be wise to remember how much words matter. The US Copyright Office must not be bullied or manipulated by Silicon Valley into buying into the linguistic somersault that the unauthorized ingesting of copyright-protected works into generative AI systems is somehow a happy-faced, fair use “training program,” rather than the derivative rights theft and counterfeiting process that in many cases it is.

    Unfortunately, the use of the term “training” in recent Copyright Office releases indicates that this may already be happening. Even some proponents of creators’ rights seem to be unwittingly led into the same “term of art” trap. They shouldn’t be. Haven’t creators, who for decades now have endured the electronic looting of their works being characterized as “sharing” (a la an episode of Mister Rogers concerning playground etiquette), suffered enough from the effects of this pernicious, rhetorical nonsense?

    The question is not whether “training” is a nice word describing a noble human pursuit. It is. The question is whether we are going to allow it to stand again as a fig-leafed synonym for even greater theft from American and global creators through systemic, commercial, unlicensed exploitation via generative AI.

    New technology surely needs to be welcomed. But that enthusiasm for progress must be accompanied by licensing and fair royalty compensation (while retaining current safeguards against infringing acts of outright plagiarism) to satisfy both our culture and our Constitution. Let’s continue our march toward technological and cultural progress by calling this phony “training” process what it really is: unauthorized, generative AI ingestion of copyrighted works that needs to be licensed and paid for.
