@ArtistRights Institute’s UK Government Comment on AI and Copyright: Why Can’t Creators Call 911?

We will be posting excerpts from the Artist Rights Institute’s comment in the UK Intellectual Property Office’s proceeding on AI and copyright. That proceeding is called a “consultation,” in which the Office solicits comments from the public (wherever located) on a proposed policy.

In this case it was the UK government’s proposal to require creators to “opt out” of AI data scraping by expanding the UK law governing “text and data mining,” which is what Silicon Valley wants in a big way. This idea produced an enormous backlash from the creative community that we’ll also be covering in coming weeks, as it’s very important that Trichordist readers be up to speed on the latest skulduggery by Big Tech in snarfing down all the world’s culture to train their AI (which has already happened and now has to be undone). For a backgrounder on the “text and data mining” controversy, watch this video by George York of the Digital Creators Coalition speaking at the Artist Rights Institute in DC.

In this section of the comment we offer a simple rule of thumb or policy guideline by which to measure the Government’s rules (which could equally apply in America): Can an artist file a criminal complaint against someone like Sam Altman?

If an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.

Why Can’t Creators Call 999 [or 911]?

We suggest a very simple policy guideline—if an artist is more likely to be able to get the police to stop their car from being stolen off the street than to get the police to stop the artist’s life’s work from being stolen online by a heavily capitalized AI platform, the policy will fail.  Alternatively, if an artist can call the police and file a criminal complaint against a Sam Altman or a Sergey Brin for criminal copyright infringement, now we are getting somewhere.

This requires that there be a clear “red light/green light” instruction that can easily be understood and applied by a beat copper.  This may seem harsh, but in our experience with the trillion-dollar market cap club, the only thing that gets their attention is a legal remedy that changes behavior rather than one that merely awards damages: an injunction to stop the madness, or prison to punish the wrongdoing.

As a threshold matter, it is clear that AI platforms intend to continue scraping all the world’s culture for their own purposes without obtaining consent or notifying rightsholders.  It is likely that the bigger platforms already have.  For example, we have found our own writings included in Copilot outputs.  Not only did we not consent to that use, but we were also never asked.  Moreover, Copilot’s use of these works clearly violates our terms of service.  This level of content scraping is hardly what was contemplated by the “data mining” exceptions.

Faux “Data Mining” is the Key that Picks the Lock of Human Expression

The Artist Rights Institute filed a comment, which we drafted, in the UK Intellectual Property Office’s consultation on Copyright and AI. The Trichordist will be posting excerpts from that comment from time to time.

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets. 

We strongly disagree that all the world’s culture can be squeezed through the keyhole of “data” to be “mined” as a matter of legal definitions.  In fact, a recent study by leading European scholars has found that data mining exceptions were never intended to excuse copyright infringement:

Generative AI is transforming creative fields by rapidly producing texts, images, music, and videos. These AI creations often seem as impressive as human-made works but require extensive training on vast amounts of data, much of which are copyright protected. This dependency on copyrighted material has sparked legal debates, as AI training involves “copying” and “reproducing” these works, actions that could potentially infringe on copyrights. In defense, AI proponents in the United States invoke “fair use” under Section 107 of the [US] Copyright Act [a losing argument in the one reported case on point[1]], while in Europe, they cite Article 4(1) of the 2019 DSM Directive, which allows certain uses of copyrighted works for “text and data mining.”

This study challenges the prevailing European legal stance, presenting several arguments:

1. The exception for text and data mining should not apply to generative AI training because the technologies differ fundamentally – one processes semantic information only, while the other also extracts syntactic information

2. There is no suitable copyright exception or limitation to justify the massive infringements occurring during the training of generative AI. This concerns the copying of protected works during data collection, the full or partial replication inside the AI model, and the reproduction of works from the training data initiated by the end-users of AI systems like ChatGPT….[2] 

Moreover, the existing text and data mining exception in European law was never intended to address AI scraping and training:

Axel Voss, a German centre-right member of the European parliament, who played a key role in writing the EU’s 2019 copyright directive, said that law was not conceived to deal with generative AI models: systems that can generate text, images or music with a simple text prompt.[3]

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets.  This lust for data, control and money will drive lobbyists and Big Tech’s amen corner to seek copyright exceptions under the banner of “innovation.”  Any country that appeases AI platforms in the hope of cashing in on tech at the expense of culture will be appeasing its way into an inevitable race to the bottom.  More countries can predictably be expected to offer ever more accommodating terms in the face of Silicon Valley’s army of lobbyists, who mean to engage in a lightning strike across the world.  The fight for the survival of culture is on.  The fight for the survival of humanity may literally be the next one up.

We are far beyond any reasonable definition of “text and data mining.”  What we can expect is for Big Tech to seek to distract both creators and lawmakers with inapt legal diversions, such as pretending that snarfing down all the world’s creations is mere “text and data mining.”  The ensuing delay will allow AI platforms to enlarge their training databases, raise more money, and further the AI narrative as they profit from the delay and capital formation.


[1] Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence, Inc., (Case No. 1:20-cv-00613 U.S.D.C. Del. Feb. 11, 2025) (Memorandum Opinion, Doc. 770 rejecting fair use asserted by defendant AI platform) available at https://storage.courtlistener.com/recap/gov.uscourts.ded.72109/gov.uscourts.ded.72109.770.0.pdf (“[The AI platform]’s use is not transformative because it does not have a ‘further purpose or different character’ from [the copyright owner]’s [citations omitted]…I consider the “likely effect [of the AI platform’s copying]”….The original market is obvious: legal-research platforms. And at least one potential derivative market is also obvious: data to train legal AIs…..Copyrights encourage people to develop things that help society, like [the copyright owner’s] good legal-research tools. Their builders earn the right to be paid accordingly.” Id. at 19-23).  See also Kevin Madigan, First of Its Kind Decision Finds AI Training Is Not Fair Use, Copyright Alliance (Feb. 12, 2025) available at https://copyrightalliance.org/ai-training-not-fair-use/ (discussion of AI platform’s landmark loss on fair use defense).

[2] Professor Tim W. Dornis and Professor Sebastian Stober, Copyright Law and Generative AI Training – Technological and Legal Foundations, Recht und Digitalisierung/Digitization and the Law (Dec. 20, 2024) (Abstract) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4946214

[3] Jennifer Rankin, EU accused of leaving ‘devastating’ copyright loophole in AI Act, The Guardian (Feb. 19, 2025) available at https://www.theguardian.com/technology/2025/feb/19/eu-accused-of-leaving-devastating-copyright-loophole-in-ai-act

@ArtistRights Institute Newsletter 2/24/25: UK @TheIPO Special

The Artist Rights Institute’s news digest Newsletter

UK Intellectual Property Office Copyright and AI Consultation

Artist Rights Institute’s Submission in IPO Consultation

News Media Association “Make it Fair” Campaign at AI Consultation

Artists Protest at IPO Consultation

The Times (Letters to the Editor): Times letters: Protecting UK’s creative copyright against AI

The Telegraph (James Warrington, Dominic Penna): Kate Bush accuses ministers of silencing musicians in copyright row

BBC News (Paul Glynn): Artists release silent album in protest at AI copyright proposals

Reuters (Sam Tabahriti): Musicians release silent album to protest UK’s AI copyright changes

Forbes (Leslie Katz): 1,000-Plus Musicians Drop ‘Silent Album’ To Protest AI Copyright Tweaks

The Daily Mail (Andy Jehring): More than 1,000 musicians including Kate Bush and The Clash release ‘silent album’ to show the impact Labour’s damaging AI plans would have on the music industry

The Guardian (Dan Milmo): Kate Bush and Damon Albarn among 1,000 artists on silent AI protest album

The Guardian (Dan Milmo): Why are creatives fighting UK government AI proposals on copyright?

The Independent (Martyn Landi): Kate Bush and Annie Lennox have released a completely silent album – here’s why

The Evening Standard (Martyn Landi): Musicians protest against AI copyright plans with silent album release

The Independent (Chris Blackhurst): Voices: AI cannot be allowed to thrive at the expense of the UK’s creative industries

The Independent (Holly Evans): UK creative industries launch campaign against AI tech firms’ content use

Silent Album: Is This What We Want? Campaign

More than 1,000 musicians have come together to release Is This What We Want?, an album protesting the UK government’s proposed changes to copyright law.

In late 2024, the UK government proposed changing copyright law to allow artificial intelligence companies to build their products using other people’s copyrighted work – music, artworks, text, and more – without a licence.

The musicians on this album came together to protest this. The album consists of recordings of empty studios and performance spaces, representing the impact we expect the government’s proposals would have on musicians’ livelihoods.

All profits from the album are being donated to the charity Help Musicians.