Faux “Data Mining” is the Key that Picks the Lock of Human Expression

The Artist Rights Institute filed a comment, which we drafted, in the UK Intellectual Property Office’s consultation on Copyright and AI. The Trichordist will be posting excerpts from that comment from time to time.

We strongly disagree that all the world’s culture can be squeezed through the keyhole of “data” to be “mined” as a matter of legal definitions.  In fact, a recent study by leading European scholars has found that data mining exceptions were never intended to excuse copyright infringement:

Generative AI is transforming creative fields by rapidly producing texts, images, music, and videos. These AI creations often seem as impressive as human-made works but require extensive training on vast amounts of data, much of which are copyright protected. This dependency on copyrighted material has sparked legal debates, as AI training involves “copying” and “reproducing” these works, actions that could potentially infringe on copyrights. In defense, AI proponents in the United States invoke “fair use” under Section 107 of the [US] Copyright Act [a losing argument in the one reported case on point[1]], while in Europe, they cite Article 4(1) of the 2019 DSM Directive, which allows certain uses of copyrighted works for “text and data mining.”

This study challenges the prevailing European legal stance, presenting several arguments:

1. The exception for text and data mining should not apply to generative AI training because the technologies differ fundamentally – one processes semantic information only, while the other also extracts syntactic information

2. There is no suitable copyright exception or limitation to justify the massive infringements occurring during the training of generative AI. This concerns the copying of protected works during data collection, the full or partial replication inside the AI model, and the reproduction of works from the training data initiated by the end-users of AI systems like ChatGPT….[2] 

Moreover, the existing text and data mining exception in European law was never intended to address AI scraping and training:

Axel Voss, a German centre-right member of the European parliament, who played a key role in writing the EU’s 2019 copyright directive, said that law was not conceived to deal with generative AI models: systems that can generate text, images or music with a simple text prompt.[3]

Confounding culture with data to confuse both the public and lawmakers requires a vulpine lust that we haven’t seen since the breathless Dot Bomb assault on both copyright and the public financial markets.  This lust for data, control and money will drive lobbyists and Big Tech’s amen corner to seek copyright exceptions under the banner of “innovation.”  Any country that appeases AI platforms in the hope of cashing in on tech at the expense of culture will be appeasing its way toward an inevitable race to the bottom.  More countries can predictably be expected to offer ever more accommodating terms in the face of Silicon Valley’s army of lobbyists, who mean to engage in a lightning strike across the world.  The fight for the survival of culture is on.  The fight for the survival of humanity may literally be next.

We are far beyond any reasonable definition of “text and data mining.”  What we can expect is for Big Tech to distract both creators and lawmakers with inapt legal diversions, such as pretending that snarfing down all the world’s creations is mere “text and data mining.”  The ensuing delay will allow AI platforms to enlarge their training databases, raise more money, and further the AI narrative as they profit from the delay and capital formation.


[1] Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence, Inc., Case No. 1:20-cv-00613 (D. Del. Feb. 11, 2025) (Memorandum Opinion, Doc. 770 rejecting fair use asserted by defendant AI platform) available at https://storage.courtlistener.com/recap/gov.uscourts.ded.72109/gov.uscourts.ded.72109.770.0.pdf (“[The AI platform]’s use is not transformative because it does not have a ‘further purpose or different character’ from [the copyright owner]’s [citations omitted]…I consider the “likely effect [of the AI platform’s copying]”….The original market is obvious: legal-research platforms. And at least one potential derivative market is also obvious: data to train legal AIs…..Copyrights encourage people to develop things that help society, like [the copyright owner’s] good legal-research tools. Their builders earn the right to be paid accordingly.” Id. at 19-23).  See also Kevin Madigan, First of Its Kind Decision Finds AI Training Is Not Fair Use, Copyright Alliance (Feb. 12, 2025) available at https://copyrightalliance.org/ai-training-not-fair-use/ (discussion of AI platform’s landmark loss on fair use defense).

[2] Professor Tim W. Dornis and Professor Sebastian Stober, Copyright Law and Generative AI Training – Technological and Legal Foundations, Recht und Digitalisierung/Digitization and the Law (Dec. 20, 2024) (Abstract) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4946214

[3] Jennifer Rankin, EU accused of leaving ‘devastating’ copyright loophole in AI Act, The Guardian (Feb. 19, 2025) available at https://www.theguardian.com/technology/2025/feb/19/eu-accused-of-leaving-devastating-copyright-loophole-in-ai-act

@ArtistRights Institute Newsletter 2/24/25: UK @TheIPO Special

The Artist Rights Institute’s news digest

UK Intellectual Property Office Copyright and AI Consultation

Artist Rights Institute’s Submission in IPO Consultation

News Media Association “Make it Fair” Campaign at AI Consultation

Artists Protest at IPO Consultation

The Times (Letters to the Editor): Times letters: Protecting UK’s creative copyright against AI

The Telegraph (James Warrington, Dominic Penna): Kate Bush accuses ministers of silencing musicians in copyright row

BBC News (Paul Glynn): Artists release silent album in protest at AI copyright proposals

Reuters (Sam Tabahriti): Musicians release silent album to protest UK’s AI copyright changes

Forbes (Leslie Katz): 1,000-Plus Musicians Drop ‘Silent Album’ To Protest AI Copyright Tweaks

The Daily Mail (Andy Jehring): More than 1,000 musicians including Kate Bush and The Clash release ‘silent album’ to show the impact Labour’s damaging AI plans would have on the music industry

The Guardian (Dan Milmo): Kate Bush and Damon Albarn among 1,000 artists on silent AI protest album

The Guardian (Dan Milmo): Why are creatives fighting UK government AI proposals on copyright?

The Independent (Martyn Landi): Kate Bush and Annie Lennox have released a completely silent album – here’s why

The Evening Standard (Martyn Landi): Musicians protest against AI copyright plans with silent album release

The Independent (Chris Blackhurst): Voices: AI cannot be allowed to thrive at the expense of the UK’s creative industries

The Independent (Holly Evans): UK creative industries launch campaign against AI tech firms’ content use

Silent Album: Is This What We Want? Campaign

More than 1,000 musicians have come together to release Is This What We Want?, an album protesting the UK government’s proposed changes to copyright law.

In late 2024, the UK government proposed changing copyright law to allow artificial intelligence companies to build their products using other people’s copyrighted work – music, artworks, text, and more – without a licence.

The musicians on this album came together to protest this. The album consists of recordings of empty studios and performance spaces, representing the impact we expect the government’s proposals would have on musicians’ livelihoods.

All profits from the album are being donated to the charity Help Musicians.

@FTC to AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive

An important position paper from the Federal Trade Commission about AI:

You may have heard that “data is the new oil”—in other words, data is the critical raw material that drives innovation in tech and business, and like oil, it must be collected at a massive scale and then refined in order to be useful. And there is perhaps no data refinery as large-capacity and as data-hungry as AI. 

Companies developing AI products, as we have noted, possess a continuous appetite for more and newer data, and they may find that the readiest source of crude data are their own userbases. But many of these companies also have privacy and data security policies in place to protect users’ information. These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy….

It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy. (emphasis in original)…

The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the “rules of the game” on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.

Read the post on the FTC website

DMCA Take Two: UK Government is to Propose Death Blow Opt-Out for AI Training

Americans are freedom loving people, and nothing says freedom like getting away with it.
Long Long Time, written by Guy Forsyth

Big Tech is jamming another safe harbor boondoggle through another government, this time for artificial intelligence. The defining feature of the DMCA scam is that every artist in the known universe has to single-handedly monitor the entire Internet to catch each instance of theft in the act. Once they catch one, artists have to send a DMCA notice on a case-by-case basis, and then overcome what is 9 times out of 10 a BS counternotification. If they disagree with the BS counternotification, artists are faced with filing a federal copyright infringement lawsuit, which they don’t file because they can’t afford it.

And so it goes.

This is what an “opt-out” looks like. We have seen this movie before and we know how it ends–it’s called getting away with it. Let us be very clear with lawmakers: notice and takedown and “opt-out” is bullshit. It has never worked, and it has imposed a phenomenal cost on the artist community, to the point that many if not most artists have just given up. The Future of Music Coalition and A2IM surveyed their members and determined that over half don’t even bother to look anymore because they can’t afford to run the search. The next largest group gives up because they get no response to their notices.

Let’s understand–every time an artist gives up even looking for infringers, that’s a win for Big Tech. That’s why, year after year, there are over a billion DMCA notices sent to a variety of infringers.

Ask yourself in all honesty: are you surprised? What head-up-the-ass buffoon would ever think that an opt-out would work? Unless the plan was to let Big Tech run wild and hand both the biggest corporations in commercial history and the lawmakers a big fig leaf to cover up the theft?

That same approach is rearing its head again in both the US Congress and the UK. But this time it is being applied to artificial intelligence training and outputs. This is stark raving madness, drooling idiocy. At least with the DMCA an artist could look for an actual copy of their works that could be found by text-based search, audio fingerprints or just listening.

With AI, the whole point is to disguise the underlying work used to train the AI. The AI platform operator knows which works it used, which sites it scraped, and every other way to identify the infringed works. When sued, these operators have refused to disclose their training materials, claiming that the sources of those materials are trade secrets and confidential.

Once a work is ingested into the AI, the output is also purposely distorted from the original. Again, impossible to conclusively identify. So what exactly are you opting out of? To whom do you send your little notice?
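
For what it’s worth, the closest thing that exists today to an AI-training “opt-out” is a robots.txt file asking known AI crawlers to keep out. A minimal sketch for a hypothetical site, using OpenAI’s published GPTBot token and Google’s Google-Extended token:

    # robots.txt for a hypothetical site: a polite request, not an enforceable right.
    # GPTBot (OpenAI) and Google-Extended (Google AI training) are published tokens;
    # compliance is voluntary, and nothing here reaches works already ingested.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Note what that does not do: it is site-wide rather than work-by-work, it asks rather than prevents, and it says nothing about works scraped before the file existed. That is the “protection” on offer.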

This entire opt-out idea is through the looking glass into the upside-down world. Yet it is true.

The most current manifestation of this insanity is the UK government’s intention to pass legislation that would force artists to use an opt-out model, possibly on a work-by-work basis. And the worst part is that somehow lawmakers have been led to think that an opt-out is a protection for artists.

Orwellian.

Fortunately the UK government may seek public comment on this opt-out proposal. We will keep you posted on what the UK government actually proposes and how you can comment.

In the meantime, if you live in the UK, it’s not too early to contact your MP and ask them what the hell is going on. You may want to ask them why you can call the police when your car is being stolen but there’s nobody to call when your life’s work is being stolen. Particularly when the government protects the thieves.

Updates for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We are announcing the time schedule and speakers for the 4th annual Artist Rights Symposium on November 20. The symposium is supported by the Artist Rights Institute and was founded by Dr. David C. Lowery, Lecturer at the University of Georgia Terry College of Business.

This year the symposium is hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  We are also pleased to have a Kogod student presentation on speculative ticketing as part of the speaker lineup.

Admission is free, but please reserve a spot with Eventbrite; seating is limited!

The symposium starts at 8:30 am and ends with a reception at 4:30 pm. The symposium will be recorded as an audiovisual presentation for distribution at a later date, but will not be live-streamed. If you attend, understand that you may appear in audience shots, questions from the floor, or still images. The symposium social media hashtag is #ArtistRightsKogod.

Schedule

8:30 — Doors open, networking coffee.

9:00-9:10 — Welcome remarks by David Marchick, Dean, Kogod School of Business

9:10-9:15 — Welcome remarks by Christian L. Castle, Esq., Director, Artist Rights Institute

9:15-10:15 — THE TROUBLE WITH TICKETS:  The Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia
  Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

10:15-10:30: NIVA Speculative Ticketing Project Presentation by Kogod students

10:30-10:45: Coffee break

10:45-11:00: OVERVIEW OF CURRENT ISSUES IN ARTIFICIAL INTELLIGENCE LITIGATION: Kevin Madigan, Vice President, Legal Policy and Copyright Counsel, Copyright Alliance

11:00-12:00: SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

12:00-12:30: Lunch break

12:30-1:30: Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.

1:30-1:45: Coffee break

1:45-2:45: CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It

Richard James Burgess, MBE, President & CEO, American Association of Independent Music, New York
Helienne Lindvall, President, European Composer & Songwriter Alliance, London, England
Abby North, President, North Music Group, Los Angeles
Anjula Singh, Chief Financial Officer and Chief Operating Officer, SoundExchange, Washington DC

Moderator:  Christian L. Castle, Esq, Director, Artist Rights Institute, Austin, Texas

2:45-3:15: Reconvene across the street in the International Service Founders Room for concluding speakers and reception

3:15-3:30: OVERVIEW OF INTERNATIONAL ARTIFICIAL INTELLIGENCE LEGISLATION: George York, Senior Vice President, International Policy, RIAA

3:30-4:30: NAME, IMAGE AND LIKENESS RIGHTS IN THE AGE OF AI:  Current initiatives to protect creator rights and attribution

Jeffrey Bennett, General Counsel, SAG-AFTRA, Washington, DC
Jen Jacobsen, Executive Director, Artist Rights Alliance, Washington DC
Jalyce E. Mangum, Attorney-Advisor, U.S. Copyright Office, Washington DC

Moderator: John Simson, Program Director Emeritus, Business & Entertainment, Kogod School of Business, American University

4:30-5:30: Concluding remarks by Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program, followed by a reception

Now with added retroactive acrobatics: @DamianCollins calls on UK Prime Minister to stop Google’s “Text and Data Mining” Circus

By Chris Castle

Damian Collins (former chair of the UK Parliament’s Digital Culture Media and Sport Select Committee) warns of Google’s latest artificial intelligence shenanigans in a must-read opinion piece in the Daily Mail. Mr. Collins highlights Google’s attempt to lobby its way into what is essentially a retroactive safe harbor to protect Google and its confederates in the AI land grab. (Safe harbors aka pirate utopias.)

While Mr. Collins writes about Google’s efforts to rewrite the laws of the UK so it can free ride in his home country, which is egregious bullying, the episode he documents is instructive for all of us. If Google & Co. will do it to the Mother of Parliaments, it’s only a matter of time until Google & Co. do the same everywhere or know the reason why. Their goal is to hoover up all the world’s culture that the AI platforms have not scraped already and, crucially, to get away with it. And as Austin songwriter Guy Forsyth says, “…nothing says freedom like getting away with it.”

The timeline of AI’s appropriation of all the world’s culture is critical to appreciating just how depraved Big Tech’s unbridled greed really is. The important thing to remember is that AI platforms like Google have been scraping the Internet to train their AI for some time now, possibly many years. That apparently includes the social media platforms they control. My theory is that Google Books was an early digitization effort for large language models, supporting products like corpus machine translation as a predecessor to Gemini (“your twin”) and other Google AI products. We should ask Ray Kurzweil.

Increasing evidence suggests that this is exactly what these people are up to.

The New York Times Uncovers the Crimes

According to an extensive long-form report in the New York Times by a team of highly respected journalists, it turns out that Google has been planning this “Text and Data Mining” land grab for some time. At the very moment YouTube was issuing press releases about its Music AI Incubator and its “partners,” Google was stealing anything that was not nailed down that anyone had hosted on its massive platforms, including Google Docs, Google Maps, and…YouTube. The Times tells us:

Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators….Google said that its A.I. models “are trained on some YouTube content,” which was allowed under agreements with YouTube creators, and that the company did not use data from office apps outside of an experimental program. 

I find it hard to believe that YouTube was allowed to transcribe and scrape under all its content deals, or that Google parsed through every video to find the unprotected ones that fall under Google’s interpretation of the YouTube terms of use. So, as we say in Texas, that sounds like bullshit for starters.

How does this relate to the Text and Data Mining exception that Mr. Collins warns of? Note that the NYT tells us “Google transcribed YouTube videos to harvest text.” That’s a clue.

As Mr. Collins tells us: 

Google [recently] published a policy paper entitled: Unlocking The UK’s AI Potential.

What’s not to like? you might ask. Artificial intelligence has the potential to revolutionise our economy and we don’t want to be left behind as the rest of the world embraces its benefits.

But buried in Google’s report is a call for a ‘text and data mining’ (TDM) exception to copyright. 

This TDM exception would allow Google to scrape the entire history of human creativity from the internet without permission and without payment.

And, of course, Mr. Collins is correct: it’s safe to assume that’s exactly what Google have in mind.

The Conspiracy of Dunces and the YouTube Fraud

In fairness, it wasn’t just Google ripping us off, but as far as I can tell Google did nothing to stop the others. One thing to remember is that YouTube was, and I think still is, not very crawlable by outsiders. It is almost certainly the case that Google would know who was crawling youtube.com, be it Bingbot, DuckDuckBot, YandexBot, or Yahoo Slurp, if for no other reason than that those spiders were not Googlebot.
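
Major crawlers announce themselves in the User-Agent header of every request, so telling friend from stranger is trivial. A minimal sketch of the idea in Python, purely illustrative and in no way Google’s actual tooling:

    # Illustrative only -- not Google's actual systems. Classify crawler hits
    # by the User-Agent tokens major search spiders publish. Real platforms
    # also verify bots by reverse-DNS lookup, since a User-Agent can be spoofed.
    KNOWN_BOTS = {
        "googlebot": "Google",
        "bingbot": "Microsoft",
        "duckduckbot": "DuckDuckGo",
        "yandexbot": "Yandex",
        "slurp": "Yahoo",
    }

    def classify(user_agent: str) -> str:
        """Return the operator behind a crawler User-Agent, or 'unknown'."""
        ua = user_agent.lower()
        for token, operator in KNOWN_BOTS.items():
            if token in ua:
                return operator
        return "unknown"

    # A non-Google spider stands out immediately:
    print(classify("Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"))  # Microsoft

The point is not the code; the point is that the platform operator can see exactly whose robots are at the door. With that understanding, the Times also tells us: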

OpenAI researchers created a speech recognition tool called Whisper. It could transcribe the audio from YouTube videos, yielding new conversational text that would make an A.I. system smarter.

Some OpenAI employees discussed how such a move might go against YouTube’s rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are “independent” of the video platform. [Whatever “independent” means.]

Ultimately, an OpenAI team transcribed more than one million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI’s president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4, which was widely considered one of the world’s most powerful A.I. models and was the basis of the latest version of the ChatGPT chatbot….

OpenAI eventually made Whisper, the speech recognition tool, to transcribe YouTube videos and podcasts, six people said. But YouTube prohibits people from not only using its videos for “independent” applications, but also accessing its videos by “any automated means (such as robots, botnets or scrapers).” [And yet it happened…]

OpenAI employees knew they were wading into a legal gray area, the people said, but believed that training A.I. with the videos was fair use. [Or could they have paid for the privilege?]
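
And the technical bar for “yielding new conversational text” could hardly be lower. Whisper is open source; a minimal sketch of a single-file transcription with the openai-whisper package, assuming a hypothetical local audio file, looks like this:

    # A minimal sketch with the open-source openai-whisper package
    # (pip install openai-whisper; requires ffmpeg on the PATH).
    # "interview.mp3" is a hypothetical local file.
    import whisper

    model = whisper.load_model("base")          # small general-purpose model
    result = model.transcribe("interview.mp3")  # speech in, text out
    print(result["text"])                       # plain text, ready for a training set

The mass pipeline the Times describes is just that step repeated over a million-plus hours of downloaded audio.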

And strangely enough, many if not all of the AI platforms sued by creators raise “fair use” as a defense, which is reminiscent of the kind of crap we have been hearing from these people since 1999.

Now why might Google have permitted OpenAI to crawl YouTube and transcribe videos (and who knows what else)? Probably because Google was doing the same thing. In fact, the Times tells us:

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn’t stop OpenAI because Google had also used transcripts of YouTube videos to train its A.I. models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

So Google and its confederate OpenAI may well have conspired to commit massive copyright infringement against the owners of valid copyrights, and did so willfully and for purposes of commercial advantage and private financial gain. (Attempts to infringe are prohibited to the same extent as the completed act.) The acts of these confederates vastly exceed the thresholds for criminal prosecution for both infringement and conspiracy.

But to Mr. Collins’ concern, the big AI platforms likely transcribed billions of hours of YouTube videos to manipulate text and data–you know, TDM.

The New Retroactive Safe Harbor: The Flying Googles Bring their TDM Circus Act to the Big Tent With Retroactive Acrobatics

But also consider the effect of the new TDM exception that Google and its Big Tech confederates are trying to slip past the UK government (and our own, for that matter). A lot of the discussion about AI rulemaking acts as if new rules would govern only future AI data scraping. Au contraire, mes amis–on the contrary, the bad acts have already happened, and they happened on an unimaginable scale.

So what Google is actually trying to do is get the UK to pass a retroactive safe harbor that would deprive citizens of valuable property rights–and also pass a prospective safe harbor so they can keep doing the bad acts with impunity.

Fortunately for UK citizens, the UK Parliament has not passed idiotic retroactive safe harbor legislation like the U.S. Congress has. I am, of course, thinking of the vaunted Music Modernization Act (MMA) that drooled its way to a retroactive safe harbor for copyright infringement, a shining example of the triumph of corruption that has yet to be properly challenged in the US on Constitutional grounds. 

There’s nothing like the MMA absurdity in the UK, at least not yet. However, that retroactive safe harbor was not lost on Google, who benefited directly from it. They loved it. They hung it over the mantle next to their other big game trophy, the DMCA. And now they’d like to do it again to complete the triptych of legislative taxidermy.

Because make no mistake–a retroactive safe harbor would be exactly the effect of Google’s TDM exception. Not to mention it would also be a form of retroactive eminent domain, or what the UK analogously calls the compulsory purchase of property. Well…”purchase” might be too strong a word, more like “transfer,” because these people don’t intend to pay for a thing.

The effect of passing Google’s TDM exception would be to take property rights and other personal rights from UK citizens without anything like the level of process or compensation required for compulsory purchase, even when the government requires the sale of private property to another private entity (such as for a railroad right of way or a utility easement).

The government is on very shaky ground with a TDM exception imposed for the benefit of private companies, indeed foreign private companies that can well afford to pay. There would be no government oversight on a case-by-case basis, no proper valuation, and the taking would be for entirely commercial purposes with no public benefit. In the US, it would likely violate the Takings Clause of our Constitution, among other things.

It’s Not Just the Artists

Mr. Collins also makes a very important point that might get lost among the stars–it’s not just the stars that AI is ripping off, it is everyone. As the New York Times story points out (and it seems there are more whistleblowers on this point every day), the AI platforms are hoovering up EVERYTHING that is on the Internet, especially on their affiliated platforms. That includes baby videos, influencers, everything.

This is why it is cultural appropriation on a grand scale, indeed a scale of depravity that we haven’t seen since the Nuremberg Trials. A TDM exception would harm all Britons in one massive offshoring of British culture.

[This post first appeared on MusicTech.Solutions]

CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It: Speaker Update for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We’re pleased to announce additional speakers for the 4th annual Artist Rights Symposium on November 20, this year hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  The symposium is also supported by the Artist Rights Institute and was founded by Dr. David Lowery, Lecturer at the University of Georgia Terry College of Business.

The Symposium has four panels and a lunchtime keynote. Panels will begin at 8:30 am and end by 5 pm, with lunch and refreshments. More details to follow. Contact the Artist Rights Institute for any questions.

Admission is free, but please reserve a spot with Eventbrite; seating is limited! (Eventbrite works best with Firefox)

Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.  Graham will speak around lunchtime.

We have confirmed speakers for another topic! 

CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It

Richard James Burgess, MBE, President & CEO, American Association of Independent Music, New York
Helienne Lindvall, President, European Composer & Songwriter Alliance, London, England
Abby North, President, North Music Group, Los Angeles
Anjula Singh, Chief Financial Officer and Chief Operating Officer, SoundExchange, Washington DC

Moderator:  Christian L. Castle, Esq, Director, Artist Rights Institute, Austin, Texas

Previously confirmed panelists are:

SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

THE TROUBLE WITH TICKETS:  The Economics and Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia
  Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

SHOW ME THE CREATOR – Transparency Requirements for AI Technology: Speaker Update for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We’re pleased to announce more speakers for the 4th annual Artist Rights Symposium on November 20, this year hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  The symposium is also supported by the Artist Rights Institute and was founded by Dr. David Lowery, Lecturer at the University of Georgia Terry College of Business.

The four panels will begin at 8:30 am and end by 5 pm, with lunch and refreshments. More details to follow. Contact the Artist Rights Institute for any questions.

Admission is free, but please reserve a spot with Eventbrite; seating is limited! (Eventbrite works best with Firefox)

Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.  Graham will speak around lunchtime.

We have confirmed speakers for another topic! 

SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

Previously announced:

THE TROUBLE WITH TICKETS:  The Economics and Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia
  Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas