In the fight for AI Justice, “The music industry is the tip of the spear” @MikeHuppe #IRespectMusic

Get smart about the No AI FRAUD Act with this MTP Mini Brief.

@RepMariaSalazar and @RepDean Introduce No AI Fraud Act to protect artists against AI Fakes #irespectmusic @human_artistry

Press Release

SUPPORT THE No AI FRAUD ACT

AI-Generated Fakes Threaten All Americans

New personalized generative artificial intelligence (AI) cloning models and services have enabled human impersonation and allow users to make unauthorized fakes using the images and voices of others. The abuse of this quickly advancing technology has affected everyone from musical artists to high school students whose personal rights have been violated.

AI-generated fakes and forgeries are everywhere. While AI holds incredible promise, Americans deserve common sense rules to ensure that a person’s voice and likeness cannot be exploited without their permission.

The Threat Is Here

Protection from AI fakes is needed now. We have already seen the kinds of harm these cloning models can inflict, and the problem won’t resolve itself.

From an AI-generated Drake/The Weeknd duet, to Johnny Cash singing “Barbie Girl,” to “new” songs by Bad Bunny that he never recorded, to a false dental plan endorsement featuring Tom Hanks, unscrupulous businesses and individuals are hijacking professionals’ voices and images, undermining the legitimate works and aspirations of essential contributors to American culture and commerce.

But AI fakes aren’t limited to famous icons. Last year, nonconsensual, intimate AI fakes of high school girls shook a New Jersey town. Such lewd and abusive AI fakes can be generated and disseminated with ease. And without prompt action, confusion will continue to grow about what is real, undermining public trust and risking harm to reputations, integrity, and human wellbeing.   

Inconsistent State Laws Aren’t Enough

The existing patchwork of state laws needs bolstering with a federal solution that provides baseline protections, offering meaningful recourse nationwide.

The No AI FRAUD Act Provides Needed Protection

The No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act of 2024 builds on effective elements of state and federal law to:

  • Reaffirm that everyone’s likeness and voice are protected, giving individuals the right to control the use of their identifying characteristics.
  • Empower individuals to enforce this right against those who facilitate, create, and spread AI frauds without their permission.
  • Balance these rights with First Amendment protections to safeguard speech and innovation.

The No AI FRAUD Act is an important and necessary step to protect our valuable and unique personal identities.

What would Lars say? Artificial Intelligence: Nobel or RICO?

None of the true promise of AI requires violating the copyrights and rights of publicity of writers, artists, photographers, voice actors, and other creators. You know, stuff like reading MRIs and X-rays, developing pharmaceuticals, advanced compounds, new industrial processes, etc.

All the shitty aspects of AI DO require intentional mass copyright infringement (a RICO predicate, BTW). You know, stuff like bots, deep fakes, autogenerated “yoga mat” music, SEO manipulation, autogenerated sports coverage, commercial chat bots, fake student papers, graphic artist knockoffs, robot voice actors, etc. But that’s where the no-value-add-parasitic-free-rider-easy-money is to be made. That’s why the parasitic free-riding VCs and private equity want to get a “fair use” copyright exemption.

Policy makers should understand that if they want to reduce the potential harms of AI they need to protect and reinforce intellectual property rights of individuals. It is a natural (and already existing) brake on harmful AI. What we don’t need is legislative intervention that makes it easier to infringe IP rights and then try to mitigate (the easily predictable and obvious) harms with additional regulation.

This is what happened with Napster and internet 1.0. The DMCA copyright infringement safe harbor for platforms unleashed all sorts of negative externalities that were never fairly mitigated by subsequent regulation.

Why do songwriters get $0.0009 a stream on streaming platforms? Because the platforms used the threat of the DMCA copyright safe harbor by “bad actors” (often connected to the “good actors” via shared board members and investors*) to create a market failure that destroyed the value of songs. To “fix” the problem, federal legislation tasked the Copyright Royalty Board in the Library of Congress with setting royalty rates and forced songwriters to license to the digital platforms (songwriters cannot opt out). The royalty-setting process was inevitably captured by the tech companies, and that’s how you end up with $0.0009 per stream.
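To put that rate in concrete terms, here is a quick back-of-the-envelope sketch. The $0.0009-per-stream figure is the one cited above; the $15,000 annual income target is a hypothetical round number, not a statutory figure.

```python
import math

RATE_PER_STREAM = 0.0009  # dollars per stream, the figure cited above

def streams_needed(target_dollars: float, rate: float = RATE_PER_STREAM) -> int:
    """How many streams a songwriter needs to gross a target amount."""
    return math.ceil(target_dollars / rate)

# A million streams grosses $900, before any splits with co-writers
# or publishers.
million_stream_gross = round(1_000_000 * RATE_PER_STREAM, 2)

# Grossing a hypothetical $15,000 a year takes roughly 16.7 million streams.
streams_for_15k = streams_needed(15_000)
```

The scale is the point: at this rate, a song needs tens of millions of streams before a songwriter sees a modest annual income, and that is before the gross is divided among co-writers and publishers.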

TBF, the DMCA safe harbor requires the platforms to set up “technical measures” to prevent unlicensed use of copyrighted works, but this part of the safe harbor was never implemented and the federal government never bothered to enforce it. This is the Napster playbook all over again.

1. Unleash a technology that you know will be exploited by bad actors**.

2. Ask for federal intervention that essentially legalizes the infringing behavior.

3. The federal legislation effectively creates private monopoly or duopoly.

4. Trillions of dollars in wealth transferred from creators to a tiny cabal of no-value-add-parasitic-free-rider-easy-money VCs in Silicon Valley.

5. Lots of handwringing about the plight of creators.

6. Bullshit legislation that claims to help creators but actually mandates a below market rate for creators.

The funny thing is Lars Ulrich was right about Napster. [See our 2012 post Lars Was First and Lars Was Right.] At the time he was vilified by what was in reality a coordinated campaign run by a DC communications firm (working for Silicon Valley VCs) that masqueraded as a grassroots operation.

But go back and watch the Charlie Rose debate between Lars Ulrich and Chuck D: everything Lars Ulrich said was gonna happen, happened.

If Lars Ulrich hadn’t been cowed by a coordinated campaign by no-value-add-parasitic-free-rider-easy-money Silicon Valley VCs, he’d probably say the same thing about AI.

And he’d be right again.

Must Read Post by @ednewtonrex on Why He Resigned from Stability AI Over Fake Fair Use Defense

I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’. 

First off, I want to say that there are lots of people at Stability who are deeply thoughtful about these issues. I’m proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders. I’m grateful to my many colleagues who worked on this with me and who supported our team, and particularly to Emad for giving us the opportunity to build and ship it. I’m thankful for my time at Stability, and in many ways I think they take a more nuanced view on this topic than some of their competitors. 

But, despite this, I wasn’t able to change the prevailing opinion on fair use at the company. 

This was made clear when the US Copyright Office recently invited public comments on generative AI and copyright, and Stability was one of many AI companies to respond. Stability’s 23-page submission included this on its opening page: 

“We believe that Al development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use”. 

For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with. 

I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use. 

But setting aside the fair use argument for a moment — since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright. 

To be clear, I’m a supporter of generative AI. It will have many benefits — that’s why I’ve worked on it for 13 years. But I can only support generative AI that doesn’t exploit creators by training models — which may replace them — on their work without permission. 

I’m sure I’m not the only person inside these generative AI companies who doesn’t think the claim of ‘fair use’ is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exp…

Fakery Abounds: DLC Lawyer Caught

Read up on MusicTechPolicy. Remember the “DLC” is the Digital Licensee Coordinator who represents the services against songwriters and pays for the MLC. Talk about your interlocking boards!

Five Points for Potential AI Framework Agreements

By Chris Castle

This post first appeared on MusicTechPolicy

When you see Big Tech start to make Newspeak noises about wanting to license creative works for artificial intelligence, it’s well to remember a couple facts about how they treat people, business practices that they don’t talk about at parties. Or to Congress.

Take their supply chain, particularly their manufacturing supply chain in China, where some or all of their products use slave labor. And the cobalt that goes into every battery-powered device like your smartphone is obtained through the equally Newspeak “artisanal mining,” otherwise known as impossibly poor children mining cobalt by clawing it out of the dirt with their bare hands. You know, “artisanal.” (Read Cobalt Red by Siddharth Kara for that story.) Not to mention the grotesque and parasitic waste of electricity and the resources that provide it, whether they are crowding out public investment in renewables or driving coal-powered generators. They don’t talk about it because they feel entitled to all of it, which is to be expected from that feeder school for the Silicon Valley elites built with blood money from the Central Pacific Railroad.

So when you sit down at the negotiating table with these people, this is who they really are. That realization tells you a few things, but it mainly tells you they simply cannot be trusted in either life choices or in business choices.

Universal has taken a real leadership role in the AI negotiations that has both respected their artists and songwriters and given teeth to the principles of the Human Artistry Campaign. First of all, the company has made it clear that they are going to support their artists and songwriters in having a meaningful seat at the table. They will not send their artists to the charnel house. The only artists who participate will be the artists who decide to participate: opt-in rather than Google’s preferred “opt-out” structure, which relies on the abuse of various safe harbors at scale.

It appears that until such time as both the artists and songwriters and Universal are comfortable with the integrity of the creative and business model of YouTube’s AI music suite of tools, there’s no deal. Negotiations presumably will continue, so there may at least be a commercial framework.

To that end, here are five points that might prove useful.

  1. Artists and songwriters need to be at the table: One takeaway from the frozen mechanicals experience is how necessary it is for the creators to be included–not through an organization but as actual individuals who speak for themselves and are not influenced by lobbyists. Universal has proven that this is possible. This is a huge advancement in label-artist relations and publisher-writer relations, particularly because it’s obvious from the creators who stepped forward that these are articulate independent thinkers who are not going to toe the party line. That is the whole idea. If you don’t trust your artists and writers enough to give them freedom to speak their minds, then let’s face it–there’s something wrong with your business model.
  2. All AI licenses should be opt in: Most of YouTube’s many artist relations issues arise from artists not having the right and ability to withhold their work from whatever the platform is. This is particularly true with UGC and advertising supported platforms. When you have poured out your soul in a recording that ends up with ads for drugs or miracle hair replacement treatments, it’s deflating and if anyone asked for approval, you’d probably decline. Which is why you negotiate marketing restrictions that prevent your music being used in advertising.
  3. No blind check deals and no “big pool” royalties: We haven’t gotten to the royalty rates yet, but there will be riots in the streets if anyone tries to perpetuate YouTube-style accountings, the grotesquely unfair TikTok blind check deals or “big pool” market centric royalties. AI gives us all a chance to get it right and build a new system that is artist centric. It’s encouraging that Lucian Grainge’s blog post announcing the relationship with YouTube is entitled “An artist centric approach to AI innovation” which is consistent with his prior statements about making streaming royalties more fair.
  4. Ability to track and account is a precondition: It should go without saying that in order to have meaningful royalty accounting, the service must have the ability to track and account. This is especially challenging in AI given the “training” issues. I will be pleasantly shocked if the Google engineers designing the music AI tools have not entirely ignored tracking and accounting, which they typically have viewed as a bug, not a feature. This is what gives rise to the blind check deals and other unworkable approaches, which are most definitely not “artist centric.” Accordingly, the ability to issue per-work reports is essential.
  5. Audits should be much more frequent: This new product is a chance to revisit the standard approaches to auditing which have unfortunately become perpetuated in digital deals and most prominently in the Music Modernization Act (Title I). There is not much difference between the MMA audit rights and the audit clause from a 30 year old record deal notwithstanding the vast difference in commerce between the two. With AI, not only have the DSPs blown up the album to a commercial singles world, they are now trying to blow up the single to mind-numbing fragmentation. Potentially, this world will be like selling stems. This ushers in a whole new need for minimum viable data laws and enforcement for using standard identifiers.
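The per-work reporting precondition in point 4 can be sketched in a few lines. This is a hypothetical illustration, not any actual DSP schema: the ISRC field and the play counts are assumptions standing in for whatever standard identifiers a real service would carry.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class UsageEvent:
    """One raw usage log entry. Field names are hypothetical."""
    isrc: str   # standard recording identifier for the work used
    plays: int  # number of plays (or AI-tool uses) in this entry

def per_work_report(events: list[UsageEvent]) -> dict[str, int]:
    """Aggregate raw usage events into per-work totals, the minimum
    data a service must retain to account on a per-work basis rather
    than from an untraceable 'big pool'."""
    totals: dict[str, int] = defaultdict(int)
    for event in events:
        totals[event.isrc] += event.plays
    return dict(totals)
```

The design point is that per-identifier attribution has to be captured at ingestion; a “big pool” accounting throws away exactly this breakdown, and it cannot be reconstructed after the fact.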

There will be many other issues to address, but I think if we don’t address these key points we’ll find ourselves to be artisanal workers scratching out a living for ChatGPT. 

Press Release: Universal and YouTube Announce AI Music Principles Consistent with Human Artistry Campaign and Artist Advisors

NEW YORK, Aug. 21, 2023 /PRNewswire/ — Today YouTube published a first ever set of AI music principles and launched the YouTube Music AI Incubator, kicking off with artists, songwriters and producers from Universal Music Group. YouTube’s three fundamental AI music principles are rooted in its commitment to collaborate with the music industry alongside bold and responsible innovation in the space. 

YouTube CEO, Neal Mohan, shared the platform’s AI music principles and his vision for how the framework will enhance creative expression while also protecting artists on the platform. The principles include:

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners. As generative AI unlocks ambitious new forms of creativity, YouTube and our partners across the music industry agree to build on our long collaborative history and responsibly embrace this rapidly advancing field.  Our goal is to partner with the music industry to empower creativity in a way that enhances our joint pursuit of responsible innovation. 
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate. We’re continuing our strong track record of protecting the creative work of artists on YouTube. We’ve made massive investments over the years in the systems that help balance the interests of copyright holders with those of the creative community on YouTube.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI. We spent years investing in the policies and trust and safety teams that help protect the YouTube community, and we’re also applying these safeguards to AI-generated content. Generative AI systems may amplify current challenges like trademark and copyright abuse, misinformation, spam, and more. But AI can also be used to identify this sort of content, and we’ll continue to invest in the AI-powered technology that helps us protect our community of viewers, creators, artists and songwriters–from Content ID to policies and detection and enforcement systems that keep our platform safe behind the scenes. And we commit to scaling this work even further. 

In a rare guest YouTube blog Universal Music Group Chairman and CEO, Sir Lucian Grainge – who helped shape the principles – shared his vision for an artist centric approach to generative AI and how partnering with YouTube would best position the music industry for success as this technology continues to develop. Excerpts from the blog post include: 

  • “Our challenge and opportunity as an industry is to establish effective tools, incentives and rewards – as well as rules of the road – that enable us to limit AI’s potential downside while promoting its promising upside. If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways.”
  • “Our enduring faith in human creativity is the bedrock of Universal Music Group’s collaboration with YouTube on the future of AI. Central to our collective vision is taking steps to build a safe, responsible and profitable ecosystem of music and video—one where artists and songwriters have the ability to maintain their creative integrity, their power to choose, and to be compensated fairly.”
  • “Today, our partnership is building on that foundation with a shared commitment to lead responsibly, as outlined in YouTube’s AI principles, where Artificial Intelligence is built to empower human creativity, and not the other way around.  AI will never replace human creativity because it will always lack the essential spark that drives the most talented artists to do their best work, which is intention. From Mozart to The Beatles to Taylor Swift, genius is never random.” 

Today’s announcement also introduced YouTube’s AI Music Incubator, a program that will bring together some of today’s most innovative artists, songwriters, and producers to help inform YouTube’s approach to generative AI in music. The incubator will kick off with a genre-spanning cohort of creatives from Universal Music Group that includes Anitta, Björn Ulvaeus, d4vd, Don Was, Juanes, Louis Bell, Max Richter, Rodney Jerkins, Rosanne Cash, Ryan Tedder, Yo Gotti, and the Estate of Frank Sinatra, amongst others.

  • Björn Ulvaeus shares: “While some may find my decision controversial, I’ve joined this group with an open mind and purely out of curiosity about how an AI model works and what it could be capable of in a creative process. I believe that the more I understand, the better equipped I’ll be to advocate for and to help protect the rights of my fellow human creators.”
  • Juanes shares: “Music is fundamental to the human experience – culturally and personally. For artists, our music is part of who we are. Given music’s role, artists must play a central role in helping to shape the future of this technology.  I’m looking forward to working with Google and YouTube as part of this influential group of UMG artists to assure that AI develops responsibly as a tool to empower artists and that it is used respectfully and ethically in ways that amplify human musical expression for generations to come.”
  • Max Richter shares: “Like every new technology, AI brings with it opportunities, but it also raises profound challenges for the creative community. The tech world and the music distribution ecosystem are quickly evolving to embrace this transformative technology and, unless artists are part of this process, there is no way to ensure that our interests will be taken into account. We have to be in this conversation, or our voices won’t be heard. Therefore, I’m very happy to be part of the ‘artist incubator’ which will allow me to advocate for the interests of the creative community in the applications of AI to music and music distribution.”

Neal Mohan Blog : https://blog.youtube/inside-youtube/partnering-with-the-music-industry-on-ai/
Sir Lucian Grainge Guest Blog: https://blog.youtube/news-and-events/an-artist-centric-approach-to-ai-innovation/

@tinadaunt: Universal Music Exec Jeff Harleston Calls On Senate to Regulate AI: ‘Ensure Creators Are Respected and Protected’

Companies using artificial intelligence software are shamelessly ripping off artists from film and music, and it will get worse if not regulated, members of the entertainment industry told U.S. Senators at a hearing Wednesday.

“AI in the service of artists and creativity can be a very, very good thing,” executive vice president of business and legal affairs for Universal Music Group Jeffrey Harleston said. “But AI that uses or worse yet appropriates the work of these artists, their name, their image, their likeness, their voice, without authorization, without consent, simply is not a good thing. Congress needs to establish rules that ensure creators are respected and protected.”

Read the post on The Wrap

Jeff Harleston also said in his statement:

Long before an AI-generated recording imitating Drake and The Weeknd – both Universal Music artists – went viral and captured the attention of press and policymakers, UMG has been thinking about artificial intelligence. One of our companies, Ingrooves, has three patents in AI to assist with marketing independent artists. And AI has long been used as a tool in the studio: For example, Apple Logic Pro X to generate drum tracks, or Captain Plugins to generate chord progressions. We also use AI regularly as a tool to assist in creating Dolby Atmos immersive audio music. It’s a great technology when employed responsibly – and one that we and our artists use. 

However, we are before you today because generative AI is raising fundamental issues of responsibility in the creative industries and copyright space. Each day, troubling examples emerge. We know some generative AI engines have been trained on our copyrighted library of recordings and lyrics, image generators have been trained on our copyrighted cover art, and music generators have been trained on our copyrighted music, all without authorization. 

We have a robust digital music marketplace, and UMG has hundreds of legitimate partners who’ve worked with us to bring music to fans in a myriad of ways. Those companies and services properly obtained the rights they need to operate from UMG, or from the associated record labels and publishers. So, it’s unfathomable to think AI companies and developers think the rules and laws that apply to other companies and developers don’t apply to them. 

Read Jeff’s full statement here

Science Journal Nature Bars AI Generated Illustrations

Well, that was only a matter of time. Nature, one of the leading scientific journals in the world, has announced that it will not allow the use of generative AI images or video. (Thanks to Cynthia Turner for the catch.)

I must say that the journal’s rationale for rejecting this latest stop in Silicon Valley’s newest bubble is a pretty concise statement of the criminality of the bubble riders:

Why are we disallowing the use of generative AI in visual content? Ultimately, it is a question of integrity. The process of publishing — as far as both science and art are concerned — is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen.

Then there’s attribution: when existing work is used or cited, it must be attributed. This is a core principle of science and art, and generative AI tools do not conform to this expectation. (Not to mention the Universal Declaration of Human Rights (article 27(2)) among other human rights instruments.)

Consent and permission are also factors. These must be obtained if, for example, people are being identified or the intellectual property of artists and illustrators is involved. Again, common applications of generative AI fail these tests.

Generative AI systems are being trained on images for which no efforts have been made to identify the source. Copyright-protected works are routinely being used to train generative AI without appropriate permissions. In some cases, privacy is also being violated — for example, when generative AI systems create what look like photographs or videos of people without their consent. In addition to privacy concerns, the ease with which these ‘deepfakes’ can be created is accelerating the spread of false information.

So that about sums it up. I would add that what Silicon Valley likes is the free-riding profit that is built into failing to honor each of Nature’s objections, aka what economists and tort lawyers call negative externalities.