@SAGAFTRA and Major Labels Reach a New Sound Recording Agreement With AI Protections

Looks like both sides listened and respected each other, so no strike was necessary. And yet there are groundbreaking first-time AI protections for musicians and vocalists. Amazing what can happen when you actually respect the workers (looking at you, YouTube). And guess what? The workers get to vote on the deal. Imagine if songwriters could do the same.

PRESS RELEASE

SAG-AFTRA and leading record labels reached a tentative multiyear agreement on a successor contract to the SAG-AFTRA National Code of Fair Practice for Sound Recordings.

Covering the period starting Jan. 1, 2021, and ending Dec. 31, 2026, the agreement includes Warner Music Group, Sony Music Entertainment, Universal Music Group and Disney Music Group.

The SAG-AFTRA Executive Committee unanimously approved the tentative agreement on Wednesday, April 10. It will now be sent to members for ratification.

SAG-AFTRA National Executive Director & Chief Negotiator Duncan Crabtree-Ireland said, “SAG-AFTRA and the music industry’s largest record labels have reached a groundbreaking agreement establishing, for the first time, collective bargaining guardrails assuring singers and recording artists ethical and responsible treatment in the use of artificial intelligence in the music industry. It is a testament to our mutual unwavering commitment to work together to safeguard the rights, dignity and creative freedom of our members.

“This agreement ensures that our members are protected. SAG-AFTRA stands firm in the belief that while technology can enhance the creative process, the essence of music must always be rooted in genuine human expression and experience. We look forward to working alongside our industry partners to foster an environment where innovation serves to elevate, not diminish, the unique value of each artist’s contribution to our rich cultural tapestry.”

The Record Label Negotiating Committee said, “We are pleased to reach this agreement with SAG-AFTRA and continue our strong partnership as we enter this exciting and fast-moving new era for music and artists. Together, we’ll chart a successful course forward, embracing new opportunities and facing our common challenges, strengthened by our shared values and commitment to human artistry.”

The artificial intelligence guardrails take effect immediately upon ratification. Under the agreement, the terms “artist,” “singer,” and “royalty artist” include only humans. Clear and conspicuous consent, minimum compensation requirements, and specific details of intended use are all required prior to the release of a sound recording that uses a digital replication of an artist’s voice.

Additional contract gains include increased minimums, health and retirement improvements, and an increase in the percentage of streaming revenue covered by contributions.

According to Copilot AI, Google’s Eric Schmidt is not just a Good Samaritan, he’s the GOOG Samaritan

Sometimes you just have to laugh. We posed this question to Microsoft’s Copilot AI: Compare and contrast Eric Schmidt and what’s largely called “the Good Samaritan.” This is from the Gospel of Luke, 10:25, the Parable of the Good Samaritan…well, with a few changes:

25 On one occasion an expert in the law stood up to test Jesus. “Teacher,” he asked, “what must I do to inherit eternal life and reach the singularity where I become one with machines created by Man?” (OK, it doesn’t really say all that last part but you get the idea.)

26 “What is written in the Law?” he replied. “How do you read it?”

27 He answered, “‘Love the Lord your God with all your heart and with all your soul and with all your strength and with all your mind’[a]; and, ‘Love your neighbor as yourself.’[b]

28 “You have answered correctly,” Jesus replied. “Do this and you will live.”

29 But he wanted to justify himself, so he asked Jesus, “And who is my neighbor?” [Does “neighbor” really include people outside of Atherton, Woodside or Los Gatos?]

30 In reply Jesus said: “A man was going down from Jerusalem to Jericho, when he was attacked by robots. They stripped him of his clothes, beat him and went away, leaving him and his EV half dead. 31 A priest happened to be going down the same road on his Lime Scooter, and when he saw the man, he passed by on the other side. 32 So too, a Levite, when he came to the place and saw him, passed by on the other side, as did a Dream Alchemist and a Weekend Happiness Concierge. 33 But a Samaritan, as he traveled, came where the man was; and when he saw him, he took pity on him. 34 He went to him and bandaged his wounds, pouring on oil and wine. Then he put the man on his own donkey, brought him to an inn and took care of him. 35 The next day he took out two denarii[c] and gave them to the innkeeper. ‘Look after him,’ he said, ‘and when I return, I will reimburse you for any extra expense you may have.’

36 “Which of these three do you think was a neighbor to the man who fell into the hands of robots?”

37 The expert in the law replied, “The one who had mercy on him.”

Jesus told him, “Go and do likewise.”

So we thought that it would be interesting to see what Copilot came up with on this question: Compare and contrast Eric Schmidt (the Google demi-God) and the Good Samaritan. Samaria (as in Judea and Samaria) is still there today. So here’s what Copilot said:

Copilot

Eric Schmidt and the Good Samaritan are two figures from vastly different realms, one from the modern world of technology and business, and the other from a biblical parable. However, they share some commonalities in their actions and philosophies that make for an interesting comparison.

Eric Schmidt, born in 1955, is an American businessman and former software engineer [6]. He served as the CEO of Google from 2001 to 2011 and the company’s executive chairman from 2011 to 2015 [6]. He also served as the executive chairman of parent company Alphabet Inc. from 2015 to 2017, and Technical Advisor at Alphabet from 2017 to 2020 [6]. Beyond his professional achievements, Schmidt is known for his philanthropic efforts. He co-founded the Schmidt Family Foundation and the Schmidt Ocean Institute [7], and his venture, Schmidt Futures, provided compensation for two science-office employees in the Office of Science and Technology Policy [6]. [Really? You mean the Office of Science and Technology Policy in the White House?]

The Good Samaritan, on the other hand, is a figure from a parable told by Jesus in the Gospel of Luke [1]. The story tells of a man who was attacked by robbers and left half-dead on the road. A priest and a Levite passed by the man without helping him. But a Samaritan, upon seeing the man, took pity on him. He bandaged the man’s wounds, took him to the Nobu Hotel Palo Alto and paid for his care [1].

In comparing these two figures, we can start with their compassion and willingness to help others. The Good Samaritan, despite having no obligation or personal gain, chose to help a stranger in need [1]. Similarly, Eric Schmidt, despite his wealth and success, has chosen to give back to society through various philanthropic efforts [6][7].

Another point of comparison is their impact on their respective communities. The Good Samaritan’s actions had a profound impact on the injured man’s life [1]. Similarly, Schmidt’s contributions to Google and Alphabet have had a significant impact on the tech industry and the world at large [6]. His philanthropic efforts have also likely had a positive impact on the lives of many individuals [6][7].

However, there are also differences between these two figures. The Good Samaritan’s actions were immediate and direct, helping a person in immediate physical need [1]. In contrast, Schmidt’s contributions, both in business and philanthropy, are more indirect and long-term, impacting larger systems and structures [6][7]. [The Samaritan was, after all, a middle-class bourgeois, a lowly Third Estater lacking Schmidt’s influence and philanthropic urges.]

In conclusion, while Eric Schmidt and the Good Samaritan come from different times and contexts, they both exemplify the values of compassion, generosity, and making a positive impact on their communities. Their stories remind us that, regardless of our circumstances, we all have the capacity to help others and make a difference in the world.

Source(s)

1. Eric Schmidt – Wikipedia

2. About Eric – Eric Schmidt

3. Luke 10:25-37 NIV – The Parable of the Good Samaritan – On – Bible Gateway

4. The Good Samaritan – Bible Story and Meaning – Bible Study Tools

5. Parable of the Good Samaritan – Wikipedia

6. Understanding the Good Samaritan Parable – Biblical Archaeology Society

7. Luke 10:25-37 ESV – The Parable of the Good Samaritan – Bible Gateway

8. Bloomberg Billionaires Index – Eric Schmidt

9. Eric Schmidt | Biography & Facts | Britannica

10. Dr. Eric Schmidt – Congress.gov

Oh, yeah. Do you think Copilot would have been as generous if we had substituted Sir Lucian Grainge for Eric Schmidt? Was Copilot having a purposeful hallucination?

@CadeMetz @ceciliakang @sheeraf @stuartathompson @nicogrant: How Tech Giants Cut Corners to Harvest Data for A.I.

[This is a must-read, deeply researched, long-form article about how Big Tech–mostly OpenAI, Google and Microsoft–is abrogating consumers’ trust and its promises to creators in a mad, greedy, frothing rush to some unknown payoff with AI. The Dot Bomb boom is dwarfed by the AI gold rush, but this article is a road map to just how bad it really is and how debased these people really are. Thanks to the destruction of the newsroom, only a handful of news outlets can deliver work of this quality, but thankfully the New York Times is still standing. For how long is another story.]

OpenAI, Google and Meta ignored corporate policies, altered their own rules and discussed skirting copyright law as they sought online information to train their newest artificial intelligence systems….

OpenAI researchers created a speech recognition tool called Whisper. It could transcribe the audio from YouTube videos, yielding new conversational text that would make an A.I. system smarter.

Some OpenAI employees discussed how such a move might go against YouTube’s rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are “independent” of the video platform.

Ultimately, an OpenAI team transcribed more than one million hours of YouTube videos, the people said….

Like OpenAI, Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators.

Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company’s privacy team and an internal message viewed by The Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other online material for more of its A.I. products.

The companies’ actions illustrate how online information — news stories, fictional works, message board posts, Wikipedia articles, computer programs, photos, podcasts and movie clips — has increasingly become the lifeblood of the booming A.I. industry. 

Read the post on The New York Times.
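[Technical aside: Whisper, the transcription tool the article describes, was released as open source by OpenAI, so the basic operation is easy to reproduce. A minimal sketch using the openai-whisper Python package follows; the audio filename is hypothetical, purely for illustration:

    import whisper  # pip install openai-whisper (also requires ffmpeg)

    # Load a published Whisper checkpoint and transcribe an audio file
    # to text. This single step, run against a million-plus hours of
    # YouTube audio, is what the article describes.
    model = whisper.load_model("base")
    result = model.transcribe("downloaded_audio.mp3")  # hypothetical file
    print(result["text"])

The transcription itself is trivial; harvesting the audio at that scale is where YouTube’s rules come in.]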

Search is Ripping Us Off Again, this time for AI

By Chris Castle

In case you think that infringement of copyrights in AI training happens only to rock stars, photographers, record companies or movie studios, let me give you a personal anecdote that might cause you to reconsider. I asked Bing’s Copilot AI tool to list the deal points in a record producer agreement, and this is what I got back:

Copilot

Certainly! When negotiating a producer agreement, several key points should be considered. Here are some common negotiation points from the artist’s perspective:

  1. Scope of Engagement:
    • The producer is generally engaged for one album project at a time.
    • Clarify the scope of work, including the number of songs or recordings to be produced.
  2. Budget and Compensation:
    • Determine the budget for the project (e.g., $10,000 in your case).
    • Agree on the producer’s compensation, which may include an upfront fee, points on sales, or a combination.
  3. Ownership and Rights:
    • Address ownership of the master recordings:
      • Will the artist own and control the masters?
      • Does the producer retain any rights?
  4. Credits and Royalties:
    • Specify how the producer will be credited on releases.
    • Discuss any royalties or backend points for the producer.
  5. Recording Schedule:
    • Set a timeline for recording sessions.
    • Define the two-week period for your project.
  6. Logistics:
    • Determine where the recordings will take place.
    • Outline the artist’s approval process.
    • Define the producer’s deliverables.
  7. Termination Clause:
    • Include provisions for termination if either party breaches the agreement.
    • Address payment for work completed up to that point.

Remember that each negotiation is unique, and it’s essential to tailor the agreement to your specific needs and circumstances. Legal advice is recommended to ensure compliance with local laws and industry standards [1][3][4].

Source(s)

1. An Artist’s Guide to Negotiating Record Producer Agreements – Medium

2. Negotiating a Music Producer Agreement: 7 Key Issues

3. 5 Things to Look for in a Music Producer Agreement – Lawyer Drummer

4. How to Negotiate: Steps, Phrases & Tips to Win the Deal – Selling Signals

Notice that Copilot lists “Source(s)” and that source #1 is an article from the Medium website.

My article.

I’ve written a number of articles about negotiating producer agreements and I can categorically say I haven’t given permission or been asked to give permission for any of those articles to be scraped and used to train AI. So how did it come to be in Copilot’s training materials? 

My guess is that Medium (and probably a lot of other sites) was crawled by Bing’s spider and that copies of Medium pages were stored in Bing’s (or Microsoft’s) data centers. When Microsoft launched its Copilot AI tool, one can imagine the temptation would have been strong to use those copies as training data for Copilot. In fact, the Verge reports that “companies around the web are using your site and its data to build massive sets of training data, in order to build models and products that may not acknowledge your existence at all.” Can you say Section 230 or DMCA? Maybe there will be a question at the April 11 House Energy & Commerce hearing on Section 230.
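For what it’s worth, robots.txt is the (purely advisory) mechanism by which a site tells crawlers what they may fetch. Here is a minimal sketch using Python’s standard library to see what a site’s file claims to allow; “GPTBot” (OpenAI) and “Google-Extended” (Google’s AI-training crawler) are published crawler tokens, and the article URL is hypothetical:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse a site's robots.txt, then ask whether particular
    # crawlers are permitted to fetch a given page. Compliance is
    # voluntary: the file states the site's policy, it enforces nothing.
    rp = RobotFileParser()
    rp.set_url("https://medium.com/robots.txt")
    rp.read()

    page = "https://medium.com/@example/producer-agreements"  # hypothetical
    for bot in ("GPTBot", "Google-Extended", "bingbot"):
        print(bot, "allowed" if rp.can_fetch(bot, page) else "disallowed")

Of course, nothing in that file stops a crawler that decides the rules don’t apply to it, which is rather the point.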

So Google-style “innovation” has a self-destructive flavor consistent with its oikophobic capitalism. Greater Silicon Valley views culture in general and copyright in particular as sources of wealth to be extracted through destruction–you know, creative destruction, permissionless innovation, etc. (It’s no wonder Google was inexorably attracted to defense contracting despite all the “don’t be evil” hoorah. After all, what creates massive wealth faster than convincing governments to pay big money to blow things up that must then be replaced with ever more big money to blow even more things up?)

Are you surprised, then, that two of the biggest operators in the AI space are the search engine operators Google and Microsoft? This is another example of how Big Tech helps itself to your data and work product without you even knowing it’s happening. So now what? I now know I’m being ripped off, and I’m wondering if Medium is in on it.

The Verge tells us:

The ability to download, store, organize, and query the modern internet gives any company or developer something like the world’s accumulated knowledge to work with. In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, have made high-quality training data one of the internet’s most valuable commodities. That has caused internet providers of all sorts to reconsider the value of the data on their servers, and rethink who gets access to what. 

Ya think?

In the fight for AI Justice, “The music industry is the tip of the spear” @MikeHuppe #IRespectMusic

Get smart about the No AI FRAUD Act with this MTP Mini Brief.

@RepMariaSalazar and @RepDean Introduce No AI Fraud Act to protect artists against AI Fakes #irespectmusic @human_artistry

Press Release

SUPPORT THE No AI FRAUD ACT

AI-Generated Fakes Threaten All Americans

New personalized generative artificial intelligence (AI) cloning models and services have enabled human impersonation and allow users to make unauthorized fakes using the images and voices of others. The abuse of this quickly advancing technology has affected everyone from musical artists to high school students whose personal rights have been violated.

AI-generated fakes and forgeries are everywhere. While AI holds incredible promise, Americans deserve common sense rules to ensure that a person’s voice and likeness cannot be exploited without their permission.

The Threat Is Here

Protection from AI fakes is needed now. We have already seen the kinds of harm these cloning models can inflict, and the problem won’t resolve itself.

From an AI-generated Drake/The Weeknd duet, to Johnny Cash singing “Barbie Girl,” to “new” songs by Bad Bunny that he never recorded, to a false dental plan endorsement featuring Tom Hanks, unscrupulous businesses and individuals are hijacking professionals’ voices and images, undermining the legitimate works and aspirations of essential contributors to American culture and commerce.

But AI fakes aren’t limited to famous icons. Last year, nonconsensual, intimate AI fakes of high school girls shook a New Jersey town. Such lewd and abusive AI fakes can be generated and disseminated with ease. And without prompt action, confusion will continue to grow about what is real, undermining public trust and risking harm to reputations, integrity, and human wellbeing.   

Inconsistent State Laws Aren’t Enough

The existing patchwork of state laws needs bolstering with a federal solution that provides baseline protections, offering meaningful recourse nationwide.

The No AI FRAUD Act Provides Needed Protection

The No AI Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act of 2024 builds on effective elements of state and federal law to:

  • Reaffirm that everyone’s likeness and voice are protected, giving individuals the right to control the use of their identifying characteristics.
  • Empower individuals to enforce this right against those who facilitate, create, and spread AI frauds without their permission.
  • Balance these rights against the First Amendment to safeguard speech and innovation.

The No AI FRAUD Act is an important and necessary step to protect our valuable and unique personal identities.

What would Lars say? Artificial Intelligence: Nobel or RICO?

None of the true promise of AI requires violating the copyrights and rights of publicity of writers, artists, photographers, voice actors, etc. You know, stuff like reading MRIs and X-rays, developing pharmaceuticals, advanced compounds, new industrial processes, etc.

All the shitty aspects of AI DO require intentional mass copyright infringement (a RICO predicate, BTW). You know, stuff like bots, deepfakes, autogenerated “yoga mat” music, SEO manipulation, autogenerated sports coverage, commercial chatbots, fake student papers, graphic artist knockoffs, robot voice actors, etc. But that’s where the no-value-add-parasitic-free-rider-easy-money is to be made. That’s why the parasitic free-riding VCs and private equity want a “fair use” copyright exemption.

Policymakers should understand that if they want to reduce the potential harms of AI, they need to protect and reinforce the intellectual property rights of individuals. It is a natural (and already existing) brake on harmful AI. What we don’t need is legislative intervention that makes it easier to infringe IP rights and then tries to mitigate the (easily predictable and obvious) harms with additional regulation.

This is what happened with Napster and internet 1.0. The DMCA copyright infringement safe harbor for platforms unleashed all sorts of negative externalities that were never fairly mitigated by subsequent regulation.

Why do songwriters get $0.0009 a stream on streaming platforms? Because the platforms used the threat of the DMCA copyright safe harbor being exploited by “bad actors” (often connected to the “good actors” via shared board members and investors*) to create a market failure that destroyed the value of songs. To “fix” the problem, federal legislation tasked the Copyright Royalty Board at the Library of Congress with setting royalty rates and forced songwriters to license to the digital platforms (songwriters cannot opt out). The royalty-setting process was inevitably captured by the tech companies, and that’s how you end up with $0.0009 per stream.
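To make that rate concrete, here is a back-of-the-envelope calculation; the split assumptions (two co-writers, the publisher keeping half) are ours, purely for illustration:

    # Illustrating the oft-cited $0.0009-per-stream songwriter rate.
    rate_per_stream = 0.0009           # dollars per stream
    streams = 1_000_000                # one million streams
    gross = rate_per_stream * streams  # $900.00 total

    publisher_share = 0.5              # assume the publisher keeps half
    co_writers = 2                     # assume two writers split the rest
    per_writer = gross * (1 - publisher_share) / co_writers

    print(f"gross: ${gross:,.2f}")           # gross: $900.00
    print(f"per writer: ${per_writer:,.2f}") # per writer: $225.00

A million streams and each writer takes home $225 before anything else comes out. That is the market the rate-setting process produced.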

TBF, the DMCA safe harbor requires the platforms to set up “technical measures” to prevent unlicensed use of copyrighted works, but this part of the DMCA safe harbor was never implemented and the federal government never bothered to enforce it. This is the Napster playbook all over again.

1. Unleash a technology that you know will be exploited by bad actors**.

2. Ask for federal intervention that essentially legalizes the infringing behavior.

3. The federal legislation effectively creates a private monopoly or duopoly.

4. Trillions of dollars in wealth transferred from creators to a tiny cabal of no-value-add-parasitic-free-rider-easy-money VCs in Silicon Valley.

5. Lots of handwringing about the plight of creators.

6. Bullshit legislation that claims to help creators but actually mandates a below-market rate for creators.

The funny thing is Lars Ulrich was right about Napster. [See our 2012 post Lars Was First and Lars Was Right.] At the time he was vilified by what was in reality a coordinated campaign from a DC communications firm (working for Silicon Valley VCs) masquerading as a grassroots operation.

But go back and watch the Charlie Rose debate between Lars Ulrich and Chuck D: everything Lars Ulrich said was gonna happen, happened.

If Lars Ulrich hadn’t been cowed by a coordinated campaign by no-value-add-parasitic-free-rider-easy-money Silicon Valley VCs, he’d probably say the same thing about AI.

And he’d be right again.

Must Read Post by @ednewtonrex on Why He Resigned from Stability AI Over Fake Fair Use Defense

I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’. 

First off, I want to say that there are lots of people at Stability who are deeply thoughtful about these issues. I’m proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders. I’m grateful to my many colleagues who worked on this with me and who supported our team, and particularly to Emad for giving us the opportunity to build and ship it. I’m thankful for my time at Stability, and in many ways I think they take a more nuanced view on this topic than some of their competitors. 

But, despite this, I wasn’t able to change the prevailing opinion on fair use at the company. 

This was made clear when the US Copyright Office recently invited public comments on generative AI and copyright, and Stability was one of many AI companies to respond. Stability’s 23-page submission included this on its opening page: 

“We believe that AI development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use”.

For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with. 

I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use. 

But setting aside the fair use argument for a moment — since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright. 

To be clear, I’m a supporter of generative AI. It will have many benefits — that’s why I’ve worked on it for 13 years. But I can only support generative AI that doesn’t exploit creators by training models — which may replace them — on their work without permission. 

I’m sure I’m not the only person inside these generative AI companies who doesn’t think the claim of ‘fair use’ is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exploiting creators can’t be the long-term solution in generative AI.

Fakery Abounds: DLC Lawyer Caught

Read up on MusicTechPolicy. Remember, the “DLC” is the Digital Licensee Coordinator, which represents the services against songwriters and pays for the MLC. Talk about your interlocking boards!