@FTC to AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive

An important position paper from the Federal Trade Commission about AI:

You may have heard that “data is the new oil”—in other words, data is the critical raw material that drives innovation in tech and business, and like oil, it must be collected at a massive scale and then refined in order to be useful. And there is perhaps no data refinery as large-capacity and as data-hungry as AI. 

Companies developing AI products, as we have noted, possess a continuous appetite for more and newer data, and they may find that the readiest source of crude data are their own userbases. But many of these companies also have privacy and data security policies in place to protect users’ information. These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy….

It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy. (emphasis in original)…

The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the “rules of the game” on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.

Read the post on FTC

Does it have an index? @LizPelly’s Must-Read Investigation in “Mood Machine” Raises Deep Questions About Spotify’s Financial Integrity

Spotify Playlist Editors

By Chris Castle

If you don’t know of Liz Pelly, I predict you soon will. I’ve been a fan for years, but I really think that her latest work, Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, coming in January from One Signal Publishers, an imprint of Atria Books at Simon & Schuster, will be one of those before-and-after books, meaning the world you knew before reading the book is radically different from the world you know afterward. It is that insightful. And incriminating.

We are fortunate that Ms. Pelly has allowed Harper’s to excerpt Mood Machine in the current issue. I want to suggest that if you are a musician or care about musicians, or if you are at a record label or music publisher, or even if you are in the business of investing in music, you likely have nothing more important to do today than read this taste of the future. 

The essence of what Ms. Pelly has identified is the intentional and abiding manipulation of Spotify’s corporate playlists. She explains what called her to write Mood Machine:

Spotify, the rumor had it, was filling its most popular playlists with stock music attributed to pseudonymous musicians—variously called ghost or fake artists—presumably in an effort to reduce its royalty payouts. Some even speculated that Spotify might be making the tracks itself. At a time when playlists created by the company were becoming crucial sources of revenue for independent artists and labels, this was a troubling allegation.

What you will marvel at is the elaborate means Ms. Pelly has discovered–through dogged reporting worthy of the great deadline artists–that Spotify undertook to deceive users into believing that playlists were organic. And, it must be said, to deceive investors, too. As she tells us:

For years, I referred to the names that would pop up on these playlists simply as “mystery viral artists.” Such artists often had millions of streams on Spotify and pride of place on the company’s own mood-themed playlists, which were compiled by a team of in-house curators. And they often had Spotify’s verified-artist badge. But they were clearly fake. Their “labels” were frequently listed as stock-music companies like Epidemic, and their profiles included generic, possibly AI-generated imagery, often with no artist biographies or links to websites. Google searches came up empty.

You really must read Ms. Pelly’s excerpt in Harper’s for the story…and did I say the book itself is available for preorder now?

All this background manipulation–undisclosed and furtive manipulation by a global network of confederates–was happening while Spotify devoted substantial resources worthy of a state security operation to programming music in its own proprietary playlists. That programmed music was not only trivial and, to be kind, lowbrow, but also came at essentially no cost to Spotify. It’s not just that it was free; it was free in a particular way. In Silicon Valley-speak, Ms. Pelly has discovered how Spotify disaggregated the musician from the value chain.

What she has uncovered has breathtaking implications, particularly with the concomitant rise of artificial intelligence and its assault on creators. The UK Parliament’s House of Commons Digital, Culture, Media & Sport Committee’s Inquiry into the Economics of Music Streaming quoted me as saying “If a highly trained soloist views getting included on a Spotify “Sleep” playlist as a career booster, something is really wrong.” That sentiment clearly resonated with the Committee, but it was my feeble attempt at calling government’s attention to the then-only-suspected playlist grift that was going on at Spotify. Ms. Pelly’s book is a solid indictment–there’s that word again–of Spotify’s wild-eyed, drooling greed and public deception.

Ms. Pelly’s work raises serious questions about streaming payola and its fellow-travelers in the annals of crime. The last time this happened in the music business was with Fred Dannen’s 1991 book called Hit Men that blew the lid off of radio payola. That book also sent record executives running to unfamiliar places called “book stores” but for a particular reason. They weren’t running to read the book. They already knew the story, sometimes all too well. They were running to see if their name was in the index.

As with the misguided iHeart and Pandora “steering agreements” that nobody ever investigated and that preceded mainstream streaming manipulation, it’s worth investigating whether Spotify’s fakery actually rises to the level of a kind of payola or other prosecutable offense. As the noted broadcasting lawyer David Oxenford observed before the rise of Spotify:

The payola statute, 47 USC Section 508, applies to radio stations and their employees, so by its terms it does not apply to Internet radio (at least to the extent that Internet Radio is not transmitted by radio waves – we’ll ignore questions of whether Internet radio transmitted by wi-fi, WiMax or cellular technology might be considered a “radio” service for purposes of this statute). But that does not end the inquiry. Note that neither the prosecutions brought by Eliot Spitzer in New York state a few years ago nor the prosecution of legendary disc jockey Alan Freed in the 1950s were brought under the payola statute. Instead, both were based on state law commercial bribery statutes on the theory that improper payments were being received for a commercial advantage. Such statutes are in no way limited to radio, but can apply to any business. Thus, Internet radio stations would need to be concerned.

Ms. Pelly’s investigative work raises serious questions of its own about the corrosive effects of fake playlists on the music community including musicians and songwriters. She also raises equally serious questions about Spotify’s financial reporting obligations as a public company.

For example, I suspect that if Spotify were found to be using deception to boost certain recordings on its proprietary playlists without disclosing this to the public, it could potentially raise issues under securities laws, including the Sarbanes-Oxley Act (SOX). SOX requires companies to maintain accurate financial records and disclose material information that could affect investors’ decisions.

Deceptive practices that mislead investors about the company’s performance or business practices could be considered a violation of SOX. Additionally, such actions could lead to investigations by regulatory bodies like the Securities and Exchange Commission (SEC) and potential legal consequences.

Publicly traded companies like Spotify are required to disclose “risk factors” in their public filings: potential events that could significantly impact the company’s business, financial condition, or operations. Ms. Pelly’s reporting raises issues that likely should be addressed in a risk factor. Imagine that risk factor in Spotify’s next SEC filing. It might read something like this:


Risk Factor: Potential Legal and Regulatory Actions

Spotify is currently under investigation for alleged deceptive practices related to the manipulation of Spotify’s proprietary playlists. If these allegations are substantiated, Spotify could face significant legal and regulatory actions, including fines, penalties, and enforcement actions by regulatory bodies such as the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC). Such actions could result in substantial financial liabilities, damage to our reputation, and a loss of user trust, which could adversely affect our business operations and financial performance.


[A version of this post first appeared in MusicTech.Solutions]

DMCA Take Two: UK Government is to Propose Death Blow Opt-Out for AI Training

Americans are freedom loving people, and nothing says freedom like getting away with it.
Long Long Time, written by Guy Forsyth

Big Tech is jamming another safe harbor boondoggle through another government, this time for artificial intelligence. The defining feature of the DMCA scam is every artist in the known universe having to single-handedly monitor the entire Internet to catch each instance of theft in the act. Once caught, artists have to send a DMCA notice on a case-by-case basis, and then overcome what is 9 times out of 10 a BS counternotification. Then if they disagree with the BS counternotification, artists are faced with having to file a federal copyright infringement lawsuit, which they don’t file because they can’t afford it.

And so it goes.

This is what an “opt-out” looks like. We have seen this movie before and we know how it ends–it’s called getting away with it. Let us be very clear with lawmakers: Notice and takedown and “opt out” is bullshit. It has never worked and has imposed a phenomenal cost on the artist community to the point that many if not most artists have just given up. The Future of Music Coalition and A2IM surveyed their members and determined that over half don’t even bother to look anymore because they can’t afford to run the search. The next largest group give up because they get no response from the notices.

Let’s understand–every time an artist gives up even looking for infringers, that’s a win for Big Tech. That’s why year after year, there are over a billion DMCA notices sent to a variety of infringers.

Ask yourself in all honesty, are you surprised? What head up the ass buffoon would ever think that an opt out would work? Unless the plan was to let Big Tech run wild and give both the biggest corporations in commercial history and the lawmakers a big fig leaf to cover up the theft?

That same approach is rearing its head again in both the US Congress and the UK. But this time it is being applied to artificial intelligence training and outputs. This is stark raving madness, drooling idiocy. At least with the DMCA an artist could look for an actual copy of their works that could be found by text-based search, audio fingerprints or just listening.

With AI, the whole point is to disguise the underlying work used to train the AI. The AI platform operator knows which works it used and which sites it scraped, and has other ways to identify the infringed works. When sued, these operators have refused to disclose the training materials because the sources of those materials are supposedly a trade secret and confidential.

Once a work is ingested into the AI, the output is also purposely distorted from the original. Again, impossible to conclusively identify. So what exactly are you opting out of? To whom do you send your little notice?

This entire opt-out idea is through the looking glass into the upside down world. Yet it is true.

The most current manifestation of this insanity is the UK government’s intention to pass legislation that would force artists to use an opt-out model, possibly on a work-by-work basis. And the worst part is that somehow they have been led to think that an opt-out is a protection for artists.

Orwellian.

Fortunately the UK government may seek public comment on this opt-out proposal. We will keep you posted on what the UK government actually proposes and how you can comment.

In the meantime, if you live in the UK, it’s not too early to contact your MP and ask them what the hell is going on. You may want to ask them why you can call the police when your car is being stolen but there’s nobody to call when your life’s work is being stolen. Particularly when the government protects the thieves.

It’s the Stock, Stupid:  Will the Centrifugal Force of the Public Market Nix the TikTok Divestment?

It’s a damn good thing we never let another MTV build a business on our backs.

In case you were wondering, the founder of TikTok’s parent corporation Bytedance is now reportedly China’s richest man according to the Hurun Rich List at a net worth of US$49.3 billion.  Is that because of “profits”?  Ah, no.  It’s due to his share of the Bytedance stock valuation. This is why any royalty deal with Big Tech that is based solely on a percentage of revenue rather than a dollar rate based on total value is severely lacking.

Revenue is a factor in determining stock valuation, of course.  ByteDance’s first-half 2024 revenue increased to $73 billion, making Bytedance’s revenues almost as big as Facebook’s but potentially growing faster. (Meta/Facebook’s first-half revenue increased about 25% to $75.5 billion.)

But where does TikTok’s revenue come from? ByteDance’s international revenue reached $17 billion in the first half of 2024, largely driven by TikTok. Non-China revenues for ByteDance rose by nearly 60% during this period. ByteDance continues to leverage TikTok to expand into international e-commerce, sustaining its global popularity. So the company is throwing off a pile of cash–yet they are unable to come up with a functioning royalty system.

Then what would a Bytedance IPO price at?  We kind of have to guess because Bytedance is not publicly traded and doesn’t report its financials to the public (and even if they did, China-based companies got special beneficial treatment during the Obama Administration so PRC companies haven’t reported on the same basis as everyone else until recently).  Continuing the Meta/Facebook comparison, Meta has a market capitalization of $1.4 trillion give or take, while ByteDance’s valuation on the secondary market for private stocks is about $250 billion, according to a CapLight subscriber. 

That gap is not lost on our friends at mega-venture capital firm Sequoia China and other influential investors in Bytedance such as Susquehanna, SoftBank, and General Atlantic.  And, of course, the Chinese Communist Party investing through its Cyberspace Administration of China censorship operation. The CCP’s CAC owns elite “golden shares” in Bytedance that allow it to name directors to the board.  These cats did not put up cold hard cash for a distress asset sale of Bytedance’s principal operating unit aka TikTok.

Assuming a constant growth rate, Bytedance is trading at a paltry 1.7 times its 2024 revenues compared to Meta which is trading at about 8.7x its revenues.  There are some differences between Meta and Bytedance, like operating profits:  Meta has a 38% operating margin compared to Bytedance at about 25%.  But we all know why Bytedance’s valuation is depressed—the TikTok divestment which seems to be on track to happen on or about January 19.
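The rough arithmetic behind that valuation gap can be sketched as follows. This is a back-of-the-envelope illustration using the figures quoted above, with annualized revenue assumed to be roughly twice the reported first-half 2024 number; none of this is reported financial data:

```python
# Back-of-the-envelope revenue-multiple comparison, in US$ billions.
# Assumption: annualized revenue ~ 2x the reported first-half 2024 figure.

def implied_market_cap(annual_revenue: float, revenue_multiple: float) -> float:
    """Market cap implied by valuing revenue at a given multiple."""
    return annual_revenue * revenue_multiple

bytedance_revenue = 2 * 73       # ~$146B annualized from first-half $73B
bytedance_valuation = 250        # secondary-market valuation cited above
meta_multiple = 8.7              # Meta's approximate price-to-revenue multiple

# ByteDance's actual multiple on the secondary market (~1.7x)
bytedance_multiple = bytedance_valuation / bytedance_revenue

# What ByteDance would be worth if valued at Meta's multiple (~$1,270B)
at_meta_multiple = implied_market_cap(bytedance_revenue, meta_multiple)

# The "missing" market cap: roughly $1 trillion
gap = at_meta_multiple - bytedance_valuation

print(f"{bytedance_multiple:.1f}x vs {meta_multiple}x; gap ≈ ${gap:,.0f}B")
```

At these assumed figures the gap works out to a bit over $1,000 billion, which is where an estimate of roughly $1 trillion in foregone market cap comes from.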

The Protecting Americans from Foreign Adversary Controlled Applications Act, aka the TikTok Divestment Act, requires that Bytedance must sell TikTok.  There’s a pretty good argument that the divestment is enforceable for a variety of reasons.  The law applies not only to TikTok, but also to any entity controlled by China, Iran, North Korea or Russia that distributes an application in the United States.  That’s a pretty significant barrier to IPO riches, or at least one major risk factor that could sour underwriters if not investors.  How to get around it?

As we saw with the Music Modernization Act that solved Spotify’s IPO issues due to the company’s massive copyright infringement business model, if you spread enough cash around Capitol Hill, it’s astonishing what can happen with the vast number of people on the take.  Whatever it costs, lobbyists and lawmakers are cheap dates compared to IPO riches.  Even so, it doesn’t look like the US government is quite ready to allow one of the biggest foreign agent data harvesting and user profiling operations in history to get its snout in the public markets trough.  At least not yet.

But an argument could be made that Bytedance is missing about $1 trillion in market cap.  Greed and resentment are a powerful combination.  To add insult to injury, even Triller managed to get to the public markets, so things could start to get weird while Mr. Tok watches his paper billions evaporate on January 19.

[This post first appeared on MusicTech.Solutions]

Updates for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We are announcing the time schedule and speakers for the 4th annual Artist Rights Symposium on November 20. The symposium is supported by the Artist Rights Institute and was founded by Dr. David C. Lowery, Lecturer at the University of Georgia Terry College of Business.

This year the symposium is hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  We are also pleased to have a Kogod student presentation on speculative ticketing as part of the speaker lineup.

Admission is free, but please reserve a spot on Eventbrite; seating is limited!

The symposium starts at 8:30 am and ends with a reception at 4:30 pm. The symposium will be recorded as an audiovisual presentation for distribution at a later date, but will not be live-streamed. If you attend, understand that you may be filmed in any audience shots, questions from the floor or still images. The symposium social media hashtag is #ArtistRightsKogod.

Schedule

8:30 — Doors open, networking coffee.

9:00-9:10 — Welcome remarks by David Marchick, Dean, Kogod School of Business

9:10-9:15 — Welcome remarks by Christian L. Castle, Esq., Director, Artist Rights Institute

9:15-10:15 — THE TROUBLE WITH TICKETS:  The Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

10:15-10:30: NIVA Speculative Ticketing Project Presentation by Kogod students

10:30-10:45: Coffee break

10:45-11:00: OVERVIEW OF CURRENT ISSUES IN ARTIFICIAL INTELLIGENCE LITIGATION: Kevin Madigan, Vice President, Legal Policy and Copyright Counsel, Copyright Alliance

11:00-12 pm: SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

12:00-12:30: Lunch break

12:30-1:30: Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.

1:30-1:45: Coffee break

1:45-2:45: CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It

Richard James Burgess, MBE, President & CEO, American Association of Independent Music, New York
Helienne Lindvall, President, European Composer & Songwriter Alliance, London, England
Abby North, President, North Music Group, Los Angeles
Anjula Singh, Chief Financial Officer and Chief Operating Officer, SoundExchange, Washington DC

Moderator:  Christian L. Castle, Esq, Director, Artist Rights Institute, Austin, Texas

2:45-3:15: Reconvene across the street at the International Service Founders Room for concluding speakers and reception

3:15-3:30: OVERVIEW OF INTERNATIONAL ARTIFICIAL INTELLIGENCE LEGISLATION: George York, Senior Vice President International Policy from RIAA.

3:30-4:30: NAME, IMAGE AND LIKENESS RIGHTS IN THE AGE OF AI:  Current initiatives to protect creator rights and attribution

Jeffrey Bennett, General Counsel, SAG-AFTRA, Washington, DC
Jen Jacobsen, Executive Director, Artist Rights Alliance, Washington DC
Jalyce E. Mangum, Attorney-Advisor, U.S. Copyright Office, Washington DC

Moderator
John Simson, Program Director Emeritus, Business & Entertainment, Kogod School of Business, American University

4:30-5:30: Concluding remarks by Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program and reception.

Now with added retroactive acrobatics: @DamianCollins calls on UK Prime Minister to stop Google’s “Text and Data Mining” Circus

By Chris Castle

Damian Collins (former chair of the UK Parliament’s Digital Culture Media and Sport Select Committee) warns of Google’s latest artificial intelligence shenanigans in a must-read opinion piece in the Daily Mail. Mr. Collins highlights Google’s attempt to lobby its way into what is essentially a retroactive safe harbor to protect Google and its confederates in the AI land grab. (Safe harbors aka pirate utopias.)

While Mr. Collins writes about Google’s efforts to rewrite the laws of the UK to free ride in his home country which is egregious bullying, the episode he documents is instructive for all of us. If Google & Co. will do it to the Mother of Parliaments, it’s only a matter of time until Google & Co. do the same everywhere or know the reason why. Their goal is to hoover up all the world’s culture that the AI platforms have not scraped already and–crucially–to get away with it. And as Austin songwriter Guy Forsyth says, “…nothing says freedom like getting away with it.”

The timeline of AI’s appropriation of all the world’s culture is critical to appreciating just how depraved Big Tech’s unbridled greed really is. The important thing to remember is that AI platforms like Google have been scraping the Internet to train their AI for some time now, possibly many years. This apparently includes social media platforms they control. My theory is that Google Books was an early effort at digitization for large language models to support products like corpus machine translation as a predecessor to Gemini (“your twin”) and other Google AI products. We should ask Ray Kurzweil.

There is increasing evidence that this is exactly what these people are up to.

The New York Times Uncovers the Crimes

According to an extensive long-form report in the New York Times by a team of very highly respected journalists, it turns out that Google has been planning this “Text and Data Mining” land grab for some time. At the very moment YouTube was issuing press releases about their Music AI Incubator and their “partners”–Google was stealing anything that was not nailed down that anyone had hosted on their massive platforms, including Google Docs, Google Maps, and…YouTube. The Times tells us:

Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators….Google said that its A.I. models “are trained on some YouTube content,” which was allowed under agreements with YouTube creators, and that the company did not use data from office apps outside of an experimental program. 

I find it hard to believe that YouTube was allowed to transcribe and scrape under all its content deals, or that it parsed through all videos to find the unprotected ones that fall victim to Google’s interpretation of the YouTube terms of use. So as we say in Texas, that sounds like bullshit for starters.

How does this relate to the Text and Data Mining exception that Mr. Collins warns of? Note that the NYT tells us “Google transcribed YouTube videos to harvest text.” That’s a clue.

As Mr. Collins tells us: 

Google [recently] published a policy paper entitled: Unlocking The UK’s AI Potential.

What’s not to like, you might ask? Artificial intelligence has the potential to revolutionise our economy and we don’t want to be left behind as the rest of the world embraces its benefits.

But buried in Google’s report is a call for a ‘text and data mining’ (TDM) exception to copyright. 

This TDM exception would allow Google to scrape the entire history of human creativity from the internet without permission and without payment.

And, of course, Mr. Collins is exactly correct, it’s safe to assume that’s exactly what Google have in mind. 

The Conspiracy of Dunces and the YouTube Fraud

In fairness, it wasn’t just Google ripping us off, but Google didn’t do anything to stop it as far as I can tell. One thing to remember is that YouTube was, and I think still is, not very crawlable by outsiders. It is almost certainly the case that Google would know who was crawling youtube.com, such as Bingbot, DuckDuckBot, Yandex Bot, or Yahoo Slurp, if for no other reason than that those spiders were not googlebot. With that understanding, the Times also tells us:

OpenAI researchers created a speech recognition tool called Whisper. It could transcribe the audio from YouTube videos, yielding new conversational text that would make an A.I. system smarter.

Some OpenAI employees discussed how such a move might go against YouTube’s rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are “independent” of the video platform. [Whatever “independent” means.]

Ultimately, an OpenAI team transcribed more than one million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI’s president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4, which was widely considered one of the world’s most powerful A.I. models and was the basis of the latest version of the ChatGPT chatbot….

OpenAI eventually made Whisper, the speech recognition tool, to transcribe YouTube videos and podcasts, six people said. But YouTube prohibits people from not only using its videos for “independent” applications, but also accessing its videos by “any automated means (such as robots, botnets or scrapers).” [And yet it happened…]

OpenAI employees knew they were wading into a legal gray area, the people said, but believed that training A.I. with the videos was fair use. [Or could they have paid for the privilege?]

And strangely enough, many if not all of the AI platforms sued by creators raise “fair use” as a defense, which is strangely reminiscent of the kind of crap we have been hearing from these people since 1999.

Now why might Google have permitted OpenAI to crawl YouTube and transcribe videos (and who knows what else)? Probably because Google was doing the same thing. In fact, the Times tells us:

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn’t stop OpenAI because Google had also used transcripts of YouTube videos to train its A.I. models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

So Google and its confederate OpenAI may well have conspired to commit massive copyright infringement against the owner of a valid copyright, did so willfully, and for purposes of commercial advantage and private financial gain. (Attempts to infringe are prohibited to the same extent as the completed act.) The acts of these confederates vastly exceed the limits for criminal prosecution for both infringement and conspiracy.

But to Mr. Collins’ concern, the big AI platforms transcribed likely billions of hours of YouTube videos to manipulate text and data–you know, TDM.

The New Retroactive Safe Harbor: The Flying Googles Bring Their TDM Circus Act to the Big Tent With Retroactive Acrobatics

But also realize the effect of the new TDM exception that Google and their Big Tech confederates are trying to slip past the UK government (and our own for that matter). A lot of the discussion about AI rulemaking acts as if new rules would be for future AI data scraping. Au contraire mes amis–on the contrary, the bad acts have already happened and they happened on an unimaginable scale.

So what Google is actually trying to do is get the UK to pass a retroactive safe harbor that would deprive citizens of valuable property rights–and also pass a prospective safe harbor so they can keep doing the bad acts with impunity.

Fortunately for UK citizens, the UK Parliament has not passed idiotic retroactive safe harbor legislation like the U.S. Congress has. I am, of course, thinking of the vaunted Music Modernization Act (MMA) that drooled its way to a retroactive safe harbor for copyright infringement, a shining example of the triumph of corruption that has yet to be properly challenged in the US on Constitutional grounds. 

There’s nothing like the MMA absurdity in the UK, at least not yet. However, that retroactive safe harbor was not lost on Google, who benefited directly from it. They loved it. They hung it over the mantle next to their other Big Game trophy, the DMCA. And now they’d like to do it again for the triptych of legislative taxidermy.

Because make no mistake–a retroactive safe harbor would be exactly the effect of Google’s TDM exception. Not to mention it would also be a form of retroactive eminent domain, or what the UK analogously might call the compulsory purchase of property under the Compulsory Purchase Act. Well…“purchase” might be too strong a word, more like “transfer” because these people don’t intend to pay for a thing.

The effect of passing Google’s TDM exception would be to take property rights and other personal rights from UK citizens without anything like the level of process or compensation required under compulsory purchase law–even when the government requires the sale of private property to another private entity (such as a railroad right of way or a utility easement).

The government would be on very shaky ground with a TDM exception imposed for the benefit of private companies, indeed foreign private companies that can well afford to pay. There would be no government oversight on a case-by-case basis, no proper valuation, and the taking would be for entirely commercial purposes with no public benefit. In the US, it would likely violate the Takings Clause of our Constitution, among other things.

It’s Not Just the Artists

Mr. Collins also makes a very important point that might get lost among the stars–it’s not just the stars that AI is ripping off–it is everyone. As the New York Times story points out (and it seems there are more whistleblowers on this point every day), the AI platforms are hoovering up EVERYTHING that is on the Internet, especially on their affiliated platforms. That includes baby videos, influencers, everything.

This is why it is cultural appropriation on a grand scale, indeed a scale of depravity that we haven’t seen since the Nuremberg Trials. A TDM exception would harm all Britons in one massive offshoring of British culture.

[This post first appeared on MusicTech.Solutions]

CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It: Speaker Update for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We’re pleased to announce additional speakers for the 4th annual Artist Rights Symposium on November 20, this year hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  The symposium is also supported by the Artist Rights Institute and was founded by Dr. David Lowery, Lecturer at the University of Georgia Terry College of Business.

The Symposium has four panels and a lunchtime keynote. Panels will begin at 8:30 am and end by 5 pm, with lunch and refreshments. More details to follow. Contact the Artist Rights Institute for any questions.

Admission is free, but please reserve a spot with Eventbrite–seating is limited! (Eventbrite works best with Firefox)

Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.  Graham will speak around lunchtime.

We have confirmed speakers for another topic! 

CHICKEN AND EGG SANDWICH:  Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It

Richard James Burgess, MBE, President & CEO, American Association of Independent Music, New York
Helienne Lindvall, President, European Composer & Songwriter Alliance, London, England
Abby North, President, North Music Group, Los Angeles
Anjula Singh, Chief Financial Officer and Chief Operating Officer, SoundExchange, Washington DC

Moderator:  Christian L. Castle, Esq, Director, Artist Rights Institute, Austin, Texas

Previously confirmed panelists are:

SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

THE TROUBLE WITH TICKETS:  The Economics and Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

SHOW ME THE CREATOR – Transparency Requirements for AI Technology: Speaker Update for Nov. 20 @ArtistRights Symposium at @AmericanU @KogodBiz in Washington DC

We’re pleased to announce more speakers for the 4th annual Artist Rights Symposium on November 20, this year hosted in Washington, DC, by American University’s Kogod School of Business at American’s Constitution Hall, 4400 Massachusetts Avenue, NW, Washington, DC 20016.  The symposium is also supported by the Artist Rights Institute and was founded by Dr. David Lowery, Lecturer at the University of Georgia Terry College of Business.

The four panels will begin at 8:30 am and end by 5 pm, with lunch and refreshments. More details to follow. Contact the Artist Rights Institute for any questions.

Admission is free, but please reserve a spot with Eventbrite–seating is limited! (Eventbrite works best with Firefox)

Keynote: Graham Davies, President and CEO of the Digital Media Association, Washington DC.  Graham will speak around lunchtime.

We have confirmed speakers for another topic! 

SHOW ME THE CREATOR – Transparency Requirements for AI Technology:

Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington DC 

Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

Previously announced:

THE TROUBLE WITH TICKETS:  The Economics and Challenges of Ticket Resellers and Legislative Solutions:

Kevin Erickson, Director, Future of Music Coalition, Washington DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia

Moderator:  Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

Are Your Hard Drives Turning to Bricks? Steve Harvey Takes Us Inside Iron Mountain

By Chris Castle

One day shortly after the sale to PolyGram, I got a call from Cheryl Engels with a problem she needed help with.  Cheryl at the time was A&M’s post-production director in mastering.  Among other things, Cheryl supervised our audio assets storage room (which was mostly tape assets at the time) and also supervised mastering of new releases for other labels and artists such as U2.

Cheryl had received a call from some putz in PolyGram Special Markets demanding our best quality masters be shipped to some place in New Jersey to be made into yet another stupid compilation record to be sold as God knows what kind of tchotchke.  In other words, our recordings were going to be used for the sole purpose of commoditizing music and creating yet another place outside of A&M where our artists’ recordings could be purchased.

Cheryl had told this putz that he didn’t need our precious masters and that she’d be happy to run him off a DAT for his one track.  So why was she calling me?  Because of what he said next: “We’re the parent and you’re the child and the child doesn’t tell the parent what to do.”

I said just leave this with me.  I called the guy and said, “Hi, my name is Chris Castle.  You don’t know me but I’m calling to explain to you why you’re not getting what you want, Mr. SVP of Bullshit.  So first thing, I’ve been to your tape storage facility—are you going to store our masters by the broken water pipe or the space heaters?  Near the open window or in the car park?  And when would we get our tape back?”

After some back and forth, he accepted that this time it was different at least with A&M.  He apparently thought that Cheryl had been difficult with him.  I explained to him that he just needed to learn how things were done.  I explained to him that in Cheryl’s area the way things were done was the way Cheryl wanted them to be done—because she was correct.  I suggested to him that it worked for Herb Alpert, Bono, Sting and many other top artists and mastering engineers so maybe it could work for him, too.  And more importantly for him at that moment, it worked for me and I was backing Cheryl 100% with no daylight.

In signing off, I said, “And by the way, if we have a ‘parent’ at A&M, his name is Jerry Moss and I’d be happy to transfer you to him right now if you have any questions.”

And as they say, that was that.  Another thing about Cheryl was that she kept track of the tape library which means that she knew where all of the original rolls and rolls and rolls of audio tape were that included outtakes, rough mixes, etc., etc., that are created as part of making a record, especially a high profile record.

Even with Cheryl Engels’ TLC, the media eventually wear out, whether it’s sticky shed syndrome for magnetic tape or other degradation phenomena for hard drives.

Steve Harvey, writing in Mix Magazine, has a very serious wakeup call coming our way:

[F]or the past 25 or more years, the music industry has been focused on its magnetic tape archives, and on the remediation, digitization and migration of assets to more accessible, reliable storage. Hard drives also became a focus of the industry during that period, ever since the emergence of the first DAWs in the late 1980s. But unlike tape, surely, all you need to do, decades later, is connect a drive and open the files. Well, not necessarily. And Iron Mountain would like to alert the music industry at large to the fact that, even though you may have followed recommended best practices at the time, those archived drives may now be no more easily playable than a 40-year-old reel of Ampex 456 tape.

This is why post production directors like Cheryl Engels were so insistent about quality control for the last 30 years.  The problem came up when we were working on a lot of 5.1 mixes in the 2002 era, and it’s coming up again with immersive audio, as Steve Harvey points out.  It will keep coming up as new mixing techniques require going back to the original multitracks.  And 5.1 emulation is not the same as true 5.1.

Read Steve Harvey’s article–it’s very important to prepare for hard drive hell. Thankfully, Iron Mountain has some techniques up its sleeve to help, but trust me, in a hard drive reality we are way past baking tapes.  For unlike a magnetic recording that at least might allow one pass over the tape heads to transfer it to a new storage medium, hard drives may end up just being bricks.  Assuming the tape wasn’t stored under a dripping water pipe in a basement, parents and children being what they are.

Read the post on Mix Magazine

[This post first appeared on Artist Rights Watch]

Astroturf Spotting: “The People’s Bid for TikTok”

We’ve had a pretty good track record over the years of spotting astroturf operations, from the European Copyright Directive to ad-supported piracy. Here’s what we believe is the latest–“the People’s Bid for TikTok”–pointed out to us by one of our favorite artists.

The first indication that something is fake–we call these “clues”–is in the premise of the campaign. Remember that the key asset of TikTok is the company’s algorithm. That algorithm is apparently responsible for curating the content users see on their feeds. This algorithm is highly sophisticated and is considered a key factor in TikTok’s success. The U.S. government has argued that the algorithm could be manipulated by the government of the People’s Republic of China to influence what messaging is promoted or suppressed.

In April, President Joe Biden signed a law requiring TikTok’s PRC-based parent company ByteDance to sell TikTok or face a ban in the U.S. by mid-January 2025. This law was the culmination of years of Congressional scrutiny and debate over the app’s potential risks.

At the core of President Biden’s concerns about TikTok is the algorithm. Not surprisingly, the People’s Republic of China has made it very clear that the algorithm is not for sale. This position was confirmed when TikTok itself admitted that the Chinese government would not allow the sale of its algorithm. China’s Commerce Minister Wang Wentao indicated that officials would seek to block any transfer of the app’s technology, stating that the country would “firmly oppose” a forced sale. That likely means that even if ByteDance were to sell TikTok–to “the people” or otherwise–the algorithm would remain under Chinese control, which undermines the U.S. government’s objective.

So–who is behind the “People’s Bid,” given that the “People’s Bid” seems to be making a proposal that will only be acceptable to the People’s Republic of China? We say that because of this FAQ on the People’s Bid site disclaiming any interest in acquiring the algorithm that the PRC has essentially claimed as a state secret for some reason:

The People’s Bid has no interest in acquiring TikTok’s algorithm [which is nice since the algo is not for sale]. This is not an attempt to rinse and repeat the formula that has allowed Big Tech companies to reap enormous profits by scraping and exploiting user data. The People’s Bid will ensure that TikTok users control their data and experience by using the app on a rebuilt digital infrastructure that gives more power to users.

Oh no, The People’s Bid has no interest in that tacky algorithm which wasn’t for sale anyway. Good of them. So who is “them”? It appears, although it isn’t quite clear, that the entity doing the acquiring isn’t “The People’s Bid” at all, it’s something called “Project Liberty.”

The FAQ tells us a little bit about Project Liberty:

Project Liberty builds solutions that help people take back control of their digital lives. This means working to ensure that everyone has a voice, choice, and stake in the future of the Internet. Project Liberty has invested over half a billion dollars to develop infrastructure and alliances that will return power to the people.

They kind of just let that “half a billion dollars” drop in the dark of the FAQ. What that tells us is that somebody has a shit-ton of money who is interested in stopping the TikTok ban. So who is involved with this “Project Liberty”? The usual suspects, starting with Lawrence Lessig, Jonathan Zittrain and a slew of cronies from Berkman, Stanford, MIT, etc. Color us shocked, just shocked.

But these people never spend their own money and probably aren’t working for free, so who’s got the dough? Someone who doesn’t seem to care about acquiring the TikTok algorithm from the Chinese Communist Party?

Forbes tells us that this transaction is just a little bit different than what “The People’s Bid” or even the “Liberty Project” would have you believe if all you knew about it was from information on their website. The money seems to be coming in part, maybe in very large part, from one Frank McCourt whom you may remember as a former owner of the Los Angeles Dollars…sorry, Dodgers. In fairness, McCourt isn’t exactly making his plans a secret. He had his Project Liberty issue a press release as “The People’s Bid for TikTok”, which is actually Frank McCourt’s bid for TikTok as far as we can tell and as reported by Forbes:

Billionaire investor and entrepreneur Frank McCourt is organizing a bid to buy TikTok through Project Liberty, an organization to which he’s pledged $500 million that aims to fight for a safe, healthier internet where user data is owned by users themselves rather than by tech giants like TikTok parent ByteDance, Meta and Alphabet.

That’s more like it. We knew there was a sugar daddy in there somewhere. That’s much more in the Lessig style. Big favor, little bad mouth.

Of course, users owning their data is not the entire story by a long shot. Authors owned their books and Google still used the vast Google Books project to train AI.

Forbes adds this insight about Mr. McCourt:

Best known as the former owner of the Los Angeles Dodgers, McCourt spent most of the past decade focused on investing the approximately $850 million in proceeds from the team’s 2012 sale via his company McCourt Global. 

He sprinkled money into sports, real estate, technology, media and an investment firm focused on private credit. In January 2023, McCourt stepped down as CEO of McCourt Global to focus on Project Liberty but remains executive chairman and 100% owner. 

McCourt’s assets are worth an estimated $1.4 billion, landing him on Forbes’ billionaires list for the first time this year—though his wealth is a far cry from the estimated $220 billion valuation of ByteDance.

Which brings us to ByteDance. Is there another Silicon Valley money funnel with an interest in ByteDance? One is Sequoia Capital, which was also an original investor in Google, which was an original investor in Professor Lessig and his various enterprises including Creative Commons. Sequoia’s ByteDance investment came in the form of one Neil Shen, who runs Sequoia’s China operation, recently spun off from the mothership. If you don’t recognize Neil Shen, he’s a former member (until 2023) of the Chinese People’s Political Consultative Conference, an arm of the Chinese Communist Party and its United Front Work operation. (According to a Congressional investigative report, the United Front operation is a strategic effort to influence and control various groups and individuals both within China and internationally. This strategy involves a mix of engagement, influence activities, and intelligence operations aimed at shaping political environments to favor the CCP’s interests. United front work includes the “America Changle Association, which housed a secret PRC police station in New York City that was raided by the FBI in October 2022.”)

In plainer terms, it’s about the money. According to CNN:

McCourt said he is working with the investment firm Guggenheim Securities and the law firm Kirkland & Ellis to help assemble the bid, adding that the push is backed by Sir Tim Berners-Lee, the inventor of the World Wide Web [OMG, it must be legit!].

McCourt joins a host of other would-be suitors angling to pick up a platform used by 170 million Americans. Former Treasury Secretary Steven Mnuchin announced in March he’s assembling a bid, as well as Kevin O’Leary, the Canadian chairman of the private venture capital firm O’Leary Ventures.

TikTok, meanwhile, has indicated that it’s not for sale and the company has instead begun to mount a fight against the new law. The company sued to block the law earlier this month, saying that spinning off from its Chinese parent company is not feasible and that the legislation would lead to a ban of the app in the United States starting in January of next year.

But it’s the people’s bid, right? Don’t be evil, y’all.

Let’s boil it down: TikTok would have been, up until President Biden signed the sell-or-ban bill into law, a HUGE IPO. It’s also a big chunk of ByteDance’s valuation, which means it’s a big chunk of Neil Shen’s carried interest in all likelihood. TikTok is no longer a huge IPO; in fact, it probably won’t be an IPO at all in its current configuration, particularly since the CCP has told the world that TikTok doesn’t own its core asset, the very algorithm that has so many people addicted (an addiction which is what a buyer is really buying).

So the astroturf is not really the Liberty Project or the People’s Bid. Whatever “the People’s Bid” really is, it’s much more likely to be as the financial press has described it–Frank McCourt’s bid. But only for the most high-minded and pure-souled reasons.

It’s about the money. Stay tuned, we’ll be keeping an eye on this one.