Say No to Suno

Late last year, thieves disguised as construction workers broke into the Louvre during broad daylight, grabbed more than $100 million worth of crown jewels, and roared off on their motorbikes into the busy streets of Paris. While some of those thieves were later arrested, the jewelry they stole has yet to be recovered, and many fear those historic works of artistry have already been recut, reset, and resold.

Closer to home, but no less nefarious, is the brazen rip-off of artists enabled by irresponsible AI, whose profiteers are recutting, remixing, and reselling original works of artistry as something new.  The hijacking of the world’s entire treasure-trove of music floods platforms with AI slop and dilutes the royalty pools of legitimate artists from whose music this slop is derived. 

Meanwhile, those who are promoting this new business model are operating in broad daylight, too – minus the yellow safety vests.  Chief among them is AI music company Suno, the brazen “smash and grab” platform whose “Make it Music” ad campaign suggests that the most personal and meaningful forms of music can now be fabricated by its unauthorized AI machinery trained on human artists’ work. 

How significant is this activity?  Publicly revealed data says Suno is used to generate 7 million tracks a day, a massive quantity that suggests a dominant market share of AI tracks.  According to recent reports, Deezer “deems 85% of streams of fully AI-generated tracks [on its service] to be fraudulent,” and that such tracks include outputs from major generative models.  As JP Morgan’s analysts said, Deezer’s data “should be indicative of the broader market.”  Suno has yet to demonstrate persuasively that its platform does not, in practice, serve as a scalable input into streaming-fraud schemes — raising a serious concern that Suno has, in effect, become a fraud-fodder factory on an industrial scale.

In a February 2 LinkedIn post, Paul Sinclair, Suno’s Chief Music Officer, claims that his company’s platform is about “empowerment” that enables “billions of fans to create and play with music.”  He argues that closed systems are “walled gardens” that deny people access to the full joy of music.

Ironically, Sinclair’s choice of analogy undermines his own argument.  Ask yourself: just why are most gardens surrounded by fences or walls?  To keep out rabbits, deer, raccoons and wild pigs seeking a free lunch.  We cultivate, nurture and protect our gardens precisely because that makes them much more productive over the long run.

While Sinclair may be loath to admit it, AI is fundamentally different from past disruptive innovations in the music industry.  The phonograph, cassettes, CDs, MP3s, downloads, streaming – all these technologies were about the reproduction and distribution of creative work.  By contrast, irresponsible AI like Suno appropriates and plunders such creative work while undermining the commercial ecosystem for artists.

Think back to the days of Napster.  What brought the music industry back from the ruinous abyss of unfettered digital piracy?  It was the very “closed systems” that Sinclair derides as exclusionary.  At least streaming platforms maintain access controls and content management systems that enable creator compensation, even if the economic outcomes for many creators remain inadequate.  Should we be against Apple Music, Spotify, Deezer, YouTube Music, and Amazon Music?  What about Netflix, Disney+ and HBO, too, while we’re at it?

At its core, Sinclair’s argument is just a tired remix of the old trope that “information wants to be free.”  What that really means is: “We want your music for free.”

Artists need to understand Suno’s game.  They are not putting technology in the service of artists; they are putting artists in the service of their technology.  Every time artists’ creations are used by the platform, those artists have unwittingly contributed to endless derivatives of their own work – not to mention AI slop – with limited or no remuneration back to the human creators.  Suno built its business on our backs, scraping the world’s cultural output without permission, then competing against the very works it exploited.

It’s also important to keep in mind that using Suno to generate audio output calls into question the copyrightability of whatever Suno creates.  Copyright authorities around the world, including the US Copyright Office, have been clear that generative AI outputs are largely ineligible for copyright protection – meaning the economic value of the Suno creation lies with Suno, not with the artist using it.  The only ones gaining empowerment from Suno are Suno themselves.

Many in our community are embracing responsible AI as a tool for creation, and as a means for fans to explore and interact with our artistry.  That’s wonderful.  But it’s not the same as creating an environment where AI-generated works sourced from our music are mass distributed to dilute our royalties or, worse yet, reward those actively seeking to commit fraud.  Artists need to know the difference – all AI platforms are not the same, and Suno, which is being sued for copyright infringement, is not a platform artists should trust.

Responsible AI-generated music must evolve within a framework that respects and remunerates artists, enhances human creativity rather than supplants it, and empowers fans to engage with the music they love.  At the same time, AI services must preclude mass distribution of slop and prevent fraudsters from destroying the very ecosystem that has been built to reward and sustain artists and audiences alike.

All of us, including billions of music fans, share an urgent, deep and abiding interest in protecting and rewarding human genius, even as AI continues to change our industry and the world in unimaginable ways.  So in 2026, even as the Louvre continues to revamp its own approach to security, we in the arts must rise to confront those who would “smash-and-grab” our creativity for their own benefit.

Together, while embracing innovation, we must work to establish more effective safeguards – both legal and technological – that better promote and protect all creative artists, our intellectual property, and the spark of human genius.

Say no to Suno. Say yes to the beauty and bounty of the gardens that feed us all.

Signed: 

Ron Gubitz, Executive Director, Music Artist Coalition

Helienne Lindvall, Songwriter and President, European Composer and Songwriter Alliance

David C. Lowery, Artist and Editor, The Trichordist

Tift Merritt, Artist, Practitioner in Residence, Duke University, and Artist Rights Alliance Board Member

Blake Morgan, Artist, Producer, and President, ECR Music Group

Abby North, President, North Music Group

Chris Castle, Artist Rights Institute

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI. AI is a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait… oh, no, sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, that means Silicon Valley shills will be all over it to try to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill will do a lot of good things, including protecting copyright. But the first substantive section of her summary is a game changer. She would establish an obligation on AI platforms to be responsible for known or predictable harm that can befall users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect to happen when someone does the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test side effects. Tobacco companies… well, you know the rest. The law doesn’t demand perfection – but it does demand reasonable care and imposes a “duty of care” on companies that put dangerous products into the marketplace.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products with design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered. At tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, and the rest. On a certain level, this is very likely not guesswork or mere predictability. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment – it means you can anticipate the type of harm that flows to the users you target from how the system is built.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research showing the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world – not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct” – that drivers chose the Pinto to make a statement, and therefore safety regulation would violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with known risks – risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions – how content is ranked, how users are nudged, how systems optimize for engagement – that’s not speech. That’s product design. You can measure it, test it, and audit it (which they do), and you can make it safer (which they don’t).

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.