What Don Draper Knew That AI Forgot: Authorship, Ownership, and Advertising

David is pointing to a quiet but serious problem hiding behind the rush to use generative AI in advertising, film, and television: copyright law protects authorship, not outputs. AI muddies authorship, and in some cases erases it altogether.

Under current U.S. Copyright Office guidance, works generated primarily by AI are often not registrable because they lack a human author exercising creative control. That means a brand that relies on AI to generate a commercial may not actually own exclusive rights in the finished work. If someone copies, remixes, or repurposes that ad, even in a way that damages the brand, the company may have little or no legal recourse under copyright law.

The Copyright Office guidance says:

In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans. The Office’s registration policies and regulations reflect statutory and judicial guidance on this issue…. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.

The risk David identifies is not theoretical. Copyright is the backbone of brand control in media. It’s what allows companies to stop misuse, dilution, parody-turned-weapon, or hostile appropriation. In the U.S., a copyright registration is generally required before the owner can go to court to enforce those rights. Remove that protection, and brands are left relying on weaker tools like trademark or unfair competition law, which are narrower, slower, and often ill-suited to digital remix culture.

David’s warning extends beyond ads. Film and TV studios experimenting with AI-generated scripts, scenes, music, or visuals may be undermining their own ability to control, license, or defend those works. In trying to save money upfront, they may be giving up the legal leverage that protects their brand, reputation, and long-term value.

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI, a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait…Oh, no sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, Silicon Valley shills will be all over it trying to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill would do a lot of good things, including protecting copyright. But the first substantive section of her summary is a game changer. She would impose an obligation on AI platforms to take responsibility for known or predictable harms to users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect to happen if you did the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test for side effects. Tobacco companies….well, you know the rest. The law doesn’t demand perfection—but it does demand reasonable care, imposing a “duty of care” on companies that put dangerous products into the marketplace.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products shaped by design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered, at tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, and the rest. At that level, this is very likely not guesswork or mere foreseeability. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment—it means you can anticipate the type of harm that flows from how the system is built to the users it targets.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research showing that the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world—not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct”—that drivers chose the Pinto to make a statement, and therefore safety regulation would violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions—how content is ranked, how users are nudged, how systems optimize for engagement—that’s not speech. That’s product design. You can measure it, test it, and audit it (which they do), and make it safer (which they don’t).

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.

It’s Back: The National Defense Authorization Act Is No Place for a Backroom AI Moratorium

David Sacks Is Bringing Back the AI Moratorium

WHAT’S AT STAKE

The moratorium would block states from enforcing their own laws on AI accountability, deepfakes, consumer protection, energy policy, discrimination, and data rights. Tennessee’s ELVIS Act is a prime example. For ten years — or five years in the “softened” version — the federal government would force states to stand down while some of the richest and most powerful monopolies in commercial history continue deploying models trained on unlicensed works, scraped data, personal information, and everything in between. Whether it is ten years or five, either may as well be an eternity in Tech World. Particularly since they don’t plan on following the law anyway with their “move fast and skip things” mentality.

Ted Turns Texas Glowing

99-1/2 just won’t do—Remember the AI moratorium that was defeated 99-1 in the Senate during the heady days of the One Big Beautiful Bill Act? We said it would come back in the must-pass National Defense Authorization Act, and sure enough that’s exactly where it is, courtesy of Senator and 2028 presidential hopeful Ted Cruz (no doubt fundraising off the Moratorium for his “Make Texas California Again” campaign) and other Big Tech sycophants, according to a number of sources including Politico and Tech Policy Press:

It…remains to be seen when exactly the moratorium issue may be taken up, though a final decision could still be a few weeks away.

Congressional leaders may either look to include the moratorium language in their initial NDAA agreement, set to be struck soon between the two chambers, or take it up as a separate amendment when it hits the floor in the House and Senate next month.

Either way, they likely will need to craft a version narrow enough to overcome the significant opposition to its initial iterations. While House lawmakers are typically able to advance measures with a simple majority or party-line vote, in the Senate, most bills require 60 votes to pass, meaning lawmakers must secure bipartisan support.

The pushback from Democrats is already underway. Sen. Brian Schatz (D-HI), an influential figure in tech policy debates and a member of the Senate Commerce Committee, called the provision “a poison pill” in a social media post late Monday, adding, “we will block it.”

Still, the effort has the support of several top congressional Republicans, who have repeatedly expressed their desire to try again to tuck the bill into the next available legislative package.

In Washington, must-pass bills invite mischief. And right now, House leadership is flirting with the worst kind: slipping a sweeping federal moratorium on state AI laws into the National Defense Authorization Act (NDAA).

This idea was buried once already — the Senate voted 99–1 to strike it from Trump’s earlier “One Big Beautiful Bill.” But instead of accepting that outcome, Big Tech is trying to resurrect it quietly, through a bill that is supposed to fund national defense, not rewrite America’s entire AI legal structure.

The NDAA is the wrong vehicle, the wrong process, and the wrong moment to hand Big Tech blanket immunity from state oversight. As we discussed many times during the first go-round, the concept is probably unconstitutional for a host of reasons and will no doubt be challenged immediately.

AI Moratorium Lobbying Explainer for Your Electric Bill

Here are the key shilleries pushing the federal AI moratorium and their backers:

• INCOMPAS / AI Competition Center (AICC). Supporters/funders: Amazon, Google, Meta, Microsoft, telecom and cloud companies. Role: leads the push for a 10-year state-law preemption; argues the moratorium prevents a “patchwork” of laws. Notes: identified as the central industry driver.

• Consumer Technology Association (CTA). Supporters/funders: Big Tech, electronics and platform-economy firms. Role: lobbying for federal preemption; opposed aggressive state AI laws. Notes: high influence with Commerce and Appropriations staff.

• American Edge Project. Supporters/funders: Meta-backed advocacy organization. Role: frames preemption as necessary for U.S. competitiveness vs. China; backed the moratorium. Notes: used as an indirect political vehicle for Meta.

• Abundance Institute. Supporters/funders: tech investors, deregulatory donors. Role: argues the moratorium is necessary for innovation; publicly predicts its return. Notes: messaging aligns with Silicon Valley VCs.

• R Street Institute. Supporters/funders: market-oriented donors, tech-aligned funders. Role: originated the “learning period” moratorium concept in 2024 papers by Adam Thierer. Notes: not a lobby shop, but provides the intellectual framework.

• Corporate lobbyists (Amazon, Google, Microsoft, Meta, OpenAI, etc.). Supporters/funders: internal lobbying shops plus outside firms. Role: promote “uniform national standards” in Congressional meetings. Notes: operate through and alongside the trade groups.

PARASITES GROW IN THE DARK: WHY THE NDAA IS THE ABSOLUTE WRONG PLACE FOR THIS

The National Defense Authorization Act is one of the few bills that must pass every year. That makes it a magnet for unrelated policy riders — but it doesn’t make those riders legitimate.

An AI policy that touches free speech, energy policy and electricity rates, civil rights, state sovereignty, copyright, election integrity, and consumer safety deserves open hearings, transparent markups, expert testimony, and a real public debate. And that’s the last thing the Big Tech shills want.

THE TIMING COULD NOT BE MORE INSULTING

Big Tech is simultaneously lobbying for massive federal subsidies for compute, federal preemption of state AI rules, and multi-billion-dollar 765-kV transmission corridors to feed their exploding data-center footprints.

And who pays for those high-voltage lines? Ratepayers do. Utilities that qualify as political subdivisions in the language of the moratorium—such as municipal utilities, public power districts, and cooperative systems—set rates through their governing boards rather than state regulators. These boards must recover the full cost of service, including new infrastructure needed to meet rising demand. Under the moratorium’s carve-outs, these entities could be required to accept massive AI-driven load increases, even when those loads trigger expensive upgrades. Because cost-of-service rules forbid charging AI labs above their allocated share, the utility may have no choice but to spread those costs across all ratepayers. Residents, not the AI companies, would absorb the rate hikes.
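To make the mechanics concrete, here is a back-of-the-envelope sketch in Python. Every number is invented for illustration; the point is only to show how load-share cost allocation can push most of an upgrade triggered by a new data-center load onto everyone else’s bills.

```python
# Hypothetical illustration only: all figures are invented.
# Cost-of-service ratemaking typically allocates costs by share of load,
# not by who caused the upgrade.

upgrade_cost_per_year = 120_000_000   # annualized cost of new transmission ($/yr)
existing_load_mw = 10_000             # everyone already on the system
data_center_load_mw = 2_000           # new AI data-center load that triggered the build

total_load_mw = existing_load_mw + data_center_load_mw
data_center_share = data_center_load_mw / total_load_mw    # ~17%
everyone_else_share = existing_load_mw / total_load_mw     # ~83%

print(f"Data center pays:     ${upgrade_cost_per_year * data_center_share:,.0f}/yr")
print(f"Other ratepayers pay: ${upgrade_cost_per_year * everyone_else_share:,.0f}/yr")
# Roughly five-sixths of an upgrade built for the new load lands on existing customers.
```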

States must retain the power to protect their citizens. Congress has every right to legislate on AI. But it does not have the right to erase state authority in secret to save Big Tech from public accountability.

A CALL TO ACTION

Tell your Members of Congress:
No AI moratorium in the NDAA.
No backroom preemption.
No Big Tech giveaways in the defense budget.

@DavidSacks Isn’t a Neutral Observer—He’s an Architect of the AI Circular-Investment Maze

When White House AI Czar David Sacks tweets confidently that “there will be no federal bailout for AI” because “five major frontier model companies” will simply replace each other, he is not speaking as a neutral observer. He is speaking as a venture capitalist with overlapping financial ties to the very AI companies now engaged in the most circular investment structure Silicon Valley has engineered since the dot-com bubble—but on a scale measured not in millions or even billions, but in trillions.

Sacks is a PayPal alumnus turned political-tech kingmaker who has positioned himself at the intersection of public policy and private AI investment. His recent stint as a Special Government Employee raised eyebrows precisely because of this dual role. Yet he now frames the AI sector as a robust ecosystem that can absorb firm-level failure without systemic consequence.

The numbers say otherwise. The diagram circulating in the X thread exposes the real structure: mutually dependent investments tied together through cross-equity stakes, GPU pre-purchases, cloud-compute lock-ins, and stock-option-backed revenue games. Microsoft invests in OpenAI; OpenAI pays Microsoft for cloud resources; Microsoft books the revenue and inflates its stake in OpenAI. Nvidia invests in OpenAI; OpenAI buys tens of billions in Nvidia chips; Nvidia’s valuation inflates; and that valuation becomes the collateral propping up the entire sector. Oracle buys Nvidia chips; OpenAI signs a $300 billion cloud deal with Oracle; Oracle books the upside. Every player’s “growth” relies on every other player’s spending.
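As a stylized illustration of that loop, consider how a single round-trip of cash can show up as “growth” on both sides. The figures below are invented and do not represent any company’s actual financials; they only sketch the mechanism described above.

```python
# Stylized example with made-up figures: cash goes in a circle, yet both
# parties can report growth from the same dollars.

equity_investment = 10_000_000_000   # investor puts $10B into an AI lab
cloud_commitment = 10_000_000_000    # the lab commits $10B back to the investor's cloud

investor_booked_revenue = cloud_commitment      # reported as cloud growth
lab_reported_compute_spend = cloud_commitment   # reported as scaling up
net_new_cash = equity_investment - cloud_commitment

print(f"Investor's new cloud revenue: ${investor_booked_revenue:,.0f}")
print(f"Lab's compute spend:          ${lab_reported_compute_spend:,.0f}")
print(f"Net new cash in the loop:     ${net_new_cash:,.0f}")  # $0
```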

This is not competition. It is a closed liquidity loop. And it’s a repeat of the dot-bomb “carriage” deals that contributed to the stock market crash in 2000.

And underlying all of it is the real endgame: a frantic rush to secure taxpayer-funded backstops—through federal energy deals, subsidized data-center access, CHIPS-style grants, or Department of Energy land leases—to pay for the staggering infrastructure costs required to keep this circularity spinning. The singularity may be speculative, but the push for a public subsidy to sustain it is very real.

Call it what it is: an industry searching for a government-sized safety net while insisting it doesn’t need one.

In the meantime, the circular investing game serves another purpose: it manufactures sky-high paper valuations that can be recycled into legal war chests. Those inflated asset values are now being used to bankroll litigation and lobbying campaigns aimed at rewriting copyright, fair use, and publicity law so that AI firms can keep strip-mining culture without paying for it.

The same feedback loop that props up their stock prices is funding the effort to devalue the work of every writer, musician, actor, and visual artist on the planet—and to lock that extraction in as a permanent feature of the digital economy.

Artist Rights Are Innovation, Too! White House Opens AI Policy RFI and Artists Should Be Heard

The White House has opened a major Request for Information (RFI) on the future of artificial intelligence regulation — and anyone can submit a comment. That means you. This is not just another government exercise. It’s a real opportunity for creators, musicians, songwriters, and artists to make their voices heard in shaping the laws that will govern AI and its impact on culture for decades to come.

Too often, artists find out about these processes after the decisions are already made. This time, we don’t have to be left out. The comment period is open now, and you don’t need to be a lawyer or a lobbyist to participate — you just need to care about the future of your work and your rights. Remember—property rights are innovation, too; just ask Hernando de Soto (The Mystery of Capital) or any honest economist.

Here are four key issues in the RFI that matter deeply to artists — and why your voice is critical on each:


1. Transparency and Provenance: Artists Deserve to Know When Their Work Is Used

One of the most important questions in the RFI asks how AI companies should document and disclose the creative works used to train their models. Right now, most platforms hide behind trade secrets and refuse to reveal what they ingested. For artists, that means you might never know if your songs, photographs, or writing were taken without permission — even if they now power billion-dollar AI products.

This RFI is a chance to demand real provenance requirements: records of what was used, when, and how. Without this transparency, artists cannot protect their rights or seek compensation. A strong public record of support for provenance could shape future rules and force platforms into accountability.
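What would such a record actually look like? There is no standard schema today, so the sketch below is purely hypothetical: every field name is an invented assumption, meant only to illustrate the kind of per-work entry a provenance rule might require AI developers to keep and disclose.

```python
# Hypothetical provenance record for one ingested work.
# No such schema exists yet; the field names here are assumptions for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingProvenanceRecord:
    work_title: str        # the work that was ingested
    rights_holder: str     # who owns it, if known
    identifier: str        # ISRC/ISBN/ISWC or the URL it was taken from
    date_ingested: str     # when it entered the training corpus
    acquisition: str       # how it was obtained: license, purchase, scrape...
    license_status: str    # "licensed", "opt-out honored", "unlicensed", etc.
    used_in: list          # which datasets or model versions used it

record = TrainingProvenanceRecord(
    work_title="Example Song",
    rights_holder="Example Songwriter",
    identifier="https://example.com/example-song",
    date_ingested="2024-06-01",
    acquisition="web scrape",
    license_status="unlicensed",
    used_in=["model-v3-pretraining"],
)

print(json.dumps(asdict(record), indent=2))
```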


2. Derivative Works and AI Memory: Creativity Shouldn’t Be Stolen Twice

The RFI also raises a subtle but crucial issue: even if companies delete unauthorized copies of works from their training sets, the models still retain and exploit those works in their weights and “memory.” This internal use is itself a derivative work — and it should be treated as one under the law.

Artists should urge regulators to clarify that training outputs and model weights built from copyrighted material are not immune from copyright. This is essential to closing a dangerous loophole: without it, platforms can claim to “delete” your work while continuing to profit from its presence inside their AI systems.


3. Meaningful Opt-Out: Creators Must Control How Their Work Is Used

Another critical question is whether creators should have a clear, meaningful opt-out mechanism that prevents their work from being used in AI training or generation without permission. As the Artist Rights Institute and many others have demonstrated, “robots.txt” disclaimers buried in obscure places are not enough. Artists need a legally enforceable system that platforms must respect and that regulators can audit—not another worthless DMCA-style notice and notice and notice and maybe-takedown system.
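For reference, this is what the robots.txt approach amounts to: a plain-text file on an artist’s or publisher’s website asking crawlers to stay away. The crawler names below are examples of publicly documented AI user agents, but nothing in the file is legally enforceable; compliance is entirely voluntary.

```
# robots.txt — a voluntary request, not an enforceable opt-out
User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended   # Google's AI-training crawler token
Disallow: /

User-agent: CCBot             # Common Crawl
Disallow: /
```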

A robust opt-out system would restore agency to creators, giving them the ability to decide if, when, and how their work enters AI pipelines. It would also create pressure on companies to build legitimate licensing systems rather than relying on theft.


4. Anti-Piracy Rule: National Security Is Not a License to Steal

Finally, the RFI invites comment on how national priorities should shape AI development, and it’s vital that artists speak clearly here. There must be a bright-line rule that training AI models on pirated content is never excused by national security or “public interest” arguments. This is a real thing—pirate libraries are clearly front and center in the AI litigation, which has largely turned into a series of piracy cases because the AI lab “national champions” steal books and everything else.

If a private soldier stole a carton of milk from a chow hall, he’d likely lose his security clearance. Yet some AI companies have built entire models on stolen creative works and now argue that government contracts justify their conduct. That logic is backwards. A nation that excuses intellectual property theft in the name of “security” corrodes the rule of law and undermines the very innovation it claims to protect. And on top of it all, the truth of the case is that Zuckerberg is a thief, yet he is invited to dinner at the White House.

A clear anti-piracy rule would ensure that public-private partnerships in AI development follow the same legal and ethical standards we expect of every citizen — and that creators are not forced to subsidize government technology programs with uncompensated labor. Any “AI champion” who steals should lose or be denied a security clearance.


Your Voice Matters — Submit a Comment

The White House needs to hear directly from creators — not just from tech companies and trade associations. Comments from artists, songwriters, and creative professionals will help shape how regulators understand the stakes and set the boundaries.

You don’t need legal training to submit a comment. Speak from your own experience: how unauthorized use affects your work, why transparency matters, what a meaningful opt-out would look like, and why piracy can never be justified by national security.

👉 Submit your comment here before the October 27 deadline.

Senator Josh @HawleyMO Throws Down on Big Tech’s Copyright Theft

 I believe Americans should have the ability to defend their human data, and their rights to that data, against the largest copyright theft in the history of the world. 

Millions of Americans have spent the past two decades speaking and engaging online. Many of you here today have online profiles and writings and creative productions that you care deeply about. And rightly so. It’s your work. It’s you.

What if I told you that AI models have already been trained on enough copyrighted works to fill the Library of Congress 22 times over? For me, that makes it very simple: We need a legal mechanism that allows Americans to freely defend those creations. I say let’s empower human beings by protecting the very human data they create. Assign property rights to specific forms of data, create legal liability for the companies who use that data and, finally, fully repeal Section 230. Open the courtroom doors. Let the people sue those who take their rights, including those who do it using AI.

Third, we must add sensible guardrails to the emergent AI economy and hold concentrated economic power to account. These giant companies have made no secret of their ambitions to radically reshape our economic life. So, we ought to require transparency and reporting each time they replace a working man with a machine.

And the government should inspect all of these frontier AI systems, so we can better understand what the tech titans plan to build and deploy. 

Ultimately, when it comes to guardrails, protecting our children should be our lodestar. You may have seen recently how Meta green-lit its own chatbots to have sensual conversations with children—yes, you heard me right. Meta’s own internal documents permitted lurid conversations that no parent would ever contemplate. And most tragically, ChatGPT recently encouraged a troubled teenager to commit suicide—even providing detailed instructions on how to do it.

We absolutely must require and enforce rigorous technical standards to bar inappropriate or harmful interactions with minors. And we should think seriously about age verification for chatbots and agents. We don’t let kids drive or drink or do a thousand other harmful things. The same standards should apply to AI.

Fourth and finally, while Congress gets its act together to do all of this, we can’t kneecap our state governments from moving first. Some of you may have seen that there was a major effort in Congress to ban states from regulating AI for 10 years—and a whole decade is an eternity when it comes to AI development and deployment. This terrible policy was nearly adopted in the reconciliation bill this summer, and it could have thrown out strong anti-porn and child online safety laws, to name a few. Think about that: conservatives out to destroy the very concept of federalism that they cherish … all in the name of Big Tech. Well, we killed it on the Senate floor. And we ought to make sure that bad idea stays dead.

We’ve faced technological disruption before—and we’ve acted to make technology serve us, the people. Powered flight changed travel forever, but you can’t land a plane on your driveway. Splitting the atom fundamentally changed our view of physics, but nobody expects to run a personal reactor in their basement. The internet completely recast communication and media, but YouTube will still take down your video if you violate a copyright. By the same token, we can—and we should—demand that AI empower Americans, not destroy their rights . . . or their jobs . . . or their lives.

Don’t forget tomorrow—Artist Rights Roundtable on AI and Copyright at American University in Washington DC

Artist Rights Roundtable on AI and Copyright: 
Coffee with Humans and the Machines     

Join the Artist Rights Institute (ARI) and American University’s Kogod School of Business Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

This roundtable is particularly timely because both the Bartz and Kadrey rulings expose gaps in author consent, provenance, and fair licensing, underscoring an urgent need for policy, identifiers, and enforceable frameworks to protect creators.

 🗓️ Date: September 18, 2025
🕗 Time: 8:00 a.m. – 12:00 noon
📍 Location: Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016

🎟️ Admission: Free and open to the public. Registration required at Eventbrite. Seating is limited.

🅿️ Parking map is available here. Pay-As-You-Go parking is available in hourly or daily increments ($2/hour, or $16/day) using the pay stations in the elevator lobbies of the Katzen Arts Center, East Campus Surface Lot, the Spring Valley Building, Washington College of Law, and the School of International Service.

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by KOGOD Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing:

Speakers:

  • Dr. Moiya McTier, Senior Advisor, Human Artistry Campaign
  • Ryan Lehnning, Assistant General Counsel, International at SoundExchange
  • The Chatbot

Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation

  • Speaker: Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:

  • Erin McAnally, Executive Director, Songwriters of North America
  • Jen Jacobsen, Executive Director, Artist Rights Alliance
  • Josh Hurvitz, Partner, NVG and Head of Advocacy for A2IM
  • Kevin Amer, Chief Legal Officer, The Authors Guild

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, KOGOD School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

  • Speaker: George York, SVP, International Policy Recording Industry Association of America

🎟️ Admission:

Free and open to the public. Registration required at Eventbrite. Seating is limited.

🔗 Stay Updated:

Watch this space and visit Eventbrite for updates and speaker announcements.

@RickBeato on AI Artists

Is it a thing or is it disco? Our fave Rick Beato has a cautionary tale in this must-watch video: AI can mimic but not truly create art. As generative tools become more prevalent, he urges thoughtful curation, artist-centered policies, and an emphasis on emotionally rich, human-driven creativity–also known as creativity. h/t Your Morning Coffee, our favorite podcast.

@Unite4Copyright: Say “No” to Unlicensed AI Training

The biggest of Big Tech are scraping everything they can snarf down to train their AI–that means your Facebook, Instagram, YouTube, websites, Reddit, the works. Congress has to stop this. If you are as freaked out about this as we are, join the Copyright Alliance letter campaign here. It just takes a minute to send a personalized letter to Congress and the White House urging policymakers to protect creators’ rights and ensure fair compensation in the AI era.