Defamation and AI
Law, disrupted | Legal Interviews

17 min · 105 · 2 days ago
Episode Summary
Source: Xiaoyuzhou (小宇宙)
John is joined by Robert M. (“Bobby”) Schwartz, partner in Quinn Emanuel’s Los Angeles office and co-chair of the firm’s Media & Entertainment Industry Practice, and Marie M. Hayrapetian, associate in Quinn Emanuel’s Los Angeles office. They discuss recent cases testing whether the outputs of large language models can give rise to defamation claims.
In one recent Georgia case, a journalist asked ChatGPT about a lawsuit and received a response stating that a company executive was an embezzler, even though the lawsuit contained no such allegation and the executive was not an embezzler. In another case, Google was sued after its AI Overview tool falsely stated that a business was being sued by the Minnesota attorney general for deceptive practices, a statement that allegedly caused up to $200 million in lost sales. Other examples involve sexualized deepfake images allegedly generated from ordinary photos, creating reputational and privacy harms.
Defamation law assumes a human speaker who publishes a false factual statement with some degree of fault. AI systems complicate that framework. With LLM outputs, it is unclear who the speaker is: the platform, the data scientists behind it, the user who wrote the prompt, or the model itself. It is also difficult to fit AI output into doctrines requiring intent, knowledge, or reckless disregard, especially in public figure cases that require proof of actual malice.
In the Georgia case, the defense won summary judgment. The court concluded that the output would not reasonably be understood as stating actual facts because the system warned users about its limitations and potential for error. That reasoning may be vulnerable on appeal, but it illustrates one approach courts may adopt to reject such claims.
Republication may also create liability: if someone republishes defamatory AI output as fact, ordinary defamation principles could apply. An unresolved issue is whether the Section 230 safe harbor protects platforms when AI output is generated through the interaction between a user's prompt and the model.
Current defamation law might ultimately be a poor fit for AI-generated speech. Assessing liability for AI-generated speech may eventually require a different legal framework, such as product liability law.
