
When AI Hallucinations Amount to Defamation, Who’s Liable?

A handful of high-profile defamation-by-AI-chatbot allegations against big tech firms are already stacking up.

Photo illustration by Connor Lin / The Daily Upside, Photo by Nastco via iStock


Everyone — or hopefully everyone — knows by now that artificial intelligence chatbots have a tendency to “hallucinate,” or ramble into responses that aren’t exactly factual or accurate. Sometimes, that just means an email containing an embarrassing error, or a paragraph in a college essay following a faulty train of thought. (Pro tip: Just do your own homework.)

But what happens when those hallucinations are not just factually incorrect but also hurt real lives and real businesses?

That’s the conundrum facing defamation law — whose principles are built around statements, spoken or written, made by living people — in the era of artificial intelligence. It’s made even more perplexing by the fact that legal boundaries for speech on the giant social media platforms of the past two decades are still being tested. Will the latest questions take the law just as long to figure out as the previous ones?

The Cost of Fame

A handful of high-profile allegations against AI firms are already stacking up. In a Georgia case dating to 2023, conservative talk radio host Mark Walters sued OpenAI after ChatGPT falsely told another journalist that Walters had been accused of embezzlement. In a somewhat similar case last April, right-wing influencer Robby Starbuck filed a lawsuit against Meta after its Llama-powered AI chatbot falsely claimed Starbuck was present when protestors stormed the US Capitol Building on January 6, 2021. Meanwhile, Republican Senator Marsha Blackburn excoriated Google’s open-weight large language model Gemma in a New York Post column in November, claiming it falsely accused her of crimes as well. Blackburn has thus far not pursued legal action against Google.

Both lawsuits have already worked through the system: Starbuck reached an undisclosed settlement with Meta in August, while the Walters case was dismissed in May, when a judge found that the journalist in question understood the chatbot’s response to be a hallucination, undermining Walters’ defamation argument.

But in Minnesota, another case is brewing involving a far less high-profile plaintiff and, seemingly, much more provable damages, which are key to successful defamation cases.

Electric Boogaloo

Wolf River Electric, a relatively low-profile solar contractor in the Land of 10,000 Lakes, alleges that Google’s Gemini model has repeatedly — and falsely — stated that the company was sued by the Minnesota attorney general over deceptive sales practices. The firm claims the false statements led to a wave of client cancellations and missed sales.

“We put a lot of time and energy into building up a good name,” founder Justin Nielsen recently told The New York Times. “When customers see a red flag like that, it’s damn near impossible to win them back.”

The existence of provable damages and Wolf River Electric’s relatively “ordinary” status are why some legal experts see the lawsuit as particularly interesting. Private individuals and businesses have a lower burden of proof in US defamation cases than public figures, including officials, celebrities and prominent companies.

“If you’re a public figure or a public official, then you have to show actual malice, i.e., intent,” Bernie Rhodes, a First Amendment and media law attorney at Lathrop GPM, told The Daily Upside. “But for the ordinary person or the ordinary company, going back to this Minnesota company, no intent is required.”

That makes the legal question much more clear-cut, he said: “It’s called generative AI, because it generates content. In that regard, generative AI is no different than traditional media, which generates content. And if traditional media gets it wrong, they can be sued for defamation … so when it happens with AI, the rules are the same.”

230 Reasons but AI Ain’t One

So does that mean AI companies may soon find themselves slammed with a tidal wave of defamation lawsuits, assuming their chatbots continue to make Wolf River Electric-esque errors? Maybe, maybe not.

“Liability for defamatory statements is a threat to AI companies just as taxi-licensing laws were a threat to Uber,” attorney Dave Wolkowitz told The Daily Upside.

When speaking to The New York Times earlier this year, Nina Brown, a communications professor and media law expert at Syracuse University, predicted that tech company defendants will try to settle any claim on which they might be vulnerable before it goes to court: “They don’t want the risk.”

Translation: Giant AI companies with multi-trillion-dollar market caps will be fine in the long run, paying out the odd settlement to possibly defamed local businesses.

Unless, of course, national lawmakers establish federal frameworks first.

Policing AI speech is “not the kind of issue that you’d want different juries deciding on throughout the country,” Yale Law professor Robert Post told Politico earlier this year. “You’d want national standards laid out by people who are technically well-informed about what makes sense and what doesn’t.”

Thanks to Section 230, the provision of the somewhat contentious 1996 Communications Decency Act that immunized websites from liability for posts by third-party users, tech giants are intimately familiar with leaning on national frameworks.

Just Algo With It: In layman’s terms, Section 230 means a plaintiff can sue a Facebook user for defamation posted on Facebook, but not Facebook itself. Because chatbots require human input — bots respond to specific queries, after all — it’s possible that AI firms could eventually argue that Section 230 covers the output of chatbots as well. However, multiple lawyers who spoke to The Daily Upside suggested such an argument is unlikely to hold water.

“Are AI companies going to argue that their chatbots are simply repeating what some other user said? Probably not, given that AI companies are motivated to position their chatbots as increasingly closer to actually ‘thinking,’” Wolkowitz said.

But social media companies have also sought to use Section 230 to shield themselves from liability for content their algorithms recommend — a development that may be more applicable to the AI moment.

Consider the 2023 Supreme Court case Gonzalez v. Google, which questioned whether Section 230 shielded YouTube from liability for algorithmically recommending terrorist-recruitment videos.

In the end, SCOTUS determined that other flaws in the complaint made it unnecessary to consider the Section 230 issues that the family of Nohemi Gonzalez — a 23-year-old American killed in a 2015 ISIS attack in Paris — sought to resolve. That leaves the door open for tech giants to attempt using Section 230 as a defense when sued over the ideas and factual statements their chatbots generate.

“I would fully expect these companies to make the 230 defense, but I don’t think it works,” Rhodes said. “[Google’s search engine] is not generating anything. Its algorithm is producing search results, but it’s not publishing content.” Users still have to click links and make decisions for themselves. “But the whole selling point of chatbots is they’re going to do the work for me, and they’re going to generate the results, and I’m going to see them.”

Supreme Court Justice Neil Gorsuch pondered the same question during oral arguments in Gonzalez v. Google: “Artificial intelligence generates poetry. It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected.”

Old v. New: In other words, defamation law in the AI age may circle back to where it’s always been. “When blogs first came out, we thought, ‘Oh no, how is defamation law going to apply?’ When social media came out, we wet our pants over how defamation law applies,” Rhodes said. “But defamation law is defamation law. It’s just a matter of fitting traditional defamation law into new formats.”

For now, perhaps, the answer could be found in an old lesson rather than a new one: Don’t believe everything you read on the internet.
