
Whatever Happened to All Those AI Copyright Lawsuits?

Last Tuesday, content conglomerate Thomson Reuters notched a big legal win against AI firm ROSS. Is it a sign of what’s to come?

[Photo illustration by Connor Lin / The Daily Upside; photo by Grinvalds via iStock]


Last Tuesday, content conglomerate Thomson Reuters notched a big legal win. A judge ruled in its favor in a case filed back in 2020 — two years before the launch of ChatGPT — against a legal AI startup called ROSS Intelligence. Thomson Reuters was ahead of its time: In the years since, publishers, authors, and even YouTubers have filed lawsuits against AI companies following the wave of AI-generated products that came out post-ChatGPT, and the pace of litigation keeps increasing.

According to the blog of law firm Debevoise & Plimpton, more corporate plaintiffs filed complaints against AI companies last year than in any previous year. The momentum isn’t dissipating in 2025: Last week saw a major lawsuit filed by a clutch of publishers, including Condé Nast, The Guardian, and Forbes, against AI startup Cohere. The ruling for Thomson Reuters will have a ripple effect on that growing mountain of litigation, although it’s not an unmitigated win for plaintiffs trying to take AI-makers down a peg.

All’s Fair Use

Specifically, Thomson Reuters sued ROSS for copyright infringement of Thomson Reuters’ legal search tool, Westlaw. ROSS had previously asked to license Westlaw’s material to train its AI, and Thomson Reuters had refused. Instead, ROSS used data from a third-party company that had itself relied on Westlaw resources, so it essentially ended up ingesting Westlaw data anyway and then started regurgitating Westlaw’s “headnotes,” i.e., short summaries of points of law that appear in specific cases.

One of ROSS’ legal defenses was the “fair use” doctrine. In the US, fair use means you’re allowed to use copyrighted materials for specific purposes such as parody or — more relevantly for AI companies — research. But fair use has its limitations, and four factors help judges decide whether it applies:

  • The purpose and character of use: This encompasses elements including how “transformative” the new work is, i.e., whether it simply reproduces something copyrighted, or uses it to make something new.
  • Nature of the copyrighted work: Is it a highly original work like a piece of art or literature?
  • Substantiality of the copyrighted work used: Just how much of the copyrighted work was used or reproduced?
  • Market effects of the use: Does unlicensed use impact the market in which the original work operates?

ROSS actually passed on the second and third factors, but it failed on the first and fourth, with the judge flagging the fourth as the most decisive: ROSS’ product, he said, was “meant to compete with Westlaw by developing a market substitute.”

“This case potentially shows a small chink in the legal arguments brought forward by AI developers, which effectively all rest on the US ‘fair use’ doctrine,” Felix Simon, a research associate at the Oxford Internet Institute specializing in AI and the media, told The Daily Upside. “This ruling at least hints at the opportunity that this doctrine cannot be used as a blanket defense in such cases,” he added.

Not Fair Everywhere: While fair use is a part of American copyright law, AI companies are looking at litigation all over the world. Dr. Alina Trapova, a lecturer in law at University College London, told The Daily Upside that US rulings wouldn’t necessarily spill over into how AI copyright cases are treated overseas. “In the EU and the UK, we do not have such an open-ended defense to copyright infringement, so the implications of those US decisions are there for that jurisdiction,” said Trapova. “That said, business and tech does not know borders, so I will not be surprised to see some shift in opinions when it comes to AI as a result of the US decisions. These do not affect the law in force in other places, but might encourage or discourage policymakers when they are revising the laws in place.”

Prehistoric AI

The ruling in Thomson Reuters v. ROSS is not a slam dunk for anti-AI plaintiffs, however, in large part because the AI that ROSS was using was not the generative AI we’ve gotten used to in a post-ChatGPT world. It was not able to riff and hallucinate. “AI companies will likely claim that their models provide something that is not a direct regurgitation of the original – as was the case for Ross – but something ‘new’, and that ‘fair use’ should therefore apply,” Simon said.

On a more practical level, ROSS also shut down in 2021 under the costs of litigation. Behemoths like OpenAI, Microsoft, and Meta have much deeper pockets, so they can throw more lawyers at the problem for longer.

Point of Originality: One point made by the judge in the case was that Westlaw’s copyrighted material wasn’t particularly original. Though we’re sure the lawyers tried to inject a little creative flair into their headnotes, neat summaries of points of law aren’t the most artistic type of prose out there. For plaintiffs protecting more original, creative products, that may work in their favor when trying to banish any “fair use” defense.

Who Said Anything About Fair?

The re-shading of what fair use means in an AI-ified world is significant, but plaintiffs are not solely relying on copyright claims to take AI companies to court. In a blog post reflecting on AI litigation over 2024, Debevoise & Plimpton noted that plaintiffs were using a more diverse set of legal attacks than just copyright infringement: “More recent cases have advanced a variety of claims, including trademark dilution, false advertising, right of publicity, and unfair competition claims that pose a new set of challenges for AI developers and companies that use generative AI outputs in their advertising and elsewhere,” the blog post said.

This was borne out when Condé Nast and other plaintiffs sued Cohere last week. The complaint includes an accusation of copyright infringement — specifically, that Cohere used their work as training data and then reproduced it word-for-word in some instances — but also one of trademark dilution. The publishers said that Cohere attributed hallucinated (i.e., totally false) material to specific publishers, tarnishing their brands as news outlets. Cohere isn’t the only company with that issue:

  • Last month, publishers complained to Apple after its AI news summary feature pushed out totally fabricated headlines (e.g., that Luigi Mangione, the suspect in the killing of an insurance executive in New York, had shot himself) and attributed them to outlets including the BBC and The New York Times. Apple suspended the feature.
  • In June last year, Wired accused AI search engine startup Perplexity of not only scraping and reproducing its articles, but also of splicing in falsehoods and attributing them to the tech magazine. The same month, Forbes accused Perplexity of “cynical theft” after it published a story that looked extremely similar to Forbes’ reporting on a drone project led by former Google CEO Eric Schmidt.

Maybe imitation is the sincerest form of flattery, but half-baked imitation is just as easy to litigate.
