Hallucinations Create Complications When AI Goes to Court
CIOs need to talk to their legal teams about AI “clarity, boundaries, and accountability,” one expert said.

Judges are losing their patience as AI mistakes pile up in US courts.
The number of cases in which the technology has generated incorrect information or cited nonexistent cases to support lawyers’ arguments is both alarming and increasing. In early June, the Utah Court of Appeals found that two lawyers had breached procedural rules by submitting a legal document citing multiple cases that did not exist.
That wasn’t an isolated incident. The Washington Post reported that AI-generated court documents containing hallucinations are on the rise across the country. These incidents are also tripping up large tech firms, such as model provider Anthropic, which admitted in mid-May to using an erroneous citation generated by its Claude AI chatbot in its legal battle with Universal Music Group and other publishers. One researcher has found more than 150 instances of AI hallucinations in documents filed in US courts since 2023, with 103 of those occurring this year.
And using AI incorrectly in this context has consequences, including referrals to professional bodies and responsibility boards, fines of up to 1% of case value, court warnings, class action petitions, and sanctions costing thousands of dollars.
AI is not a legal gray area anymore, and regulators are starting to enforce disclosures, transparency, and bias mitigation requirements, Mark G. McCreary, partner and chief AI and information security officer at Fox Rothschild, told CIO Upside.
Clarifying where attorney-client privilege or trade secret risks arise when using external tools is a must, according to McCreary. That involves determining what data is being put into the AI tools that legal teams use.
“As CIO, I’d focus the conversation (with compliance officers and lawyers) on clarity, boundaries, and accountability,” he said.
Innovation vs. Risks
A 2024 Thomson Reuters survey found that US lawyers using AI could save a combined 266 million hours, which the report said would translate into $100,000 in new billable time per lawyer each year. The study also found that only 16% of lawyers think that using AI to draft documents is “going too far.”
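The per-lawyer dollar figure is easy to sanity-check. The sketch below assumes the 266 million hours is an annual, profession-wide total and uses an assumed US lawyer headcount and billable rate that are not from the survey; it simply shows how the aggregate hours map to roughly $100,000 per lawyer:

```python
# Back-of-envelope check of the per-lawyer value of hours saved.
# The lawyer count and billing rate below are illustrative assumptions,
# not figures from the Thomson Reuters survey.

TOTAL_HOURS_SAVED = 266_000_000  # aggregate hours saved across US lawyers (survey figure)
US_LAWYERS = 1_300_000           # assumed number of practicing US lawyers
BILLING_RATE_USD = 500           # assumed average billable rate per hour

hours_per_lawyer = TOTAL_HOURS_SAVED / US_LAWYERS
new_billable_value = hours_per_lawyer * BILLING_RATE_USD

print(f"~{hours_per_lawyer:.0f} hours saved per lawyer per year")
print(f"~${new_billable_value:,.0f} in potential new billable time per lawyer")
```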
Even if a company has AI legal policies in place, workers could be ignoring them, Wyatt Mayham, lead AI consultant at Northwest AI Consulting, told CIO Upside. “If policies exist, but aren’t enforced or tracked, they’re worthless,” he said.
McCreary advised companies to establish a light but structured governance framework. “The point is not to restrict innovation, but to track and guide usage,” McCreary said:
- Having legal teams self-register the AI apps they use creates logs that reveal who used AI, for what purpose, and with what type of data, establishing a system of record (a minimal sketch of such a record follows this list). Logs also can be used to ensure no sensitive data is being fed to non-compliant tools, or used in a way that violates ethics rules or client contracts, McCreary said.
- Legal teams need to be continually updated on how AI tool capabilities – and risks – evolve, he said. “An AI feature that’s benign today might add model training next quarter.”
- McCreary noted that an AI recognition program can also incentivize transparency and caution in using new tools, while reinforcing positive behavior. These programs can create a culture of AI literacy, not just compliance.
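To make the system-of-record idea concrete, here is a minimal sketch of what a self-registration log entry and a basic data-handling check might look like. The field names, tool name, and data classifications are illustrative assumptions, not part of McCreary’s framework or any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One entry in a hypothetical AI tool system of record."""
    user: str                  # who used the tool
    tool: str                  # which registered AI application was used
    purpose: str               # what it was used for (e.g., drafting, research)
    data_classification: str   # e.g., "public", "internal", "client-confidential"
    client_matter: str | None = None  # matter number if the work is client-related
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical policy: which data classes each registered tool may receive.
APPROVED_CLASSES_BY_TOOL = {
    "contract-drafting-assistant": {"public", "internal"},
}

def is_compliant(record: AIUsageRecord) -> bool:
    """Flag usage that sends data to a tool not approved for that data class."""
    allowed = APPROVED_CLASSES_BY_TOOL.get(record.tool, set())
    return record.data_classification in allowed
```

A register along these lines is what lets a governance team answer, after the fact, who used which tool on which client’s data, and whether that combination was ever approved.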
As flawed AI inputs in legal cases continue to emerge, the reputations of firms, companies, and clients are at stake. Courts have already made it clear that their tolerance for the improper use of AI and AI hallucinations is low. Despite the push across industries to adopt AI, the damage from misuse may outweigh the benefits.
“Waiting for the law to ‘catch up’ is no longer an excuse; enterprise AI governance is not just an IT issue, it’s a legal, reputational and strategic issue,” said McCreary.