AI ‘Hallucinations’ in Court Filings Pose New Risks for Lawyers

The rise of artificial intelligence in the legal field is proving to be a double-edged sword. While AI-powered tools have significantly reduced the time required for legal research and drafting, they have also introduced an alarming new risk—generating false information, commonly referred to as AI ‘hallucinations.’

This issue was thrust into the spotlight recently when Morgan & Morgan, a well-known U.S. personal injury law firm, sent an urgent internal email warning its more than 1,000 lawyers about the dangers of relying on AI-generated case law. The message came after a federal judge in Wyoming threatened to sanction two of the firm’s attorneys for citing fictitious case law in a lawsuit against Walmart. One of the implicated lawyers admitted to using an AI tool that invented the non-existent citations, describing it as an unintentional mistake.

A Growing Concern Across U.S. Courts

The incident at Morgan & Morgan is not an isolated one. Over the past two years, courts across the United States have questioned or disciplined attorneys in at least seven cases involving AI-generated misinformation. As AI-driven chatbots like ChatGPT become more widely used in the legal profession, concerns about the integrity and reliability of legal filings have intensified.

The Walmart lawsuit is particularly noteworthy because it involves a major corporate defendant and a prominent law firm, yet similar cases have emerged in different areas of litigation. This trend underscores how the increasing reliance on AI for legal research has also heightened the risk of introducing false case law into official proceedings.

When asked for comment, a spokesperson for Morgan & Morgan did not respond, and Walmart declined to issue a statement. The Wyoming judge has not yet determined whether disciplinary action will be taken against the attorneys involved in the Walmart case, which centers on an allegedly defective hoverboard toy.

The Challenge of AI in Legal Research

The advent of generative AI has revolutionized how lawyers conduct research and draft legal briefs. Law firms are increasingly partnering with AI vendors or developing their own AI-based tools to streamline legal work. A 2023 survey by Thomson Reuters, the parent company of Reuters, found that 63% of lawyers had used AI for work-related tasks, while 12% relied on it regularly.

Despite its efficiency, generative AI is inherently flawed. These models generate responses based on statistical patterns rather than verified facts, leading to the risk of false information—what AI researchers call “hallucinations.” This issue is particularly problematic in legal settings, where accuracy is paramount.

Legal ethics rules make it clear that attorneys are responsible for vetting and verifying their court filings, regardless of how they were generated. The American Bar Association (ABA) has warned its 400,000 members that even unintentional misstatements produced by AI tools can have serious professional consequences.

Andrew Perlman, dean of Suffolk University’s law school and a proponent of AI’s role in legal work, argues that lawyers who fail to verify AI-generated citations are displaying “incompetence, pure and simple.”

Notable Legal Sanctions Due to AI Misuse

The misuse of AI in legal filings has already led to financial and professional penalties. In June 2023, a federal judge in New York fined two attorneys $5,000 after they submitted a personal injury case against an airline containing non-existent case citations generated by AI.

Other high-profile cases include:

  • Michael Cohen’s Legal Filing (2023): The former lawyer for Donald Trump inadvertently provided his attorney with fake case citations generated by Google’s AI tool Bard, which were subsequently submitted in court. Though no sanctions were issued, the judge called the mistake “embarrassing.”
  • Texas Sanction (November 2024): A federal judge in Texas fined a lawyer $2,000 and ordered them to attend a training course on generative AI in the legal field after they included fake case citations in a wrongful termination lawsuit.
  • Minnesota Deepfake Case (January 2025): A misinformation expert destroyed his credibility in court after citing fabricated, AI-generated case law in a lawsuit over a “deepfake” parody of Vice President Kamala Harris.

The Need for AI Literacy in the Legal Profession

While the growing reliance on AI tools has sparked controversy, legal scholars argue that the technology itself is not the issue—rather, it is the lack of AI literacy among lawyers. Harry Surden, a professor at the University of Colorado Law School, stresses that attorneys must take time to learn both the strengths and limitations of AI tools before integrating them into their practice.

“Lawyers have always made mistakes in filings, even before AI,” Surden points out. “This is not new.”

However, given the speed at which AI-generated misinformation can spread, law firms must implement stricter guidelines to ensure that any AI-assisted legal research undergoes rigorous verification before submission. Morgan & Morgan’s warning to its lawyers is just one example of how law firms are attempting to mitigate these risks proactively.

The Future of AI in Legal Work

Despite these challenges, AI is likely to remain an integral part of the legal profession. Industry leaders believe that as AI technology evolves, more robust safeguards and verification mechanisms will be implemented to reduce the risk of AI hallucinations in legal filings.

Some law firms are now developing custom AI tools designed specifically for legal research, with built-in fact-checking capabilities. Meanwhile, organizations like the American Bar Association continue to provide new ethical guidelines to help attorneys navigate the use of AI responsibly.

As courts continue to grapple with the legal implications of AI-generated errors, attorneys will need to exercise greater caution when relying on AI tools for research and drafting. While AI has the potential to improve efficiency, failing to verify its outputs could result in career-ending consequences for legal professionals.

In the fast-changing world of AI and law, one thing is clear: lawyers must adapt, or risk facing serious repercussions in the courtroom.

© 2024 The Technology Express. All Rights Reserved.