
By Robert K. Jenner and Justin A. Browne
Artificial Intelligence (AI) is transforming legal practice, enhancing efficiency and fostering innovation. From legal research tools like Westlaw AI Co-Counsel to document automation with Adobe AI and voice transcription via Otter AI, technology has significantly streamlined litigation and trial preparation. However, these advancements come with substantial responsibilities and potential risks.
Recent court rulings have underscored the severe consequences of misusing AI in legal practice. This article explores the dangers of overreliance on AI, ethical considerations, and best practices for safely integrating AI into personal injury law firms.
The Perils of Overreliance: Lessons from Recent Court Cases
While AI offers remarkable efficiencies, its misuse has led to real-world consequences for legal professionals. Several high-profile cases highlight the dangers of overreliance on AI without proper verification. These cases demonstrate how attorneys, trusting AI-generated content without cross-checking for accuracy, faced sanctions, disciplinary actions, and reputational damage. The following examples underscore the critical importance of maintaining professional diligence, even when leveraging advanced technology.
- Wadsworth v. Walmart Inc. and Jetson Electric Bikes, LLC, Civ. No. 2:23-cv-00118-KHR (D. Wyo. Feb. 6, 2025): In a head-shaking case, plaintiffs’ attorneys cited nine cases in their motions in limine; only one was real. The rest were AI-generated hallucinations. The court issued an order to show cause why the attorneys should not face sanctions (a saga still in progress).
- Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023): Attorneys used ChatGPT to draft a brief without verifying citations. When questioned by the court, they asked ChatGPT if the cases were real and submitted affidavits attesting to fabricated case law. The court sanctioned them.
- Gauthier v. Goodyear Tire & Rubber Co., Civ. No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024): In November 2024, a Texas lawyer was sanctioned for submitting AI-generated legal arguments, including citations to nonexistent cases, without proper fact-checking.
- People v. Crabill, 2023 WL 8111898 (Colo. Sup. Ct. Nov. 22, 2023): An attorney lost his license for a year after failing to verify AI-generated case law, misrepresenting it to the court, and violating professional conduct rules on competence, diligence, candor toward the tribunal, and honesty. Courts are increasingly unwilling to tolerate lawyers who fail to verify AI output.
Why AI Hallucinates and the Legal Risks It Poses
AI models generate text based on patterns rather than true understanding. When asked for legal citations, AI may “fill in the gaps” by creating plausible but fake case law. This phenomenon, known as AI hallucination, can lead to severe professional consequences if attorneys fail to verify their sources.
What makes hallucinations particularly dangerous is their convincing nature. AI-generated citations often mimic the style, structure, and language of authentic legal documents, making them difficult to distinguish from legitimate sources at first glance. This veneer of credibility can lull even seasoned attorneys into a false sense of security, increasing the risk of submitting inaccurate information to the court. The key to mitigating this risk lies in rigorous cross-checking with trusted legal databases and maintaining a healthy skepticism toward AI-generated content.
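How convincing can a fabricated citation be? The toy sketch below (ours, for illustration; not how any real AI system works internally) assembles citations from plausible-sounding parts, much as a language model stitches together statistical patterns. The party names echo the fabricated cases from Mata; every output looks authentic at a glance, yet none refers to a real case.

```python
import random

# Toy illustration: build "citations" from plausible parts, loosely mimicking
# how a language model fills in statistical patterns. The party names echo the
# fabricated cases from Mata v. Avianca; no output refers to a real case.
PLAINTIFFS = ["Varghese", "Martinez", "Shaboon", "Petersen"]
DEFENDANTS = ["China Southern Airlines", "Delta Air Lines", "EgyptAir", "Iran Air"]
REPORTERS = ["F.3d", "F. Supp. 2d", "So. 2d"]

def fake_citation() -> str:
    """Return a convincing-looking but nonexistent case citation."""
    return (f"{random.choice(PLAINTIFFS)} v. {random.choice(DEFENDANTS)}, "
            f"{random.randint(100, 999)} {random.choice(REPORTERS)} "
            f"{random.randint(1, 1500)} ({random.randint(1995, 2023)})")

for _ in range(3):
    print(fake_citation())  # e.g., Shaboon v. EgyptAir, 742 F.3d 913 (2011)
```

Nothing about the output signals fabrication; only a lookup in a trusted database does.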
Advancements in AI Accuracy: Can RAG Fix This?
Retrieval-Augmented Generation (RAG) is an innovation designed to reduce hallucinations by pulling information from verified sources before generating text. RAG is promising, but studies (including one from Stanford University) show that hallucinations still occur. AI is improving but remains unreliable as a sole legal authority.
Despite its potential, RAG is not a silver bullet. Its effectiveness heavily depends on the quality of the external data sources it references. If the data repositories contain outdated, biased, or inaccurate information, RAG can perpetuate or even amplify these issues. Moreover, while RAG can help reduce hallucinations, it doesn’t eliminate the need for human oversight. Attorneys must continue to critically evaluate AI-generated content, verifying both the accuracy of the sources and the context in which the information is presented.
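For readers curious about the mechanics, here is a deliberately simplified sketch of the retrieval step, written in plain Python. The corpus, the similarity scoring, and the prompt format are our illustrative choices, not any vendor’s implementation. Notice what RAG does and does not buy you: the model is steered toward verified text, but a human must still confirm that the retrieved passages are current, accurate, and on point.

```python
from collections import Counter
import math

# Simplified RAG retrieval: rank verified passages against the query by
# bag-of-words cosine similarity, then build a prompt that confines the
# model to those passages. Corpus and prompt format are illustrative only.
VERIFIED_CORPUS = [
    "ABA Model Rule 3.3 requires candor toward the tribunal and prompt "
    "correction of false statements of law or fact.",
    "ABA Model Rule 1.1 requires competence, including awareness of the "
    "benefits and risks of relevant technology.",
    "Fed. R. Civ. P. 11(b) provides that by presenting a filing, an attorney "
    "certifies that the legal contentions are warranted by existing law.",
]

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k verified passages most similar to the query."""
    q = _vector(query)
    ranked = sorted(VERIFIED_CORPUS, key=lambda p: _cosine(q, _vector(p)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Constrain generation to retrieved, verified text."""
    sources = "\n".join(f"- {p}" for p in retrieve(query))
    return ("Answer using ONLY the sources below; if they do not answer the "
            f"question, say so.\nSources:\n{sources}\nQuestion: {query}")

print(build_prompt("What duty of candor does a lawyer owe the tribunal?"))
```

Even here, garbage in means garbage out: if a passage in the corpus were outdated, the model would faithfully repeat the outdated rule.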
Ethical and Professional Responsibilities
The American Bar Association (ABA) Model Rules of Professional Conduct emphasize the importance of verifying AI-generated content:
- Rule 1.1 (Competence): Lawyers must stay informed about new technology and its risks.
- Rule 1.3 (Diligence): Attorneys must verify all court submissions.
- Rules 5.1 & 5.3 (Supervision): Lawyers are responsible for AI-generated content just as they are for work produced by junior associates.
- Rule 3.3 (Candor Toward the Tribunal): Any false statements of law or fact must be corrected immediately.
Failure to adhere to these ethical obligations not only jeopardizes the integrity of legal proceedings but also exposes attorneys to potential disciplinary actions. By maintaining competence, diligence, and proper supervision, lawyers can ensure that the integration of AI enhances their practice without compromising professional standards.
Best Practices for Using AI in Personal Injury Law Firms
Effectively incorporating AI into legal practice requires a strategic approach. While AI can streamline research, document drafting, and data analysis, its outputs must be handled with caution. Attorneys should not view AI as a replacement for legal judgment but as a tool to support and enhance their work. The following best practices are designed to help personal injury law firms leverage AI’s benefits while minimizing risks and maintaining the highest professional standards.
- Check Local Rules: Some courts require AI disclosures in filings. Know the landscape before you begin.
- Use AI as a Starting Point: Treat AI-generated content as a draft, not the final product.
- Select the Right Tool: Understand AI’s training data and scope. Not all AI platforms are suitable for legal analysis.
- Verify Citations: Always confirm case law using Westlaw or LexisNexis. AI-generated citations must be cross-checked (see the sketch after this list).
- Implement a Review Process: Establish an AI review protocol, just as you would for junior associates.
- Stay Informed: Continuously educate yourself and your team about AI’s evolving capabilities and limitations.
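As a concrete aid to the citation-verification step above, the sketch below pulls anything that looks like a reporter citation out of a draft so that each one can be looked up in Westlaw or LexisNexis before filing. The regular expression is a rough illustrative pattern covering a few common reporters, not a complete Bluebook parser, and it deliberately flags rather than validates: a matching string may still be fabricated, as the second example (one of the fake cites from Mata) shows.

```python
import re

# Rough pattern for a few common reporter citations, e.g.,
# "678 F. Supp. 3d 443 (S.D.N.Y. 2023)". Illustrative only; not a
# complete Bluebook parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                           # volume
    r"(?:F\.(?:\s?Supp\.)?(?:\s?[234]d)?|U\.S\.|S\.\s?Ct\.)"  # reporter
    r"\s+\d{1,4}"                                             # first page
    r"(?:\s+\([^)]*\d{4}\))?"                                 # court/year
)

def citations_to_verify(draft: str) -> list[str]:
    """List every citation-like string so a human can check each one."""
    return [m.group(0) for m in CITATION_RE.finditer(draft)]

draft = (
    "As held in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), "
    "and Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
)
for cite in citations_to_verify(draft):
    print("VERIFY BEFORE FILING:", cite)
```

The script cannot tell the real Mata citation from the fabricated Varghese one; that judgment call belongs to a lawyer with a trusted database.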
Conclusion: AI Is a Tool, Not a Legal Authority
AI is here to stay, embedded in the tools legal professionals use daily. Ignoring AI entirely is not the solution—opposing counsel will certainly use AI-powered tools to their advantage. However, misusing AI or failing to properly verify its output can lead to serious consequences, including sanctions, lost cases, and disciplinary actions.
The lesson from Wadsworth, Mata, and Gauthier is clear: AI is a creative assistant, not an infallible oracle. The ultimate responsibility for accuracy, competence, and ethical compliance rests with the attorney.
By following best practices and staying vigilant, personal injury lawyers can harness AI’s potential without compromising professional integrity.
This piece was co-authored with Justin A. Browne, reviewed and tweaked using ChatGPT, and then further edited by humans.