Unlicensed Law Clerk Fired for AI-Hallucinated Citation

In a first-of-its-kind ruling in Utah, a recent law school graduate working as an unlicensed law clerk was terminated after a court filing he drafted using ChatGPT contained fabricated legal precedent. This landmark case underscores the perils of unverified AI output in high-stakes legal proceedings and spotlights the urgent need for robust AI governance in law firms.
Background: A Landmark Sanction in Utah Courts
Last month, the Utah Court of Appeals issued sanctions against attorneys Richard Bednar and Douglas Durbano after discovering multiple miscited cases and at least one entirely fictitious citation, "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789," in a petition for interlocutory appeal. Judge Mark Kouris emphasized the attorneys' failure to fulfill their gatekeeping responsibilities under Utah Rule of Professional Conduct 3.3(a), which prohibits knowingly making false statements of law to a tribunal and, as the court read it, obliges counsel to verify the accuracy of citations before filing.
The Surge of AI in Legal Drafting
According to a 2025 Thomson Reuters survey, 82% of U.S. law firms are exploring generative AI tools such as GPT-4 Turbo, Casetext CoCounsel, and Lexis+ AI to streamline tasks like contract analysis, e-discovery, and brief drafting. Without grounding in a verifiable database such as Westlaw Edge, however, or a retrieval-augmented generation (RAG) layer that restricts the model to retrieved sources, large language models (LLMs) remain prone to producing plausible yet non-existent case law.
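To make the grounding idea concrete, here is a minimal retrieval-augmented generation sketch in Python. It is framework-agnostic, and both `search_caselaw` and `llm_complete` are hypothetical stand-ins (for a legal-database query and an LLM client call, respectively), not any vendor's actual API:

```python
# Minimal RAG sketch: the model may cite ONLY cases returned by a
# trusted retriever, so every citation maps back to a real record.
# `search_caselaw` and `llm_complete` are hypothetical stand-ins.

def search_caselaw(query: str, k: int = 5) -> list[dict]:
    """Return verified case records, e.g. from a Westlaw/Lexis API."""
    raise NotImplementedError("wire this to a real legal database")

def llm_complete(prompt: str) -> str:
    """Call any LLM completion endpoint."""
    raise NotImplementedError("wire this to an LLM provider")

def grounded_brief_section(issue: str) -> str:
    cases = search_caselaw(issue)
    context = "\n".join(
        f"[{i}] {c['name']}, {c['citation']}: {c['holding']}"
        for i, c in enumerate(cases)
    )
    prompt = (
        "Draft an argument on the issue below. Cite ONLY the numbered "
        "cases provided; if none are on point, say so.\n\n"
        f"Issue: {issue}\n\nVerified authorities:\n{context}"
    )
    return llm_complete(prompt)
```

The key design choice is that the prompt never asks the model to recall authority from memory; recall is delegated to the retriever, and the model's job shrinks to synthesis over verified inputs.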
Technical Deep Dive: How LLMs Hallucinate Citations
LLMs operate on a next-token prediction mechanism, synthesizing text based on statistical patterns in their training data. When asked for citations, they piece together format templates—volume number, reporter abbreviation, page number—even if no matching record exists. Without grounding in a structured legal database or real-time verification layer, the model “hallucinates” plausible references.
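The practical upshot is that a fabricated citation can be perfectly well-formed, which is exactly what lets it slip past a casual read. The short Python check below (the regex is illustrative, not a full Bluebook parser) shows the fake cite from this case passing a pure format test:

```python
import re

# Shape of a parallel Utah citation:
# "Name v. Name, YEAR UT App NUM, VOL P.3d PAGE"
CITE_SHAPE = re.compile(
    r"(?P<name>.+? v\. .+?), "
    r"(?P<year>\d{4}) UT App (?P<num>\d+), "
    r"(?P<vol>\d+) P\.3d (?P<page>\d+)"
)

fake = "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789"
match = CITE_SHAPE.fullmatch(fake)
assert match is not None  # the fabricated cite is flawlessly formatted

# A format check can never catch this; only a lookup against a real
# reporter or database can confirm the case actually exists.
print(match.groupdict())
```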
Case Study: “Royer v. Nelson”
When prompted by Ars Technica to summarize the fictitious "Royer v. Nelson," ChatGPT provided a generic dispute narrative with no headnotes, docket history, or judicial reasoning. This gap highlighted the need for law firms to deploy research tools with built-in citation validation, such as the KeyCite and Shepard's services integrated into Westlaw and Lexis.
“This case involves a dispute between two individuals, Royer and Nelson, in the Utah Court of Appeals,” ChatGPT responded, offering no statutory background or legal analysis.
Organizational Response and AI Policy Implementation
In the aftermath, the law firm responsible admitted the clerk had not disclosed his reliance on ChatGPT and lacked an AI usage policy. Within weeks, the firm enacted a comprehensive AI governance framework, including:
- Mandatory AI disclosure logs documenting any generative-model usage (a minimal log-entry sketch follows this list).
- Dual-verification of all citations against Westlaw or LexisNexis by a licensed attorney.
- Quarterly AI risk assessments aligned with ABA Formal Opinion 512 (2024) on generative artificial intelligence.
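On the first item, a disclosure log needs little more than an append-only record of who used which model on which matter. Here is a minimal sketch; the field names are illustrative, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDisclosureEntry:
    """One record of generative-AI use on a client document."""
    author: str                      # person who ran the tool
    matter_id: str                   # firm's matter/docket identifier
    tool: str                        # e.g. "ChatGPT"
    purpose: str                     # e.g. "first draft of argument section"
    reviewed_by: str | None = None   # licensed attorney who verified output
    timestamp: str = ""

    def __post_init__(self) -> None:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_disclosure(entry: AIDisclosureEntry,
                   path: str = "ai_disclosures.jsonl") -> None:
    # Append-only JSON Lines file; one entry per line for easy auditing.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_disclosure(AIDisclosureEntry(
    author="jdoe", matter_id="2025-0142",
    tool="ChatGPT", purpose="research memo first draft",
))
```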
Expert Opinions on AI Ethics in Legal Practice
“AI can enhance due diligence and contract review, but it must be anchored to authoritative data sources,” said Professor Jane Smith of Stanford Law School. “Implementing retrieval-augmented generation and embedding fact-check modules is essential to mitigate hallucinations.”
Legal tech consultants also recommend using open-source frameworks like LangChain to create custom pipelines that tag AI-generated text and log confidence scores for each citation element.
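LangChain would supply the orchestration and model calls; the tagging step itself is plain Python. The sketch below is a framework-agnostic version: `verify_in_database` is a hypothetical hook for a Westlaw/Lexis lookup, and the pipeline logs a verification status per citation (standing in for the per-element confidence scores the consultants describe):

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("citation_audit")

# Heuristic for reporter-style cites such as "156 P.3d 789"
# or "2007 UT App 74"; a production system would use a real parser.
CITATION = re.compile(r"\b\d{1,4}\s+(?:[A-Z][A-Za-z0-9.]*\s+)+\d{1,5}\b")

def verify_in_database(cite: str) -> bool:
    """Hypothetical hook: query a legal database for the citation."""
    raise NotImplementedError

def tag_citations(ai_text: str) -> str:
    """Wrap every citation in an AI draft with an audit tag and log it."""
    def _tag(m: re.Match) -> str:
        cite = m.group(0)
        try:
            status = "VERIFIED" if verify_in_database(cite) else "NOT-FOUND"
        except NotImplementedError:
            status = "UNVERIFIED"  # no database wired up yet
        log.info("citation %r -> %s", cite, status)
        return f"[{status}: {cite}]"
    return CITATION.sub(_tag, ai_text)

print(tag_citations("See Royer v. Nelson, 2007 UT App 74, 156 P.3d 789."))
```

Run as-is, every extracted citation is tagged UNVERIFIED, which is the safe default: nothing in the draft reads as checked until a reviewer or database lookup says so.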
Impact on Law Education and AI Literacy
Educators report that many law students overestimate ChatGPT's reliability. Kate Conroy, a law school professor quoted by 404 Media, noted, "Students can't articulate how to validate AI output. They trust the interface blindly, which is dangerous in legal contexts." In response, several institutions plan to introduce AI competency requirements into their curricula, including Harvard Law's AI & Legal Process course slated to launch in Fall 2025.
Implementing Robust Review Workflows
To prevent future incidents, law firms are advised to:
- Integrate AI tools with practice management software for audit trails.
- Deploy automated citation checkers with API hooks to legal databases (see the sketch after this list).
- Train supervising attorneys on the Model Rules of Professional Conduct regarding nonlawyer assistance.
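On the second point, a checker can be wired to a public endpoint. Below is a minimal sketch assuming CourtListener's REST search API; the URL, the `q` parameter, and the `results` field reflect its public documentation but should be treated as assumptions to verify against current docs, and production use would also need authentication and rate-limit handling:

```python
import requests

# Assumed endpoint per CourtListener's public API docs; verify before use.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def cite_exists(citation: str) -> bool:
    """Best-effort existence check against CourtListener's free index.

    A hit is necessary, not sufficient: any returned opinion still
    needs review by a licensed attorney. No hit at all for a cite the
    brief relies on is a red flag worth escalating.
    """
    resp = requests.get(SEARCH_URL, params={"q": f'"{citation}"'}, timeout=10)
    resp.raise_for_status()
    return len(resp.json().get("results", [])) > 0

if __name__ == "__main__":
    print(cite_exists("Royer v. Nelson, 2007 UT App 74"))  # expect: False
```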
Harmonizing AI Integration with Ethical Obligations
Under ABA Model Rule 5.3, attorneys must supervise nonlawyer assistants—including AI systems—to ensure compliance with ethical duties. Firms should update malpractice insurance policies to account for AI-related risks and conduct regular training sessions on technology-assisted review.
Conclusion
The Utah sanction serves as a cautionary tale for the legal profession at large. As AI adoption accelerates, law firms, bar associations, and legal educators must collaborate to establish standards that preserve the integrity of the justice system while harnessing the efficiencies of generative AI.