Lindell’s Lawyers Ordered to Show Cause Over AI Brief with Nearly 30 Faulty Citations

In a striking admonition that underscores both the promise and peril of generative artificial intelligence in legal practice, U.S. District Judge Nina Y. Wang this week ordered MyPillow CEO Mike Lindell’s attorneys to explain why they should not face sanctions and professional discipline after submitting a court brief containing nearly thirty defective citations. Among the errors were misquotes of canonical opinions, misattribution of binding authority, and, most alarmingly, references to cases that do not exist.
Judge’s Findings on Defective Citations
In her Order to Show Cause filed April 23, 2025, Judge Wang catalogued the following defects in the February 25 Opposition to Plaintiff’s Motion in Limine:
- Misquotes of cited cases, including paraphrases presented as verbatim quotations.
- Misrepresentations of legal principles, attributing holdings and standards to opinions that simply do not appear in the decisions.
- Erroneous assertions that cited authority was binding precedent in the Tenth Circuit when it was not.
- Misattribution of an Eastern District of Kentucky decision to the District of Colorado.
- Citations to non-existent cases, e.g., Perkins v. Fed. Fruit & Produce Co., 945 F.3d 1242 (10th Cir. 2019).
Lead counsel Christopher Kachouroff and co-counsel Jennifer DeMaster admitted at an April 21 hearing that they used a generative AI system to draft the brief. When pressed, Kachouroff conceded he “may have made a mistake” but suggested such errors were a byproduct of an early-stage AI draft.
Case Background: Defamation Suit by Dominion Voting Systems Employee
The lawsuit, filed by Eric Coomer—former director of product strategy and security for Dominion Voting Systems—alleges that Lindell, his media outlet FrankSpeech, and MyPillow disseminated false and defamatory statements accusing Coomer of treason and conspiring to rig the 2020 election. Coomer’s motion in limine sought to exclude evidence that Lindell’s lawyers claimed bore on Coomer’s credibility, including personal matters unrelated to the alleged conspiracy.
Technical Underpinnings of Generative AI and Hallucinations
Generative AI models such as OpenAI’s GPT-4 and similar large language models (LLMs) are trained on massive text corpora, often measured in trillions of tokens. They learn statistical patterns in language but do not retrieve verified facts from a canonical database: each token of output is sampled from a probability distribution over plausible continuations, so a fluent-sounding citation can be assembled from familiar fragments rather than recalled from any real case. This probabilistic generation is the mechanism behind hallucinations: confident but false statements, including fabricated case names, holdings, and citations.
According to a 2024 study by Stanford researchers, state-of-the-art LLMs still exhibit a hallucination rate of 8–12% when asked for precise legal references. Absent rigorous human-in-the-loop verification, AI-assisted briefs can introduce serious compliance risks for attorneys bound by professional conduct rules.
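To see what human-in-the-loop verification has to catch, the sketch below shows the first step in Python: a regular expression that pulls reporter citations out of a draft so that each one can be checked against a verified source. This is a minimal illustration rather than a complete citation grammar; the pattern covers only the common “volume reporter page (court year)” form, and the sample text reuses the fabricated Perkins citation from Judge Wang’s order.
```python
import re

# Rough pattern for reporter citations such as "945 F.3d 1242 (10th Cir. 2019)":
# volume, reporter, first page, and an optional court/year parenthetical.
# A real checker would need a far richer grammar (statutes, short forms,
# parallel citations); this sketch covers only a few common federal reporters.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"                            # volume number
    r"(F\.(?:2d|3d|4th)?|U\.S\.|S\.\s?Ct\.)\s+"  # reporter abbreviation
    r"(\d{1,5})"                                 # first page
    r"(?:\s+\(([^)]+)\))?"                       # optional "(court year)"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every reporter citation found in a draft brief."""
    return [match.group(0) for match in CITATION_RE.finditer(draft_text)]

if __name__ == "__main__":
    draft = (
        "As this Court held in Perkins v. Fed. Fruit & Produce Co., "
        "945 F.3d 1242 (10th Cir. 2019), each element must be proven."
    )
    for citation in extract_citations(draft):
        print("Verify before filing:", citation)
```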
Integrating AI in Legal Workflows: Tools and Safeguards
Leading legal research platforms are racing to close the gap. Thomson Reuters’ Westlaw Precision and LexisNexis’ Lexis+ AI now ground generative drafting and research in verified case law databases and can flag citations they cannot match. Key best practices include:
- Strict Version Control: Maintain audit trails of AI-generated drafts to isolate sections requiring manual review.
- Automated Citation Verification: Run real-time checks against a verified case law source to flag nonexistent or erroneous citations before filing (a sketch follows this list).
- Human-in-the-Loop: Dedicate a certified legal editor to verify all AI-assisted output, especially quotations and holdings.
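A minimal sketch of that automated gate follows, picking up where the extraction snippet above left off. The verify-citation endpoint, the request shape, and the matched response field are assumptions made for illustration, not any vendor’s actual API; Westlaw, Lexis+, and public services such as CourtListener each expose their own interfaces. The workflow is the point: nothing is filed while any citation remains unmatched.
```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint of a verified case law service. The URL and the
# JSON contract below are illustrative assumptions, not a real vendor API.
VERIFY_URL = "https://caselaw.example.com/api/verify-citation"

def verify_citation(citation: str) -> bool:
    """Ask the hypothetical service whether a citation resolves to a real,
    published opinion. Fails closed: any error counts as unverified."""
    try:
        response = requests.post(VERIFY_URL, json={"citation": citation}, timeout=10)
        response.raise_for_status()
        return bool(response.json().get("matched"))  # assumed response field
    except requests.RequestException:
        return False

def preflight(citations: list[str]) -> list[str]:
    """Return the citations that must be resolved by hand before filing."""
    return [c for c in citations if not verify_citation(c)]

if __name__ == "__main__":
    flagged = preflight(["945 F.3d 1242 (10th Cir. 2019)"])
    for citation in flagged:
        print("HOLD FILING - unverified citation:", citation)
```
Even a passing check is only half the job: a lookup can confirm that a case exists, but only a human reading the opinion can confirm that it says what the brief claims, which is why the human-in-the-loop step above still matters.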
Regulatory and Ethical Considerations
The American Bar Association’s Formal Opinion 512 (2024) on generative artificial intelligence tools cautions that lawyers must use AI competently and under adequate supervision. Model Rules of Professional Conduct 3.3 (Candor Toward the Tribunal) and 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers) require lawyers to ensure that all submissions are accurate and reliable, and Federal Rule of Civil Procedure 11, the basis of Judge Wang’s show-cause order, independently requires that legal contentions be warranted by existing law.
Failure to adhere to these standards not only risks sanctions but can also expose practitioners to disciplinary proceedings before state bars. Judge Wang has set a May 5 deadline for Kachouroff and DeMaster to submit sworn declarations detailing the extent of AI use and their oversight measures.
Expert Opinions and Next Steps
Legal technology analyst Rebecca Liu of LegalTech Insights commented: “This ruling is a wake-up call. Generative AI offers efficiency but demands robust guardrails—particularly for high-stakes litigation.” Meanwhile, Colorado-based AI ethics researcher Dr. Aaron Martinez warns that “without transparent AI auditing, the legal profession may see a surge in misfiled briefs and appeals based on faulty precedents.”
The court’s final decision on sanctions will likely shape how law firms across the country implement AI tools and refine their compliance frameworks to avoid similar pitfalls.