MyPillow CEO's lawyers file AI-generated legal brief riddled with errors
Lawyers for MyPillow CEO Mike Lindell used AI to generate an error-ridden brief and are now facing potential disciplinary action.


Lawyers for MyPillow CEO and presidential election conspiracy theorist Mike Lindell are facing potential disciplinary action after using generative AI to write a legal brief, resulting in a document rife with fundamental errors. The lawyers did admit to using AI, but claim that this particular mistake was primarily human.
On Wednesday, an order by Colorado district court judge Nina Wang noted that the court had identified almost 30 defective citations in a brief filed by Lindell's lawyers on Feb. 25. Signed by attorneys Christopher Kachouroff and Jennifer DeMaster of law firm McSweeney, Cynkar & Kachouroff, the filing was part of former Dominion Voting Systems employee Eric Coomer's defamation lawsuit against Lindell.
"These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist," read Wang's court order.
The court further noted that while the lawyers had been given the opportunity to explain this laundry list of errors, they were unable to adequately do so. Kachouroff confirmed that he'd used generative AI to prepare the brief when the court directly asked him about it, and upon further questioning admitted that he had not checked the resulting citations.
As such, the court ordered the lawyers to explain why Kachouroff and DeMaster should not be referred to disciplinary proceedings for violating rules of professional conduct, and why they should not be sanctioned alongside Lindell and their law firm.
Responding to the order on Friday, the lawyers stated that they had been "unaware of any errors or issues" with their filing, so were "caught off-guard" and unprepared to explain themselves when initially questioned by the court.
Having had time to assess the situation, they now claim that the document in question was actually an earlier draft that DeMaster had filed by mistake. Submitting alternate versions of the brief in support of this argument, the lawyers also presented an email exchange between Kachouroff and DeMaster in which they discussed edits.
"At that time, counsel had no reason to believe that an AI-generated or unverified draft had been submitted," read their response. "After the hearing and having a subsequent opportunity to investigate [the brief], it was immediately clear that the document filed was not the correct version. It was a prior draft.
"It was inadvertent, an erroneous filing that was not done intentionally, and was filed mistakenly through human error."
The lawyers further contend in their filing that it is perfectly permissible to use AI to prepare legal documents, arguing that "[t]here is nothing wrong with using AI when used properly." Kachouroff stated that he "routinely" analyzes legal arguments using AI tools such as Microsoft's Copilot, Google (presumably Gemini), and X (presumably Grok), though noted that he is the only person at his law firm to do so. He also stated that he had never heard the term "generative artificial intelligence" before.
The lawyers asked for permission to refile a corrected brief, and for the potential disciplinary action to be dismissed.
This incident is just the latest in a growing list of legal professionals inappropriately using AI in their work, some without even understanding the technology. In June 2023, two attorneys were fined for citing non-existent legal cases after using ChatGPT to do their research. Later that year, a lawyer for disbarred former Trump attorney Michael Cohen was caught citing fake cases that Cohen himself had generated with Google Bard. Then in February, yet another attorney appeared to cite cases fabricated by ChatGPT, prompting their law firm, Morgan & Morgan, to warn employees against blindly trusting AI.
Yet despite such cautionary tales, it seems that many lawyers still haven't gotten the message to steer clear.