Every profession and industry has been affected by the introduction of AI, and the practice of law is no exception. How can lawyers determine whether a filing is AI-generated? And more importantly, once it is recognized, what should be done about it?
In March 2024, a Florida attorney was suspended from practicing law in the U.S. District Court for the Middle District of Florida for one year after he cited non-existent case law generated by AI. (The Grievance Committee researched the attorney’s legal citations and found that he had included inaccurate citations and fabricated authorities in his filings.) The court explained that the attorney had violated a variety of local rules and rules of professional conduct, including by failing to act with reasonable diligence and making misrepresentations to the court. The court was alerted to the attorney’s use of AI when opposing counsel could not locate the cases being cited.
Increasing Instances of AI in Pleadings
Recently, a colleague filed a motion for summary judgment in a premises liability case. As he read the plaintiff’s response to the motion, he noticed odd language patterns and sentence structure. The sentences did not flow well, and the response seemed to lack an understanding of the subject matter. Basically, it read like “word salad” – buzzwords mixed with a variety of subjects and verbs. Fortunately, the parties reached an agreement before the reply to the response was due. But had the case not settled, what would be the best way to handle this apparent misuse of AI in pleadings?
Rule 5.5 of the Model Rules of Professional Conduct prohibits the unauthorized practice of law. Using AI to generate a pleading implicates this rule because – as silly as it sounds – artificial intelligence has not been admitted to the State Bar of Texas.
Motion to Strike an AI-Generated Pleading
One way to address an AI-generated pleading is to file a motion to strike it on the basis that the attorney who signed the pleading is not its author. In Roberto Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), counsel for the defendant suspected plaintiff’s counsel’s use of AI and brought the issue to the judge’s attention in a letter. Last summer, Judge Brantly Starr of the U.S. District Court for the Northern District of Texas announced that he now requires all attorneys appearing before him to file a Mandatory Certification attesting either that no AI was used in drafting the filing or that any AI-generated portions were checked by a human for accuracy.
Whether more judges will follow Judge Starr’s lead – and whether such requirements curb the misuse of AI in law – remains to be seen. A proposed amendment to Rule 32.3 in the Fifth Circuit contemplates a mandatory certification like the one Judge Starr implemented in the Northern District of Texas: counsel would have to certify either that no generative AI was used in drafting the document being filed or that any AI-generated material was reviewed for accuracy by a human. A special committee is currently reviewing comments on the proposed amendment.