Lawyers must understand that general-purpose generative AI tools are not designed for legal research and should not be used for it. When asked to perform it, these tools may fabricate cases to satisfy the query, with potentially serious consequences. There have been several high-profile instances of lawyers being penalised for exactly this. For instance, attorney Jae S. Lee was referred to a grievance panel by the Second Circuit for citing a non-existent case generated by ChatGPT in a brief. The panel also upheld a lower court’s decision to dismiss Lee’s client’s underlying complaint.
In its decision, the panel stated that “the brief presents a false statement of law to this Court, and it appears that Attorney Lee made no inquiry, much less the reasonable inquiry required by Rule 11 and long-standing precedent, into the validity of the arguments she presented.”
Lee acknowledged that the reply brief she submitted in September cited “Matter of Bourguignon v. Coordinated Behavioral Health Servs. Inc., 114 A.D.3d 947 (3d Dep’t 2014)” in support of her appellate argument. The court, however, could not locate the case and later determined that it does not exist.
Lee explained that she could not find a relevant case establishing a minimum wage for an injured worker and therefore turned to ChatGPT to identify one. The panel noted, however, that attorneys are responsible for reading the cases they cite, and Lee failed to do so. It concluded that her brief presented a false statement of law to the court and that she had made no inquiry into the validity of her arguments. While specific rules governing AI use may not be necessary, licensed attorneys must still ensure the accuracy of their submissions to the court.
Source: Above the Law