Lawyers in New York Used ChatGPT and Now Face Possible Sanctions
BY Bigger Law Firm Magazine
- What should have been a routine personal injury case is now a warning to lawyers.
- ChatGPT invented bogus cases that were then cited.
- The attorneys involved now face sanctions.
Several lawyers are under scrutiny and face potential sanctions after using OpenAI's ChatGPT to draft legal documents submitted in a New York federal court. The matter drew attention because the filings cited cases that ChatGPT had fabricated or that were irrelevant to the dispute.
The adoption of AI in legal practice is not novel, but its usage has surged in recent years. Machine learning tools now handle a variety of legal tasks, from intricate legal research to automated document creation and analysis of complex contracts. Among these tools, OpenAI's ChatGPT, built on cutting-edge natural language processing, has proven remarkably adept at generating persuasive-sounding legal arguments, but it is not a replacement for legal expertise and legal research.
In this case, the attorneys used ChatGPT to draft sections of a federal court filing, including several case law citations. On examination, the court found that some of the cited cases were either non-existent or unrelated to the matter under consideration. These erroneous citations were apparently generated by ChatGPT, and the court labeled them “bogus.”
According to CNBC: “Roberto Mata’s lawsuit against Avianca Airlines wasn’t so different from many other personal-injury suits filed in New York federal court. Mata and his attorney, Peter LoDuca, alleged that Avianca caused Mata personal injuries when he was ‘struck by a metal serving cart’ on board a 2019 flight bound for New York.
“Avianca moved to dismiss the case. Mata’s lawyers predictably opposed the motion and cited a variety of legal decisions, as is typical in courtroom spats. Then everything fell apart.
“Avianca’s attorneys told the court that it couldn’t find numerous legal cases that LoDuca had cited in his response. Federal Judge P. Kevin Castel demanded that LoDuca provide copies of nine judicial decisions that were apparently used.
“In response, LoDuca filed the full text of eight cases in federal court. But the problem only deepened, Castel said in a filing, because the texts were fictitious, citing what appeared to be ‘bogus judicial decisions with bogus quotes and bogus internal citations.’”
It is crucial to note that ChatGPT cannot discern the legal relevance or validity of a case. It generates text by predicting plausible word sequences based on patterns in its training data, not by retrieving verified sources, so it can fabricate citations that look authentic, complete with convincing case names, quotes, and reporter numbers. In this instance, the AI produced spurious or unrelated case law, and the attorneys failed to cross-verify the citations before including them in their court filing.
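The failure, in other words, was not consulting AI but skipping verification. The sketch below is a minimal illustration of the kind of automated sanity check that could have flagged the fabricated citations. It assumes CourtListener's public REST search endpoint (https://www.courtlistener.com/api/rest/v3/search/) is available and behaves as documented; the two citations are examples only (the first is one of the decisions the court found to be fabricated, the second is a real Supreme Court opinion), and a zero-hit result is merely a red flag, not a verdict.

import requests

# Example citations from an AI-drafted brief (illustrative only).
CITATIONS = [
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",  # fabricated by ChatGPT
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",  # real decision
]

# Assumption: CourtListener's public search API; "type=o" searches opinions.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def citation_appears_in_courtlistener(citation: str) -> bool:
    """Return True if searching for the citation yields at least one opinion.

    Zero hits suggest the citation may be fabricated; a hit still requires
    a human to pull and read the opinion to confirm it supports the argument.
    """
    response = requests.get(
        SEARCH_URL,
        params={"q": f'"{citation}"', "type": "o"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("count", 0) > 0

if __name__ == "__main__":
    for citation in CITATIONS:
        status = "found" if citation_appears_in_courtlistener(citation) else "NOT FOUND - verify manually"
        print(f"{citation}: {status}")

A check like this only confirms that a citation resolves to a real, locatable opinion; confirming that the opinion actually stands for the proposition cited remains the lawyer's job.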
This misstep has left the lawyers involved facing potential sanctions. Accuracy and relevance in case law citations are paramount in the legal profession: submitting irrelevant or false case law undermines the legal argument, wastes the court's time, and can misdirect the adjudication process. Such conduct is typically met with sanctions, serving as a warning to other lawyers.