Lawyers arguing a case in the Johannesburg regional court have been criticised in a judgment for citing fake references generated by ChatGPT, an AI language model.
According to media reports, the ruling found that the names, citations, facts, and decisions presented by the lawyers were entirely fictitious, and it imposed punitive costs on the lawyers’ client as a consequence.
The Importance of Independent Reading in Legal Research
Magistrate Arvin Chaitram highlighted the need for a balanced approach to legal research, emphasizing that the efficiency of modern technology should be complemented by good old-fashioned independent reading.
This observation came in response to the situation where lawyers relied on AI-generated content instead of conducting thorough and independent research.
The Defamation Case and Misleading Citations
The case at hand involved a woman suing her body corporate for defamation. The counsel for the body corporate trustees argued that a body corporate could not be sued for defamation.
In response, the plaintiff’s counsel, Michelle Parker, stated that previous judgments had addressed this question, but they had not had sufficient time to access them. The court granted a postponement to allow both parties time to source the necessary information to support their arguments.
AI-Generated References Prove Inaccurate
During the two-month postponement, the lawyers involved attempted to locate the references cited by ChatGPT.
However, they discovered that although ChatGPT had supplied citations pointing to real cases, those cases bore no relation to the case names attached to them.
Moreover, none of the cited cases and references was applicable to defamation suits between body corporates and individuals.
Magistrate’s Ruling and Consequences
Magistrate Chaitram ruled that the lawyers had not intentionally misled the court but rather exhibited overzealousness and carelessness.
As a result, no further action was taken against the lawyers beyond the punitive costs order. Chaitram considered the embarrassment associated with the incident to be a sufficient punishment for the plaintiff’s attorneys.
Similar Incidents and Lessons Learned
Reliance on ChatGPT’s fabricated content is not unique to South Africa. In the United States, lawyers were recently fined for submitting a court brief filled with false case citations from ChatGPT. The lawyers and their firm faced consequences for submitting non-existent judicial opinions with fabricated quotes and citations.
These incidents serve as cautionary tales about the dangers of uncritically relying on AI-generated content without verifying its accuracy.
The Johannesburg case and the US incident both underscore the importance of critically evaluating AI-generated content, particularly in the legal field.
While AI tools can offer valuable assistance, legal professionals must verify the authenticity and relevance of the information they provide. Maintaining a balance between technological efficiency and independent reading remains crucial for accurate and reliable legal research.