Lawyers around the world have been caught citing fake references generated by ChatGPT, an AI language model. In South Africa, lawyers arguing a case in the Johannesburg regional court were criticised in a judgment for relying on fabricated references produced by the AI tool, according to IT News Africa. The court found that the names, citations, facts, and decisions the lawyers presented were entirely fictitious, and imposed punitive costs on their client as a consequence.
Magistrate Arvin Chaitram stressed the importance of independent reading in legal research, calling for a balanced approach that combines modern technology with traditional research methods. The case involved a woman suing her body corporate for defamation; her lawyers' reliance on AI-generated content instead of thorough research raised concerns about misleading citations.
During a two-month postponement, the lawyers tried to locate the references cited by ChatGPT, only to find that they were inaccurate and irrelevant. Magistrate Chaitram ruled that the lawyers had been overzealous and careless rather than intentionally deceptive, so a punitive costs order was the only consequence.
Similar incidents have occurred in the United States, underscoring the dangers of relying on AI-generated content without verifying its accuracy. Legal professionals must critically evaluate AI-generated information, confirming its authenticity and relevance through a balanced approach that combines technological efficiency with independent reading.