We have seen explicit deepfake images of celebrities created by artificial intelligence (AI). AI is also being used to create music, power driverless race cars, spread misinformation, and more.
It is therefore no surprise that AI is also having a major impact on the legal system.
Courts must decide disputes based on the law put before them by lawyers as part of a client's case. It is therefore deeply concerning that fake law, invented by AI, is being used in legal disputes.
This not only raises questions of legality and ethics, but also threatens to undermine faith and trust in the world's legal systems.
How do fake laws come about?
There is little doubt that generative AI is a powerful tool with the potential to transform society, including many aspects of the legal system. However, its use involves responsibility and risk.
Lawyers are trained to apply their professional knowledge and experience carefully, and are generally not big risk-takers. But some careless lawyers (and self-represented litigants) have been caught out by artificial intelligence.
AI models are trained on massive datasets. When prompted by a user, they can create new content (both textual and audiovisual).
Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination.”
In some situations, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.
But if hallucinated or inaccurate AI content is used in legal processes, that is a problem, particularly when combined with the time pressures on lawyers and the lack of access to legal services for many.
This combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and undermining public confidence in the administration of justice.
It's already happening
The best-known generative AI “fake case” is the 2023 US matter Mata v. Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.
The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the errors were uncovered, the court dismissed their client's case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.
Despite the adverse publicity, fake cases continue to surface. Michael Cohen, Donald Trump's former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact-check them (he did not). His lawyer included the cases in a brief filed in US federal court.
Fake cases have also surfaced in recent matters in Canada and the United Kingdom.
If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine public trust in the legal system? Persistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients' interests, and generally undermine the rule of law.
What is being done about it?
Around the world, regulators and courts are responding in a variety of ways.
Several US state bars and courts have issued guidance, opinions, or orders on generative AI use, ranging from responsible adoption to outright bans.
Law societies in England and British Columbia, and the courts of New Zealand, have also developed guidelines.
In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of New South Wales and the Law Institute of Victoria have released articles on its responsible use in line with solicitors' conduct rules.
Many lawyers and judges, like the general public, will have some understanding of generative AI and will recognize both its limits and its benefits. Others may not be as aware. Guidance undoubtedly helps.
But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgment and diligence, and must verify the accuracy and reliability of the information they receive.
In Australia, courts should adopt practice notes or rules setting out expectations for when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would signal to the public that the courts recognize the problem and are addressing it.
The legal profession can also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competency should be a requirement for lawyers' continuing legal education in Australia.
Setting clear requirements for the responsible and ethical use of generative AI by Australian lawyers would encourage its appropriate adoption and strengthen public confidence in the country's lawyers, courts, and the administration of justice as a whole.
(Authors: Michael Legg, Professor, UNSW Sydney Law School; Vicki McNamara, Senior Research Fellow, UNSW Sydney Centre for the Future of the Legal Profession)
(Disclosure statement: Vicki McNamara is affiliated with the Law Society of New South Wales (as a member). Michael Legg does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.)
This article is republished from The Conversation under a Creative Commons license. Read the original article.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)