Guidelines on AI have urged judges to independently check and verify the accuracy of any information provided to them by a GenAI chatbot before relying on it.
The guidelines, from a judicial committee established by the Chief Justice, Donal O’Donnell, cover generative artificial intelligence (GenAI), which includes tools such as ChatGPT.
The document, published on the Judicial Council’s website, says that judges must ensure that any use of GenAI tools is consistent with their “overriding obligation” to ensure the integrity and reliability of the legal research and analysis that underpins judicial decision-making.
It adds that any use of such tools by the judiciary must also be consistent with the “overarching obligation to protect the independence and integrity of the administration of justice and the protection of fundamental rights”.
“It may be useful to use public GenAI tools to assist you with aspects of your work, such as speeches or administrative or routine tasks, but it is not recommended to use such tools for core judicial work, such as conducting research to find new information which you cannot verify independently or for legal analysis,” the document states.
The guide warns that, even if the output of AI tools looks convincing, it may not be factually correct.
“Even with the best prompts, output that superficially appears convincing may be seriously inaccurate, incomplete, misleading, or biased,” it says.
It states that GenAI is neither confidential nor private, and warns judges against entering any private, confidential, or legally privileged information into a public AI chatbot – even one to which they hold a subscription.
The guidelines are also aimed at helping judges to identify and manage the use of GenAI by others in court proceedings.
They urge judges to check the accuracy of information contained in submissions or other documents that show signs of being produced by a GenAI chatbot.
The document lists several ‘red flags’ that may indicate the use of AI – including submissions that use American spelling or refer to foreign or unfamiliar cases.
It also warns that GenAI can fabricate convincing images, audio, video, and other media that parties could present as evidence.
The committee says that the guidelines will be updated from time to time to reflect technological developments.