The Hong Kong Judiciary's recent guidelines on the use of generative AI permit AI in judicial work, but they also serve as a useful reference on its pitfalls in the legal world.
The hype around ChatGPT has been going on for a while. On 26 July 2024, the Hong Kong Judiciary published its guidelines on the use of generative AI. The “Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary” advise judges on what to look out for when using these new technologies. They also contain a handy summary of the issues around using generative AI, such as ChatGPT, in law.
It may raise eyebrows to see that judges are using AI in their work. Does that mean AI is now deciding on cases? Under the guidelines, that would be a firm negative, as judges must make their decisions “independently and personally” and any use of AI can only support them.
The guidelines cover technical and practical aspects of the legal use of technologies such as AI chatbots. They also include a glossary of AI-related terms for those still trying to understand these tools.
Limitations of AI in Law
The guidelines neatly summarise the main risks of using generative AI for legal work. Such tools may:
- make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist – a risk stemming from the fact that large language models can “hallucinate”
- provide incorrect or misleading information regarding the law or how it might apply
- make factual errors
- confirm that the information is accurate if asked, even when it is not
These are pitfalls that legal practitioners and litigants alike must look out for, especially the former, since misleading the court with made-up cases and inaccurate information is also a matter of professional misconduct. This happens more often than one might imagine. In 2023, two US lawyers submitted fake court citations generated by ChatGPT in an aviation case. In 2024, a Canadian lawyer cited two non-existent cases from ChatGPT in a family matter. In both incidents, the lawyers faced serious professional consequences.
Confidentiality
As the name suggests, AI has the ability to learn. In some tools, such as chatbots, the algorithm may learn from user input and prompts, so inputting confidential information may result in a leak to other users of the tool. In 2023, Samsung employees unintentionally leaked company secrets, including source code and internal meeting notes, to ChatGPT while using the tool. The guidelines warn against breaches of confidentiality in this type of situation.
What is it good for?
Despite the many pitfalls, generative AI can be useful for lawyers. The guidelines suggest four uses for the tool in a legal setting:
- Summarising information
- Speech writing
- Legal translation
- Administrative tasks, such as letter drafting
However, users must maintain their gatekeeper role. Furthermore, the use of AI in legal analysis is not currently recommended.
The Hong Kong Judiciary’s recognition of the significance of generative AI is a step in the right direction. The new guidelines are a valuable resource not only for judges but also for lawyers in Hong Kong.

Gordon Chan, Esq
Barrister-at-law, Archbold Hong Kong Editor on Public Health, and Member of the Bar Association's Committee on Criminal Law and Procedure. He specialises in medical, technology and criminal law.