USE OF ARTIFICIAL INTELLIGENCE BY COURT USERS TO HELP THEM PARTICIPATE IN COURT PROCEEDINGS

A Statement from the Action Committee

Our Committee supports Canada’s courts in their modernization efforts. It provides guidance for addressing challenges, and highlights opportunities and innovative practices to modernize court operations and improve access to justice for court users.

1. CONTEXT

The proliferation and rapid evolution of artificial intelligence (AI) tools have raised questions and concerns around their application to court processes. Many court users would like to use this technology to enhance their participation in courts and reduce their costs. In this context, they look to courts for guidance on how they may do so appropriately, for clarity on how the use of AI by other court users will be treated, and for reassurance regarding courts’ overall approach to AI.

This document aims to support courts in responding to the use of AI by court users, including litigants, counsel, and others engaging directly with the courts. It outlines possible benefits and risks associated with this use, provides an overview of key features that characterize responsible AI use by court users to help courts monitor this use effectively, and highlights important operational considerations in developing guidance for court users. While the primary audience of this guidance is judges and court administrators, court users may also find the information it contains useful.

As stated in the companion piece on Demystifying AI in Court Processes, this guidance is exclusively focused on lower risk areas of use: administrative functions, legal analysis, and research. It does not cover the use of AI in decision-making or in matters related to evidence, e-discovery, or substantive law, because of the additional complexities associated with those domains. In its recent guidance on AI, the Canadian Judicial Council has indicated that judges’ decision-making authority should never be delegated to AI.

2. BENEFITS AND CHALLENGES

2.1 Benefits to the use of AI by court users

Improved access to justice: AI can help court users participate in court processes in a variety of ways. Counsel can use AI tools to gain efficiencies in file management, legal research, and document review, which can in turn reduce costs passed on to their clients. Other court users can leverage AI to find relevant information faster, more efficiently, and in their preferred language; to help them prepare court materials; and to more easily navigate court processes.

Strengthened administration of justice and confidence: A careful and measured approach to AI that is robust enough to provide certainty to court users, while remaining open to relevant applications of the technology, can strengthen public confidence in the administration of justice and, as a result, the confidence of court users.

2.2 Challenges and risks to the use of AI by court users

Common misunderstandings: Recent advancements in the field of AI, as well as the resulting potential for its use in far wider and more complex settings than ever before, have led to significant societal interest in AI. The distinction between present or near-term feasibility and possible future developments – and between their related benefits and risks – is easily blurred, which can lead to misunderstanding and mistrust. Public discourse, along with each court user’s specific context, will shape their preconceived notions about AI.

Pace of development and accessibility: AI presents a unique challenge to both courts and court users because of its continuous, rapid evolution and the speed at which new tools become broadly available to the public. This puts pressure on courts to respond to novel, complex issues reactively rather than proactively. Court users may be overwhelmed by the abundance of available tools without properly understanding which ones are most appropriate to use. They may also opt for free tools without realizing that such tools carry a greater level of risk because they are not specifically developed for legal purposes.

Accuracy of output: Some AI tools can be used to produce fake court documents that appear real, such as a fraudulent court order from a different jurisdiction. AI tools – especially those not trained on legal data – can also “hallucinate” when directed to provide relevant jurisprudence to court users, fabricating non-existent cases. Output that is the result of biased data or algorithms can perpetuate harm to historically disadvantaged groups. All of these situations can result in inaccurate materials being submitted to courts, and the flaws in these materials may not be immediately or easily detected. As these examples show, such risks are particularly heightened for tools that use generative AI (GenAI) to produce new content.

Compromised administration of justice and confidence: An approach to AI that is unclear, raises uncertainty, or fails to meaningfully consider important risks or opportunities could negatively impact the courts’ truth-seeking and adjudicative functions and, as a result, the confidence of court users. Rejecting all potential uses of AI without deeper analysis also deprives courts of the possibility of enhancing their operations to the benefit of court users.

3. UNDERSTANDING AI USE BY COURT USERS

Courts will be in a better position to understand how use of AI may arise in court proceedings if they appreciate how different court users may approach the use of AI tools, as well as the associated level of risk for different uses.

3.1 The background of court users impacts their use of AI

  • Counsel and other legal professionals must comply with certain professional responsibilities, such as duties relating to technological competence, client confidentiality, and not misleading the court or other parties. These duties should guide their use of AI and validation of any information they provide to the court. Since other court users do not have the same obligations, courts may need to exercise greater oversight and ask them more pointed questions about their use of AI.
  • Court users will approach AI with different levels of technological and AI knowledge. Those familiar with the field will better understand the implications of using different AI tools, whereas those with no prior knowledge or experience might fail to appreciate potential risks.
  • A court user’s legal knowledge impacts their ability to use AI tools effectively in the court context. For example, experienced counsel using GenAI to draft submissions will be better able to check for accuracy and correct any errors, while less experienced counsel or court users with no legal background may not be aware of any inaccuracies.

3.2 The type of AI and the purpose for which it is used impact risks

  • Court users may use AI tools to accomplish a variety of tasks aimed at improving efficiency and accuracy. These include obtaining information (simple answers to questions or more extensive research), reviewing and comparing materials, translating materials, or creating new content.
  • While competent human oversight is always part of responsible AI use, its importance increases with the extent to which a tool creates entirely new content. Tools that create new content carry a greater risk of producing inaccurate or harmful material than those that provide basic functions like retrieving information, transcribing audio to text, or filling in forms.
  • Some tools offer built-in features to aid with fact-checking, such as listing hyperlinks to source data, which can streamline the oversight process for court users.
  • The identity of the developer, the purpose for which the tool was created, and its training data and algorithm all impact the level of risk associated with its use. An AI tool developed by legal or judicial experts and built specifically to respond to the needs of these professionals or other court users is generally more reliable than a generic, general-purpose tool. Use of a culturally sensitive tool can also help avoid inaccurate or inappropriate output in cases concerning Indigenous or other minority communities.
  • Data protection and cybersecurity can be compromised if sensitive information is entered into an AI platform that does not apply proper safeguards. Court users should be made aware of the risks of entering such data into AI tools, especially those available for free with no contractual obligations on the part of the provider.
  • The lack of transparency of an AI tool – also known as “black box AI” or the “black box problem” – increases risks. When output is produced through processes that are not explainable or understandable to humans, or from unknown training data, it is more difficult for court users to review the result and ensure it is correct.

4. OPERATIONAL CONSIDERATIONS: DEVELOPING GUIDANCE FOR COURT USERS ON THE USE OF AI

Court users require clear, unambiguous, and accessible guidance to use AI effectively and avoid presenting inaccurate information to the court. Here are some tips for developing effective guidance:

  • Avoid excessive technical jargon and establish a common foundation using high-level, plain language definitions of key terms.
  • Explicitly define the document’s scope of application, including what is covered, what is not, and what coverage means in practice.
  • Distinguish between the use of GenAI and other AI tools: the need for court users to exercise caution and courts to exercise oversight will generally arise in the context of GenAI.
  • Explain that generic tools, such as ChatGPT, are less reliable than those developed by legal experts, and refer to reliable legal tools available to court users, including free sources.
  • Avoid overly broad statements that are difficult to comply with by framing guidance around the main issue it seeks to address, such as preventing inaccurate information from being presented to the court.
  • Revise guidance and notices regularly to reflect the evolving field of AI, and archive or remove obsolete versions to avoid confusion.

Successful guidance will reassure court users about appropriate ways to use AI and about how any AI-informed material will be received by the court. In this regard, while recognizing the different skills and professional responsibilities of different court users, courts may wish to remind users:

  • That review of AI-produced material by a competent human is an important step in preventing inaccurate, discriminatory, or culturally insensitive information from being submitted to court.
  • That court users should understand the benefits and limitations of any AI tools they are using and be prepared to explain to the court what tools they used and for what purpose. This reflects the court’s overall cautious yet realistic approach, acknowledges the presence of challenges and risks, and assigns responsibility to the court user regarding tool selection and use. The degree to which courts may seek further explanation or hold court users accountable will be context-dependent. For example, courts would reasonably expect counsel to bear greater responsibility in their use of AI than self-represented litigants.
  • That it is their responsibility to confirm, to the best of their ability, that all materials they submit, including any produced using AI, are accurate. Such a reminder balances transparency in AI use with practical realities.