USE OF ARTIFICIAL INTELLIGENCE BY COURTS TO ENHANCE COURT OPERATIONS
A Statement from the Action Committee
Our Committee supports Canada’s courts in their modernization efforts. It provides guidance for addressing challenges and highlights opportunities and innovative practices to modernize court operations and improve access to justice for court users.
1. CONTEXT
As the field of artificial intelligence (AI) has rapidly evolved in recent years, courts across Canada and abroad are considering how it can be leveraged to enhance their operations and promote access to justice. While AI offers exciting opportunities to strengthen courts’ public-facing and internal capacities, any integration of AI into court operations should always account for the potential risks such initiatives raise.
This document aims to support courts in determining whether and how best to use AI tools to enhance court operations. It outlines benefits and challenges, orienting principles, and operational considerations and stages for rolling out AI tools in the court context. As explained in the companion piece, Demystifying AI in Court Processes, which promotes a common understanding of the key AI terms and basic concepts used in this document, this guidance focuses exclusively on lower-risk areas of use: administrative functions, legal analysis, and research. It does not cover the use of AI in decision-making or in matters related to evidence, e-discovery, or substantive law, given the additional complexities associated with those domains. In its recent guidance on AI, the Canadian Judicial Council has indicated that judges’ decision-making authority should never be delegated to AI.
2. BENEFITS AND CHALLENGES
2.1 Benefits of court use of AI
Increased efficiency and accuracy: AI has the potential to streamline a variety of time-consuming tasks by performing them more quickly and more precisely than humans. For example, it can automate many administrative tasks involved in case flow management, such as docketing, scheduling, and document management, allowing court staff to reallocate time and effort to tasks requiring greater human intervention. AI is particularly well suited to identifying discrepancies and patterns in large volumes of data and to reducing duplicate materials, as the sketch below illustrates.
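By way of illustration, the following minimal sketch shows one way duplicate filings could be flagged by text similarity. The file names, sample text, and similarity threshold are hypothetical assumptions, not features of any particular tool; a production system would need far more robust matching and human review of flagged pairs.

```python
# Minimal sketch: flagging near-duplicate filings by text similarity.
# The 0.9 threshold and the sample documents are illustrative assumptions.
from difflib import SequenceMatcher
from itertools import combinations

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences are ignored."""
    return " ".join(text.lower().split())

def find_near_duplicates(documents: dict[str, str], threshold: float = 0.9):
    """Return pairs of document names whose normalized text similarity meets the threshold."""
    flagged = []
    for (name_a, text_a), (name_b, text_b) in combinations(documents.items(), 2):
        ratio = SequenceMatcher(None, normalize(text_a), normalize(text_b)).ratio()
        if ratio >= threshold:
            flagged.append((name_a, name_b, round(ratio, 3)))
    return flagged

docs = {  # hypothetical filings
    "motion_v1.txt": "Notice of motion for an extension of time to file materials.",
    "motion_v2.txt": "Notice of Motion  for an extension of time to file materials.",
    "affidavit.txt": "Affidavit of service sworn before a commissioner of oaths.",
}
print(find_near_duplicates(docs))  # flags motion_v1/motion_v2 as near-duplicates
```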
More targeted resource allocation: Predictive analytics can allow courts to anticipate their resource needs more accurately and plan accordingly to maximize their impact. Savings could then be reinvested in other areas of need.
Improved access to justice: Courts can use AI to enhance access to justice and court users’ participation in court processes in a variety of ways. For example, AI-assisted translation and transcription, while unofficial, can be leveraged by qualified jurilinguists to expedite the publication of official versions of court documents. As these tools mature, they might also reduce costs for both courts and court users. Courts can also host chatbots on their public-facing websites to help court users navigate court processes, including by assisting them in filling out electronic forms.
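As a simple illustration of the latter, the sketch below shows a rules-based chatbot of the kind a court might host. Unlike a generative tool, it returns only pre-approved answers, which limits the risk of inaccurate output; all questions, keywords, and answers here are hypothetical.

```python
# Minimal sketch of a rules-based website chatbot that points court users to
# the right resource. Keywords and canned answers are hypothetical examples.
FAQ = {
    ("file", "filing", "form"): "To file documents electronically, see the court's e-filing guide.",
    ("hearing", "schedule", "date"): "Hearing dates are posted on the court's daily docket page.",
    ("fee", "cost", "payment"): "Filing fees and fee-waiver information are listed under Court Fees.",
}

def answer(question: str) -> str:
    """Return the first pre-approved answer whose keywords appear in the question."""
    words = question.lower().split()
    for keywords, reply in FAQ.items():
        if any(keyword in words for keyword in keywords):
            return reply
    return "Please contact the court registry for assistance."  # fallback to a human

print(answer("How do I file a form with the court?"))
```

A design note: routing unmatched questions to registry staff, as the fallback above does, helps preserve the human connection discussed in section 2.2.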
2.2 Challenges and risks of court use of AI
Barriers to access: The technologies AI relies on to function are not universally accessible. Some persons may lack the technology needed to access AI or the knowledge to use it effectively; geography and limited internet access, especially in northern communities, may also pose challenges; and others may have physical or cognitive disabilities that prevent them from using AI. Courts integrating AI to enhance their operations should therefore be mindful of the diversity of experiences of potential AI users, whether internal or from the public, depending on the tool, and should assess whether their needs are met. If not, alternative or supplementary tools should be considered to bridge any access gaps.
Inaccuracy, bias and discrimination: Depending on how an AI tool was designed and on the data that informs its outputs, there is a very real risk that it will generate inaccurate or incomplete information or perpetuate biases that lead to discrimination. Even without algorithmic bias, the way in which data is obtained and organized can reproduce and further entrench serious harm, particularly with respect to marginalized communities. Even approaches that aim for equity can suffer from insidious discrimination, which occurs when biases are unconsciously incorporated at preliminary stages of data collection and management, as well as within AI design. For example, given the extent to which the legacy of colonialism is present in existing legal sources, it is critical that any use of AI in Indigenous contexts be approached with particular care to appreciate and address such risks.
Both humans and technology can be sources of bias. Awareness is the first step to identifying and mitigating such biases and preventing further discrimination. For example, data collection that excludes important context about Indigenous family or governance structures can reinforce existing biases and lead to harmful AI output when used in contexts that involve Indigenous litigants or legal issues.
Biases in the source material used to train AI may also lead to unintended consequences for Indigenous people and other historically disadvantaged Canadian communities, including, for example, Black Canadians. Inaccuracy could also arise where a generic tool lacks the capacity to function appropriately in the court context, for example, where it is unable to recognize and learn legal terms and concepts.
Data management: Data should not only be accurate and unbiased, but also collected, retained, and handled within a strong data management system. The optimization of AI tools relies on a foundation of accurate data formatted to be compatible with AI use. As a result, a court that relies on paper records, for example, would likely require intermediary steps to extract and digitize the data before it could integrate AI into its processes, as the sketch below illustrates.
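For example, a minimal digitization step might use optical character recognition (OCR) to convert scanned pages into machine-readable text before any AI processing. The sketch below assumes the open-source Tesseract engine with the third-party pytesseract and Pillow packages, and a hypothetical folder of scanned records.

```python
# Minimal sketch of an intermediary digitization step: OCR on scanned paper
# records. Assumes Tesseract plus the pytesseract and Pillow packages are
# installed; the "scanned_records" folder is a hypothetical example.
from pathlib import Path

import pytesseract
from PIL import Image

def digitize_record(scan_path: Path) -> str:
    """Run OCR on a scanned page image and return its text."""
    return pytesseract.image_to_string(Image.open(scan_path))

for scan in Path("scanned_records").glob("*.png"):
    text = digitize_record(scan)
    scan.with_suffix(".txt").write_text(text, encoding="utf-8")  # save alongside scan
```

OCR output still requires verification; human review remains essential before such text informs any downstream use.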
Transparency: AI tools that lack transparency pose challenges because humans cannot understand how their output is produced. This can make it difficult to assess the accuracy of outputs, as well as the risks associated with the use of AI, such as the misappropriation of sensitive information by corporate entities. This is particularly problematic for decision-making AI tools, although the concerns remain relevant, to a much lesser degree, for lower-risk tools developed for administrative purposes.
Privacy and cybersecurity: The sensitive nature of data used and generated by courts requires particular care to ensure that appropriate privacy and cybersecurity measures are upheld. Risks are heightened where an AI tool is not tailored to the court context, as it will not have been developed with these important considerations in mind. For example, courts should refrain from copying and pasting court information into generic, publicly available tools like ChatGPT.
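Where a court has approved a tool, one additional safeguard is to scrub obvious identifiers before any text leaves the court’s systems. The sketch below is illustrative only: the identifier patterns, including the file-number format, are hypothetical assumptions, and such scrubbing is a complement to, not a substitute for, the court’s privacy framework.

```python
# Minimal sketch: replacing obvious identifiers with placeholders before text
# is processed by an external tool. All patterns are illustrative assumptions;
# the file-number format is hypothetical.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
    "file number": re.compile(r"\b[A-Z]{1,3}-\d{2}-\d{3,6}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

sample = "Contact jane.doe@example.com or 613-555-0199 about file CV-24-1234."
print(redact(sample))  # identifiers replaced with placeholders
```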
Loss of personal connection and rapport: As with all technologies, using AI tools for dispute resolution in a way that replaces or significantly alters the extent of human-to-human interaction could create barriers to earning the trust of court users. This is of particular concern in the Indigenous context where personal connection and relationships are of fundamental importance, and where confidence in the Canadian justice system can often be limited due to the legacy of colonialism.
3. ORIENTING PRINCIPLES
The following orienting principles are presented to assist courts in considering how they could responsibly use AI tools to enhance court operations. They are relevant throughout the lifecycle of AI.
3.1 AI is a tool, rather than an end in itself
AI will not be the appropriate solution to every problem and should not be used simply because it is new, exciting, or available. Any use of AI should be founded on identifying the problem and assessing possible solutions, including other technologies or non-technological approaches, rather than simply integrating AI into ineffective processes.
3.2 AI should support access to justice and a user-centered approach
Serving the public fairly and effectively should guide all decisions related to the use of AI. Consider all potential users of the tool and incorporate their needs into its design, implementation, and monitoring. Consult affected users at all stages of the process to evaluate whether the tool is achieving expected outcomes, and promptly address any issues as they arise.
3.3 Human oversight of AI is essential
Review of AI output through competent human oversight is important at all stages for validating results and making any necessary corrections. The level of human oversight required will depend on various factors. For example, greater oversight may be required for tools not developed specifically for court or legal purposes, or in the early stages of a tool developed for courts, to evaluate its accuracy.
3.4 Communication promotes accountability and confidence in the courts
Courts and court users have varied perspectives and concerns about the use of AI. Clearly communicating how courts are using and monitoring AI, how this will benefit users, and what safeguards are in place to mitigate risks can promote effective and appropriate use of AI and build confidence in the court’s processes. A court could retain the right to exclude the use of any tool that it determines it is not equipped to handle; undertake to remain up to date with significant changes in the field; and communicate any necessary modifications to its approach to AI in a timely manner.
3.5 Appropriate data privacy and cybersecurity measures are needed
A strong data privacy and cybersecurity framework, including a clear protocol in the event of a breach, can mitigate risks associated with using an AI tool to store or process any sensitive information handled by courts. Consideration should be given to how AI-related policies or protocols fit within existing frameworks for information management and information technology.
3.6 AI is constantly evolving and requires continuous learning
Developing a basic understanding of fundamental AI concepts makes it easier to identify opportunities and risks and appreciate related issues. Keeping informed of developments in the field of AI is crucial to evaluating the appropriateness and continued effectiveness of any use of AI. Courts should be aware of any evolution in legislation, guidance, and best practices.
4. KEY STAGES TO ROLLOUT OF AI TOOLS IN THE COURTS
While AI raises both novel possibilities and concerns, a decision to implement an AI tool in court operations is generally no different from adopting any other new technology. This calls for a structured approach at every stage: from determining whether AI is appropriate, to designing the project, integrating the tool into the court’s operations, monitoring its use, and ultimately phasing it out or switching to a different tool.
4.1 Needs assessment and planning phase
Begin by identifying key challenges or areas for improvement. Focusing on problems first facilitates a solution-oriented approach, which may or may not include AI, and reduces the risk of adopting new technologies regardless of how appropriate they might be. Understand the parameters of any existing data management system and technology, including their advantages and areas for improvement. Next, study potential applications of AI, including their opportunities, challenges, and risks, and carefully explore and compare alternatives to decide whether AI should be used. Build on this knowledge to assess whether the proposed solution is feasible and to refine an understanding of relevant issues.
Consultation is a critical part of this assessment. Consider:
- Engaging subject matter experts to:
  - ensure the whole project team understands key concepts; and
  - consider the appropriateness of a potential AI tool.
- Integrating input from communities that may be impacted by the AI tool.
- Ensuring that consultations with marginalized communities are conducted in a culturally appropriate manner. For example, some communities have experienced trauma in relation to data collection that could inform perceptions of AI. For more information on engaging with court users, see the Action Committee’s publication on Gathering User Perspectives to Support Effective Court Operations.
Project team: Identify the necessary people, roles, responsibilities, and approval processes. If all of the required expertise is not available within the court, seek outside support; this may be especially necessary in technologically complex areas. Develop a plan for how team members will interact, including any anticipated changes over time.
Internal collaboration: Ensure that both the judicial and administrative branches of the court are engaged and work together on an ongoing basis. Regular internal check-ins make it easier to respond promptly and cohesively to changes in technology, law, or policy.
Clear roadmap: To generate constructive discussion and lay a common foundation for future conversations, create a clear, comprehensive roadmap of how AI will be used. This roadmap should consider both long- and short-term plans. It should also account for budget and resource realities. While AI may eventually offer efficiencies, the initial outlay, including the cost of developing or procuring an AI tool as well as consultation and training, remains an important factor in planning.
4.2 AI project management phase
Whether a court decides to use an off-the-shelf AI tool, have one customized to fit its needs, or drive the creation of something unique, common elements should guide the design, deployment, and decommissioning stages of the process, also known as the lifecycle of AI (see Demystifying AI in Court Processes for definitions of these stages). The initial decision concerning the court’s level of involvement in developing the tool it plans to use will shape how each of the following are implemented in practice.
4.2.1 Data handling – throughout and post-decommissioning
- Ensure that the treatment of data related to the tool complies with all relevant legislation and policy on data protection and cybersecurity. This includes data that informed the tool’s design, deployment, and decommissioning; outputs; training data; and any by-products created over the course of the lifecycle. Data retention and destruction should also be considered and aligned with pre-existing policies, as the sketch below illustrates.
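As one illustration of aligning retention with policy, the minimal sketch below flags files whose retention period has lapsed so they can be reviewed for secure disposal. The seven-year period and the directory name are hypothetical assumptions; actual periods must follow the court’s own retention schedules.

```python
# Minimal sketch: flag AI-related data files older than a retention period for
# review before secure disposal. The 7-year period and folder name are
# hypothetical; real retention schedules are set by court policy.
from datetime import datetime, timedelta
from pathlib import Path

RETENTION = timedelta(days=7 * 365)  # hypothetical seven-year period

def files_due_for_disposal(data_dir: str) -> list[Path]:
    """List files whose last modification predates the retention cutoff."""
    cutoff = datetime.now() - RETENTION
    return [
        path
        for path in Path(data_dir).rglob("*")
        if path.is_file() and datetime.fromtimestamp(path.stat().st_mtime) < cutoff
    ]

for stale in files_due_for_disposal("ai_tool_outputs"):
    print(f"Flag for review and secure disposal: {stale}")
```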
4.2.2 Design
- The purpose for which the tool was developed should respond to the court’s specific needs. This includes substantive legal aspects, practical realities, cultural sensitivity, and any necessary protections for sensitive information inherent to the court context. Courts should return to the tool’s purpose when shaping objectives, expected outcomes, and performance indicators to measure results against expectations.
- Consider the broader context of an AI tool, as well as its developer’s previous, current, or anticipated work. This includes potential conflicts of interest, level of expertise, and history of any major issues such as confidentiality or privacy breaches, or human rights violations.
- Tools are developed, refined, and evolve through testing and training. While this applies throughout the lifecycle of AI, it is initially performed by the developer at the design stage. Ideally, courts should be involved in testing early versions of the tool to ensure it responds to their needs. Where the court is not directly involved, it is important to ask questions to understand the testing and training process.
- For successful integration, a tool’s specific technical requirements must fit within the court’s broader systems and structures. For example, consider whether the AI tool is compatible with any other software with which it might need to interact.
4.2.3 Deployment
- Consider how deployment will be structured, and the extent of developer support required or desired at different stages.
- Trialing multiple tools simultaneously and under different conditions will help a court determine which best suits its needs. Problems of obsolescence and incompatibility can be avoided by being mindful of the longevity of an AI tool, in addition to how different versions might interact.
- Piloting the chosen tool before a full launch allows for troubleshooting as necessary. Choose a diverse group of testers to better capture how the tool works for individuals with a range of experiences, including technological literacy and comfort level as well as subject matter expertise. For example, counsel with a deeper legal understanding will be better able to identify inaccuracies in AI-produced content; and individuals unfamiliar with basic technology may not maximize their use of an AI tool.
- Create a transition plan to minimize disruption and delay to the court’s work when the tool is deployed, as issues may still arise once use becomes more widespread and consistent. Keep alternative approaches available in parallel with the new AI tool during the transition period, and engage in sound data management practices to avoid data loss or confidentiality or privacy breaches.
- Offer initial and ongoing training on AI and its implications for court activities. Training needs will differ between the administrative and judicial branches of the court, depending on how each will use the new tool and on judicial independence considerations.
- Regular auditing of the tool should be performed by a team of technical, legal, and administrative experts to ensure that it operates correctly and remains an appropriate solution to the problem identified. This also provides an opportunity to integrate feedback and continuously refine the tool, depending on the court’s degree of involvement in its development. The tool’s intended purpose will be useful in framing a rigorous assessment of its performance (see above); a minimal audit sketch follows this list.
- Beyond initial procurement, any relationship between the developer and court should be clearly established and communicated. Incorporate the opportunity to provide feedback and request modification of an AI tool on an ongoing basis.
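To make the auditing step referred to above concrete, the sketch below computes one possible audit metric: the share of AI outputs that match human-verified records. The sample data and any accuracy target are hypothetical; a real audit would track several indicators tied to the tool’s stated purpose.

```python
# Minimal sketch of one audit metric: accuracy of AI outputs against a
# human-verified sample. All sample values are hypothetical.
def audit_accuracy(pairs: list[tuple[str, str]]) -> float:
    """Return the share of AI outputs that exactly match the verified answer."""
    matches = sum(1 for ai_output, verified in pairs if ai_output == verified)
    return matches / len(pairs)

sample = [
    ("2025-03-14", "2025-03-14"),     # AI-extracted hearing date vs. registry record
    ("Courtroom 4", "Courtroom 4"),
    ("Courtroom 2", "Courtroom 12"),  # a miss the audit should surface
]
print(f"Accuracy: {audit_accuracy(sample):.0%}")  # 67% on this hypothetical sample
```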
4.2.4 Decommissioning
- Establish clear parameters for decommissioning, which will be shaped by the degree to which the court was involved in developing an AI tool. This helps avoid a situation where a court wishes to continue using a tool that is no longer supported by the company that created it. Important elements include specifying criteria for decommissioning, considering potential impacts on other linked systems, retention periods, and how the process will be communicated to users of the tool.