Amicus Curiae, Series 2, Vol 4, No 3, 685-706 (2023)
Special Section:
AI and its Regulation (Part 1), pages 685-750
PUTTING THE ARTIFICIAL INTELLIGENCE IN
ALTERNATIVE DISPUTE RESOLUTION: HOW AI
RULES WILL BECOME ADR RULES
RYAN ABBOTT
University of Surrey
BRINSON S. ELLIOTT
The Cantellus Group
Abstract
This article argues that the evolving regulatory and governance
environment for artificial intelligence (AI) will significantly
impact alternative dispute resolution (ADR). Very recently, AI
regulation has emerged as a pressing international policy issue,
with jurisdictions engaging in a sort of regulatory arms race. In
the same way that existing ADR regulations impact the use of
AI in ADR, so too will new AI regulations impact ADR, among
other reasons, because ADR is already utilizing AI and will
increasingly utilize AI in the future. Appropriate AI regulations
should thus benefit ADR, as the regulatory approaches in
both fields share many of the same goals and values, such as
promoting trustworthiness.
Keywords: artificial intelligence; online dispute resolution;
alternative dispute resolution; regulation; governance;
trustworthiness; transparency; fairness; diversity; explainability.
[A] INTRODUCTION
The last year has witnessed a proliferation in the development and use
of artificial intelligence (AI). ChatGPT, a chatbot developed by OpenAI,
of artificial intelligence (AI). ChatGPT, a chatbot developed by OpenAI,
was recently recognized as the fastest-growing consumer application in
internet history, acquiring 100 million users between December 2022
and January 2023 (Gordon 2023). In February 2023, Colombian Judge
Juan Manuel Padilla Garcia posed several legal questions to ChatGPT,
including the chatbot's replies alongside his own ruling (2023) to "extend
the arguments of the adopted decision" (Rose 2023).1 Following extensive
online debate, Judge Garcia remarked that while ChatGPT and other
technology programs should not be used to replace judges, they can
improve the efficiency of judicial proceedings by aiding in document
drafting and performing other secretarial tasks: "by asking questions to
the application, we do not stop being judges, thinking beings", he said
(Taylor 2023). Ironically, when asked by journalists about its role in
the judicial system, ChatGPT itself appeared more reluctant than Judge
Garcia, responding, "Judges should not use ChatGPT when ruling on legal
cases ... It is not a substitute for the knowledge, expertise and judgment
of a human judge" (Taylor 2023). If the swift uptake of ChatGPT for legal
support is any indication, there will soon be a greater influx of AI systems
in legal and alternative dispute resolution (ADR).
[B] BACKGROUND ON AIDR
Nearly 70 years after the term "artificial intelligence" was coined, it lacks
a generally accepted definition. The struggle to achieve consensus on
the definition of AI is symbolic of a larger struggle to achieve consensus
on appropriate AI regulatory and governance frameworks. We define
AI as an algorithm or machine capable of completing tasks that would
otherwise require cognition (Abbott 2020, 22). This definition centres on
AI's functionality rather than the specific way it is programmed, because
the law should focus on regulating AI behaviour (Abbott 2020).
Recent improvements in AI capabilities are due in large part to increases
in the availability of on-demand, voluminous and complex (structured
and unstructured) datasets, or 'Big Data', along with advancements in
software designs and computing power. Many of the recent and most
prominent breakthroughs in AI have relied on machine learning, a
particular sub-discipline of computer science that operates by using
statistical methods to make classifications or predictions. However, in the
ADR context, from the 1970s until recently, AI models were largely rule-
based or expert systems, requiring developers to foresee and manually
code all potential inputs and outputs relevant to a given dispute. For
example, one early ADR system utilizing AI (AIDR), developed by the
RAND Corporation in the 1970s and 1980s to support California product
liability settlements, modelled human litigators' and insurance adjusters'
1 Judge Garcia asked several questions, including, "Is an autistic minor exonerated from paying
fees for their therapies?" and "Has the jurisprudence of the constitutional court made favorable
decisions in similar cases?" The chatbot responded correctly: "Yes, this is correct. According to
the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their
therapies."
decision-making processes for a series of hypothetical disputes. The if-
then (input-output) rules mirrored a person's mental linking of facts and
conclusions, chained together by legal rules (Waterman & Peterson 1981).
The AI documented its reasoning in a decision tree, adjusting course in
response to new or altered facts, and ultimately providing visual evidence
of how it reached its conclusion (Waterman & Peterson 1981). Affirming
the technical sophistication needed to build a system capable of handling
even relatively straightforward disputes in a narrowly defined area with
known parameters, the RAND prototype required several thousand if-
then rules (Waterman & Peterson 1981). Relatively large-scale consumer
e-commerce systems, such as eBay's and PayPal's dispute resolution
systems from the early 2000s, operated in a similar rule-based fashion.
AIDR systems2 have come a long way since these applications, and
demand has increased recently due to the Covid-19 pandemic that
restricted travel and face-to-face interaction, leading practitioners to
leverage online dispute resolution (ODR) systems incorporating some
degree of AI in document-sharing, video-conferencing and case-intake
technologies (Orr & Rule 2019; Rickard 2021). Some AIDR systems
also help facilitate or independently manage legal research, negotiation,
settlement, document drafting and decision support (Zeleznikow 2021).
There has been continued debate about whether and how best to
regulate ADR and AIDR (eg command-and-control regulations, self-
regulation, trust marks, clearing houses), and no specific regulatory
approach or centralized enforcement authority3 has emerged (Liyanage
2013). This landscape has led some to conclude that there is little to no
regulation, authority, standards or monitoring, making ADR an "informal
system" (Menkel-Meadow 2013) and a "largely unregulated industry"
operating behind closed doors (Dore 2006; Hensler 2017). Commentators
point to the absence of agreed-upon and enforceable qualification and
licensing requirements, responsibilities and obligations, and behavioural
standards for neutrals (Rolph & Ors 1996; Menkel-Meadow 1997;
Hensler 2017),4 procedural safeguards of adjudication (Roberts 1993)
2 "Al systems" refers to the entirety of the Al lifecycle, including the models, composed of
algorithms and data, as well as the human, social, and industry context or ecosystem the Al operates
in or impacts.
3 There is "no national or centralised form of 'regulation' of dispute resolution in the US" (Menkel-
Meadow 2013).
4 "ADR itself is arguably a low governance field because in most countries practitioners are
unlicensed and the field is largely unregulated ... Standardization or regulation of any sort has
generally only applied to practitioners seeking to practice in official or public frameworks, such
as professional organizations and courts, which require certain standards of certain practitioners,
in particular for those practitioners involved in court-connected mediation" (Ebner & Zeleznikow
2016).
and judicial review except in limited instances of neutral misconduct
(Dore 2006). Where private and court ADR rules of practice and ethics
exist, some argue that the "breadth, reach and enforcement mechanisms
for an ethics of ADR become highly pluralistic, substantively conflictual
and procedurally cumbersome" (Menkel-Meadow 1997).5 The absence
of formal procedural and institutional safeguards and enforcement
mechanisms has led some to question the quality of ADR in the absence
of regulation (Rolph & Ors 1996).
While ADR is not regulated in the same way or to the same extent
as conventional litigation or legal practice, there are a host of laws
that apply to ADR despite not being ADR-specific, such as professional
standards that apply to advocates and neutrals licensed to practise law
and working in ADR, or data protection laws that govern the use of certain
information in ADR proceedings. These rules may conversely apply to the
use and development of AI systems in ADR, and there are some existing
and emerging institutional governance and regulatory mechanisms that
set standards and expectations specifically for AIDR systems' design,
development and deployment.
Classifications, Applications and Impacts
How AI impacts ADR processes, disputants and the role of the third-
party negotiator, mediator or arbitrator (the "neutral") depends, among
other things, on the technology used, tasks executed and the level of
human oversight and intervention. It is helpful to consider AIDR systems
as existing on a spectrum (see Figure 1).
[Figure 1: Illustrating the range of AIDR systems on the spectrum from assistive to automated. Labels along the spectrum include Fully Assistive, Partially Assistive, Human-Directed Technology, Human-Managed Technology, Technology-Aided, Technology-Directed, Technology-Managed, Partially Automated, Fully Automated and Automated Decision-Making.]
s "Governmental and other organizations in the United States are regulating ADR and TPs [third
parties], but the common regulatory approach is formalistic at best; mediators are subject to one set
of regulations, arbitrators another, and many of these rules apply only to court-attached procedures"
(Silver 1996).
Assistive technologies, which can support, inform or make
recommendations to neutrals, account for one end of this spectrum.
These technologies can expedite and improve ADR outcomes by
eliminating administrative and procedural impediments (eg document
management and drafting, communications, calendaring, travel) and
equipping neutrals with the informational resources (eg advanced legal
research) that they need to make accurate, informed decisions. Assistive
technologies are being leveraged in real time. Harvey, a large language
model-based platform, is assisting attorneys with contract analysis, due
diligence, litigation and regulatory compliance in several languages (Allen
& Overy 2023). The system is reportedly providing faster, improved and
cost-effective recommendations and predictions that attorneys can review
and verify (Allen & Overy 2023). Applied to ADR, such a system could
simplify and supplement the time- and resource-intensive aspects of
neutrals' work and help satisfy various procedural requirements, such as
by providing oral and written communications to disputants or decreasing
costs for human translators by providing first-pass translations.
The benefits offered by assistive technologies can accrue to disputants,
who may utilize ADR over traditional litigation due to its relative
efficiency, affordability and reliability (Carneiro & Ors 2014). Assistive
AIDR is therefore well equipped to satisfy ADR's core objective to provide
disputants with a fair, efficient and economical resolution process (United
Nations Commission on International Trade Law (UNCITRAL) Model Law
2006). Since neutrals retain control over the dispute resolution process
and sole authority over case outcomes, there is broad support in the ADR
literature for expanding the use and development of AI that assists or
enables neutrals in performing their work in line with generally accepted
ADR values (Zeleznikow 2021).
Automative technologies, which occupy the other end of this spectrum,
can partially or fully automate discrete tasks and, in some narrow
instances, even replace neutrals. Some applications include automated
negotiation, settlement, award and resolution plan drafting, and
decision-making. CoCounsel, released in March 2023, claims to be the
world's first-ever AI legal assistant (Casetext 2023). Users can delegate
"substantive, complex work" (paras 5-7) to the system, including legal
research, document and contract analysis, and deposition preparation
(Casetext 2023). Proponents of automative technologies note that, insofar
as AI can detect correlative patterns in large datasets with a speed, scale
and precision that often outpaces human ability, it could study previous
disputes and apply core features, rules and insights to future matters.
Equipped with these insights, neutrals could improve the accuracy6 of their
decisions (Baryse & Sarel 2023). Or, with AIDR systems independently
resolving minor, straightforward disputes, neutrals could focus their time
on more complex matters.
Automated systems can also improve access to justice for self-
represented litigants by offering real-time, inexpensive legal advice and
explanation (de la Rosa & Zeleznikow 2021). Providing potential disputants
with an accurate forecasted case outcome empowers underrepresented
parties to make informed decisions about whether to pursue ADR
altogether, helping alleviate long-standing concerns about ADR favouring
disputants with more power and resources (Miller 2022). Studies have
also found that some individuals have an easier time confiding in an AIDR
system than a human neutral, either because there is a greater degree of
anonymity or because AI systems offer no (overt) feelings of judgment or
bias against identity traits (Orr & Rule 2019).7 ADR participants are often
concerned about neutral bias and may select, for example, neutrals whose
nationalities differ from disputants' to promote impartiality (UNCITRAL
Mediation Rules 2021). ADR participants may similarly view AI as less
likely to be partial to a particular disputant or dispute domain, regardless
of whether that is a correct perception. Disclosure requirements vary
greatly between jurisdictions,8 which some commentators say prevents
parties from easily or inexpensively accessing information about neutral
misconduct or conflicts necessary to make an informed selection (Silver
1996; Dore 2006). Lacking any outward personal, financial or professional
interests, a well-trained and explainable AI system could operate as an
uninterested neutral.
Most existing automative systems are unable to perform significant
tasks independently or without any human oversight, however
(McKendrick & Thurai 2022). Many commentators have noted this
"implementation gap between those technologies which are proposed
and predicted within the field, and those which have been realized"
(Alessa 2022, 324). Moreover, despite automative technologies' potential
benefit to disputants and neutrals, there are significant costs and risks
associated with the adoption of automative ADR technologies, as we
6 For example, in 2017, an AI system developed by researchers at Cambridge University performed
with greater accuracy (87%) than a group of 100 experienced lawyers (62%) when predicting the
outcomes of 775 financial ombudsman cases (Tashea 2017).
7 Some scholars are exploring whether automated decision-making can de-bias judges (Chen 2019,
as cited in Baryse and Sarel 2023).
8 California has the most comprehensive disclosure requirement in the US, requiring disclosure of
a third party's past ADR work "to inform the disputants of a pattern of bias within an industry or
substantive dispute" (Silver 1996).
consider further below (Orr & Rule 2019; Rajendra & Thuraisingam
2021).9 In contrast to assistive technologies, automative technologies
face greater scepticism because their outputs can be used to determine
ADR case outcomes with little to no human oversight.
Many systems occupy the space between these two ends of the AIDR
spectrum. For example, British Columbia's Civil Resolution Tribunal
(CRT), an AI expert system, independently performs case intake,
management and communications and provides disputants with a
negotiation forum.10 However, if disputants are unwilling or unable to
reach an agreement in an automated environment, the CRT will notify
a human tribunal member, who will then oversee the duration of the
resolution process. Other systems, such as SmartSettle, an AI negotiation
tool, can independently arrive at a compromise between disputants and
provide a recommended settlement to a human neutral.11 The neutral
may agree with the recommendation and provide it to the disputants, or
overrule it and make their own mediator's proposal. A system's position
on the AIDR spectrum is therefore not solely determined by its design
and capabilities, but also how and for what purpose(s) the technology is
used by parties and neutrals. Still, given the impact that even partially
assistive AI systems can have in dispute resolution, it may be useful
to think of AI as taking on an active "fourth party" role in the ADR
process (Katch & Rifkin 2001, as cited in Carneiro & Ors 2014), and
of AI developers as a "fifth party" due to their discretion in setting
AI's rules and logic and supplying its training data (Lodder 2006, as
cited in Carneiro & Ors 2014). Acknowledging AI and its developers as
active participants in the ADR process is critical to understanding the
technical, procedural and normative impacts of AI involvement.
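The human-in-the-loop pattern that the CRT and SmartSettle illustrate can be sketched as a simple escalation workflow: an automated stage attempts a settlement, and an unresolved dispute passes to a human neutral who may adopt or override the AI's recommendation. The thresholds, names and logic below are hypothetical, not either system's actual rules.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Dispute:
    claim: float   # amount the claimant seeks
    offer: float   # respondent's best offer so far

def automated_negotiation(d: Dispute) -> Optional[float]:
    """Hypothetical rule: propose the midpoint only when the gap is small."""
    if d.claim - d.offer <= 0.1 * d.claim:   # parties are already close
        return round((d.claim + d.offer) / 2, 2)
    return None                              # no automated agreement; escalate

def resolve(d: Dispute, neutral_review: Callable[[Dispute, Optional[float]], float]) -> float:
    proposal = automated_negotiation(d)
    # The human neutral sees the AI proposal (or None on escalation) and
    # may forward it to the disputants or substitute their own decision.
    return neutral_review(d, proposal)

def neutral(d: Dispute, proposal: Optional[float]) -> float:
    """Example neutral: adopt AI proposals, decide escalated cases directly."""
    return proposal if proposal is not None else d.offer + 0.5 * (d.claim - d.offer)

print(resolve(Dispute(claim=1000, offer=950), neutral))  # 975.0: AI proposal adopted
print(resolve(Dispute(claim=1000, offer=400), neutral))  # 700.0: human decision
```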
Challenges and Risks for AIDR
AIDR systems based on machine learning can operate by detecting
correlative patterns in data, developing rules based on this analysis, and
applying those rules to new data. Unfortunately, this presents a weakness
in the dispute resolution context, as laws and rules do not provide "the
kind of structure that can easily help an algorithm learn and identify
patterns and rules" (Orr & Rule 2019, 9-10). Conflicts sometimes involve
9 Some automated negotiation support systems, which "do not automate the negotiation process
but provide IT support for complex negotiations, leaving the control over the negotiation process
with the human negotiators", are viewed as a limited exception (Schoop & Ors 2003, as cited in
Zeleznikow 2021).
10 Civil Resolution Tribunal, 'Societies and Cooperative Associations'.
" See Smartsettle Infinit.
multiple areas of law (eg tort, property, insurance, family) and concern
disputants located across international borders. In these cases, human
neutrals must identify relevant rules from disparate areas of law (and
perhaps legal systems) and interpret them against complex and disputed
fact sets. Conflicts of this nature do not lend themselves to "specialization
into specific case types" necessary for instructing AI (Orr & Rule 2019,
10). Add to this a dearth of sufficiently representative datasets due to
ADR confidentiality obligations, and it is even more difficult to train a
machine learning-based AI system to successfully navigate a complex
dispute without error and unfair bias.
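A minimal sketch of this pattern-learning approach appears below. The features, training data and outcome labels are synthetic and purely illustrative; real AIDR training data would be far richer and, as noted, is often unavailable due to confidentiality obligations.

```python
from sklearn.linear_model import LogisticRegression

# Each (hypothetical) past dispute: [claim amount in $1000s, documentation quality 0-1]
X_train = [[5, 0.9], [50, 0.2], [12, 0.7], [80, 0.1], [3, 0.8], [60, 0.3]]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = claimant prevailed in the past resolution

# "Develop rules" by fitting to past outcomes, then apply them to new data.
model = LogisticRegression().fit(X_train, y_train)
print("P(claimant prevails):", round(model.predict_proba([[20, 0.6]])[0][1], 2))
```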
Further calling into question AI's ability to independently resolve
disputes are capabilities lacking in such systems. Novel analysis and
interpretation may be required to determine standards or the application
of rules to new facts. Whether behaviour was "reasonable" or an outcome
"foreseeable" can depend entirely on subtle differences in context.1 2
Mediation, for example, often requires human neutrals to navigate social
and emotional issues, sometimes with underlying cultural differences
(Schmitz & Ors 2022). To assess disputants' reliability, neutrals regularly
depend on previous experiences, knowledge and normative judgements
(Waterman & Peterson 1981). AI may not be well equipped to successfully
automate the interpretive, human aspects of ADR, especially because
disputed facts are an inherent feature of many conflicts. While some AI-
powered lie detectors are better at discerning human credibility than
people (Shuster & Ors 2021), no existing system can do this reliably, and
several have been found to produce biased, discriminatory or otherwise
inaccurate results (Bittle 2020; Lomas 2021).
Concerns about AI accuracy, bias and fairness are significant given
the impact that AIDR outcomes can have on individuals' rights. Some AI
systems, colloquially referred to as "black boxes", can lack transparency
and explainability, meaning the logic according to which they make
predictions, recommendations or decisions is not explainable, at least
not in ways that make sense to system users. The use of such opaque
systems in legal or dispute resolution settings can undermine individuals'
right to a reasoned decision, as well as their right to challenge and appeal
from a decision, raising due process concerns.
For all these reasons, some critics conclude that "machine-made
justice" by automative technologies should never replace existing dispute
resolution processes by humans. They contend that technology can
12 According to the RAND Corporation, the "derivation of rules to describe such imprecise terms
would be among the more technically difficult tasks in developing a comprehensive rule-based
model" (Waterman & Peterson 1981, 18).
neither substitute human reasoning and common sense nor achieve
fairness and justice in the ADR context (Condlin 2017). Others are open
to automation on a more limited basis, for certain high-volume, low-value
disputes, or those with relatively limited grounds for factual disputes and
developed bodies of law, such as certain traffic violations.
[C] EXISTING RULES AND STANDARDS FOR
AIDR
Even in the absence of AIDR-specific rules and standards, rules and
standards that apply generally to ADR also apply specifically to AIDR. For
instance, for over 50 years, UNCITRAL has published conventions,
model laws and rules for international commercial trade law. The Model
Law on International Commercial Arbitration (amended in 2006), aimed
at developing harmonized international economic relations, has been
adopted in over 119 jurisdictions. While the Model Law is directed at
states, the UNCITRAL Arbitration Rules (revised in 2010) and UNCITRAL
Mediation Rules (2021) are rule sets that disputants can agree to use
in their ADR proceeding. While not the only set of ADR standards, the
UNCITRAL rules offer a globally accepted benchmark used by professional
associations, chambers of commerce and arbitral institutions.13
Though not drafted with AI in mind, several UNCITRAL arbitration
rules apply to AIDR, including requirements that neutrals must disclose
any conflicts of interest or biases undermining their impartiality or
independence; treat parties equally and provide reasonable opportunities
to present their cases; conduct hearings fairly and efficiently without
unnecessary delay and expense; determine the admissibility, relevance
and weight of evidence presented by disputants; and state the reasoning
upon which the award is based (UNCITRAL 2010).
In 2016, UNCITRAL articulated four principles that should underlie any
ODR process (fairness, transparency, due process and accountability)
and emphasized that existing ADR rules and standards, including
confidentiality, due process, independence, neutrality and impartiality,
apply equally to ODR (2016). UNCITRAL's Expedited Arbitration Rules
further affirm that technology uses are also subject to fair proceedings
rules, stating that neutrals should give disputants "an opportunity
to express their views on the use of such technological means and
consider the overall circumstances of the case, including whether such
technological means are at the disposal of the parties" (2021, 52).
13 UNCITRAL, 'Technical Assistance and Coordination'.
The frameworks governing the ethical conduct of arbitrators (American
Bar Association (ABA) 2004) and mediators (American Arbitration
Association, & Ors 2005) also articulate agreed-upon expectations and
best practices for neutrals' obligations. In addition to those articulated
by UNCITRAL, several other ABA principles also apply to AIDR, including
prohibitions on neutrals acting with more or less authority than provided
by the agreement of parties or in a manner inconsistent with applicable
procedures and rules; requiring that decisions be made independently
and insulated from "outside pressure, public clamor, and fear of
criticism or self-interest" (ABA 2004, 4); and prohibiting non-accurate
or untruthful advertisements or the promotion of services and abilities
related to arbitration in a manner likely to mislead. In 2022, the ABA's
Dispute Resolution ODR Task Force developed a set of guiding principles
for ODR and thus AIDR, namely that the process should be: accessible,
accountable, competent, confidential, equal, fair, impartial, legal, secure
and transparent (2022), adding additional considerations for court-
connected ADR systems.
[D] THE EMERGING GLOBAL AI
REGULATORY LANDSCAPE AND ITS
APPLICABILITY TO AIDR
The AI regulatory landscape is extensive, dynamic and fragmented.14 We
focus here on approaches taken by the European Union (EU), United
Kingdom (UK) and United States (US), but many other jurisdictions
are also active in this area.15 By encouraging the responsible use of
trustworthy technology, or that which is fair, safe and consistent
with human and civil rights, these approaches attempt to address
and mitigate many of the challenges and concerns associated with AI
previously discussed.
14 For a representative list of global AI regulatory initiatives from governments, international
organizations, and civil society, see OECD Policy Observatory.
15 For example, Singapore was the first Asian country to publish a Model AI Governance
Framework (Infocomm Media Development Authority 2019) and the first country to launch an
AI Governance Testing Framework and Toolkit ("AI Verify") (Infocomm Media Development
Authority 2022); Canada was the first country to directly regulate federal government use of AI
(Directive on Automated Decision-Making 2019); Japan was the first country to raise, as an official
policy matter, the need to create AI development and implementation standards (Iida 2021).
European Union: AI System Risk Classification and
Product Liability Laws
The EU Artificial Intelligence Act (AI Act), proposed in 2021 and pending
potential enactment, would make the EU the first large jurisdiction
to specifically regulate AI. The AI Act seeks to regulate systems that
pose a potential risk to fundamental rights or human wellbeing and
categorizes AI use cases along four risk tiers: minimal, limited, high
and unacceptable (European Commission 2022). System developers'
and users' documentation, disclosure and transparency obligations
correspond with the risk levels, ranging from voluntary to obligatory. The
Act considers the use of AI technologies in the administration of justice,
or "applying the law to a concrete set of facts", as a high-risk application
subject to the following mandatory requirements before systems can be
released on the market (European Commission 2022, 41, para 40):
High risk - Risk assessment and mitigation systems, high quality
datasets, activity logging to promote traceability, appropriate levels
of human oversight, and high levels of robustness, security, and
accuracy.
The EU's proposed amendments to its product liability laws (European
Commission 2022) will complement the AI Act by ensuring that providers
and manufacturers of AI or AI-enabled systems that are defective and cause
physical injury, property damage, data loss or privacy breaches are liable
to compensate injured parties (European Commission 2022). These rules
apply broadly to both new and existing hardware and software products,
and manufacturers will be responsible for harms resulting from changes
or software updates that they make to products already on the market
(European Commission 2022). Cited forms of compensable harm include
discrimination by AI recruitment software or the onset of a health
condition caused by an innovative medical device.
AIDR Systems and Automated Decision-Making
Affirming that the EU considers AI in ADR high risk, in 2018, the
European Commission for the Efficiency of Justice (CEPEJ) adopted five
ethical principles for the use of AI in judicial systems, including ODR:
(1) respect for fundamental rights; (2) non-discrimination; (3) quality
and security; (4) transparency, impartiality and fairness; and (5) "under
user control" (CEPEJ 2018). While the CEPEJ acknowledged that AIDR
could significantly improve access to justice (2018, 44), it believes users
and deployers should assess the appropriateness16 and degree of AI's
integration in the dispute resolution process to ensure that transparency,
neutrality and loyalty requirements are being upheld (CEPEJ 2018).
To this end, the CEPEJ asserts that technology applications must not
undermine the following rights guaranteed in all civil, commercial and
administrative proceedings: access to a court; adversarial principle;17
equality of arms; impartiality and independence of judges; and right to
counsel (2018).
With respect to automated ODR systems, the CEPEJ references
article 22 of Europe's data protection law, the General Data Protection
Regulation (GDPR), which allows persons "to refuse to be the subject
of a decision based exclusively on automated processing" when the
automated decision is not required by law and entitles them to decisions
made by human decision-makers (2018). Beyond the right to object,
both the EU GDPR and UK Data Protection Act 2018 also confer on
data subjects the rights to be informed about the existence and use
of automated decision systems and to access meaningful information
about the systems' underlying logic and potential consequences (UK
Parliament 2018). Data subjects who explicitly consent to decisions based
solely on automated processing possess a right to obtain an explanation
of the system's decision (UK Parliament 2018). According to the UK
Information Commissioner's Office, explainability statements containing
the following explanations must accompany automated decision systems
released for use, namely: rationale, responsibility, data, fairness, safety
and performance, and impact (2020). These statements help address
concerns around black-box systems and provide disputants with a
greater ability to challenge an automated decision with legal effect.
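As a rough illustration of how these rights could be operationalized in an AIDR intake flow, the sketch below checks for a data subject's objection to solely automated processing, routing to a human neutral on objection, and attaches meaningful information about the decision logic when automation proceeds. The function and field names are hypothetical, not drawn from any statute or real system.

```python
def decide(dispute, subject_objected, automated_model, human_neutral):
    """Route a dispute in line with an article 22-style objection right."""
    if subject_objected:
        # The data subject refused a decision based solely on automated
        # processing, so a human neutral decides instead.
        return {"decision": human_neutral(dispute), "decided_by": "human neutral"}
    outcome, logic_summary = automated_model(dispute)
    return {
        "decision": outcome,
        "decided_by": "automated system",
        # Meaningful information about the logic involved, supporting the
        # rights to an explanation and to challenge the decision.
        "explanation": logic_summary,
        "challenge_route": "request human review of this decision",
    }

result = decide(
    dispute={"claim": 1200},
    subject_objected=False,
    automated_model=lambda d: ("award claimant", "claim below threshold; documentation complete"),
    human_neutral=lambda d: "award claimant",
)
print(result["decided_by"], "-", result["explanation"])
```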
The EU GDPR and UK Data Protection Act protect the personal
information of all citizens and residents regardless of whether they
are physically present in those territories (GDPR 2018, article 3). This
means that organizations operating outside the territories but processing
the information of EU or UK citizens and residents, monitoring their
behaviour or offering them goods and services must nonetheless comply
with the GDPR. Individuals protected under these laws could foreseeably
opt out or require explainability statements of automated decisions
that are part of AIDR processes outside of Western Europe. Even if an
automative AI system is not legally required to adhere to GDPR because,
16 The use of AI in a low-value e-commerce dispute poses less risk of serious harm than its use in
divorce proceedings or allocation of health care resources, for instance (CEPEJ 2018, 44).
17 "An adversary system resolves disputes by presenting conflicting views of fact and law to an
impartial and relatively passive arbiter, who decides which side wins what" (Freedman 1998, 1).
for example, it only processes the data of US citizens residing in the US,
GDPR has become a de facto standard for international organizations
because of the significant technical complexity and costs of having
systems operate in compliance with (sometimes conflicting) rules in
different jurisdictions. To lessen this burden and enable systems to be
used across jurisdictions, it is preferable for AIDR systems to abide by
a single data protection standard.
The ability to opt out of ADR processes that use automative technologies
and request a dispute be overseen by a human neutral is a governance
mechanism also being considered in the US. In October 2022, the
White House's Office of Science and Technology Policy (OSTP) released
its "Blueprint for an AI Bill of Rights". Focused on high-risk automated
technology systems, the Blueprint advances18 five key principles19 that
mirror or expand on those found in many other AI governance frameworks
(White House OSTP 2022). It identifies judicial and ADR processes as
requiring more stringent safeguards and protections, which might
include (a) the ability to opt out of ADR processes involving automated
technologies; (b) access to an explanation of how the system operates and
why it arrived at its resolution, so parties can challenge or appeal the
decision; and (c) comprehensive privacy-preserving security measures
for systems that use, process or extract sensitive data about individuals
(White House OSTP 2022). Some US state privacy laws, including those
in California (2018), Colorado (2021), Virginia (2023) and Connecticut
(2022), now codify residents' rights to opt out of automated decision-
making technologies in certain contexts and to receive meaningful
information about AI decision logic. Therefore, like the EU and UK, the
US is also emphasizing that, in high-risk areas, the logic and intent
underlying AI system outputs should be understandable to consumers.
Non-Regulatory AIDR Governance
AI governance is not purely a matter of regulatory compliance; a wide
range of non-binding best practices and standards also exist. The
ABA, for instance, notes it is critical to incorporate a broad range of
ADR practitioners' and stakeholders' input into ODR system design
and development (ABA (Dispute Resolution ODR Task Force) 2022). In
the absence of close collaboration between system developers and an
18 "The Blueprint for an Al Bill of Rights is non-binding and does not constitute U.S. government
policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation,
policy, or international instrument" (White House OSTP 2022, 2).
19 Safe and effective systems; algorithmic discrimination protections; data privacy; notice and
explanation; and alternative options.
implementing organization, the former will have discretion in determining
the model's design, training data and underlying logic, thereby influencing
the system's outputs. If collaboration in the design phase is not possible,
organizations procuring systems from external developers should take
steps to assess and mitigate any gaps between the developer's and the
user's needs, such as by articulating clear values, objectives and key
performance indicators for systems, and performing impact assessments
before and continuously after implementation (National Center for State
Courts nd; ABA 2022). Offering guidance for designers, developers,
providers, practitioners and users, the ABA lists the following criteria
among those it uses to describe accountable, secure, equal, and
transparent ODR systems (2022):
- uses data security technologies and practices that meet industry standards for information technology;
- indicates whether they comply with relevant governmental and non-governmental guidelines on transparency and fairness of AI systems;
- includes metrics used to assess system performance, including the accuracy of those metrics;
- is regularly audited for compliance and to evaluate whether the system is meeting articulated goals;
- provides at least the same confidentiality and privacy as does offline dispute resolution;
- does not provide any user with a systemic advantage.
The ABA maintains that these provisions supplement "applicable
technical standards or the legal and ethical principles that apply in
face-to-face dispute resolution processes", such as due process and self-
determination (2022, 2).
In 2019, the UK became the first jurisdiction to pilot public sector
AI procurement guidelines (World Economic Forum 2019), seeking to
encourage the adoption and use of responsible AI by the public sector and,
by extension, private businesses designing AI systems for government use.
Given that ADR processes deal with sensitive personal information and
decisions need to be explainable, the following procurement principles are
especially relevant for AIDR: enabling algorithms' internal and external
interpretability to establish accountability and contestability; appropriate
confidentiality, trade-secret protection and data-privacy practices; and
clearly defined data-sharing agreements with vendors (World Economic
Forum 2019).
The procurement of robust and secure AI systems is likewise
encouraged in the US. In January 2023, the National Institute of
Standards and Technology (NIST) published the first version of its
AI Risk Management Framework (RMF), a voluntary framework intended
to encourage the development, deployment and use of responsible and
trustworthy technology.20 Relevant for the entire spectrum of AIDR
systems, the RMF notes that human baseline metrics must be established
for AI applications that augment or replace human activity (NIST 2023).
It also recommends that organizations using external developer software,
hardware and data ensure that their risk tolerances align with those of
the developer, so as not to introduce any unanticipated risks (NIST 2023).
[E] HOW Al RULES WILL BECOME ADR RULES
AI and ADR are both regulated through rules that apply to more general
areas, such as privacy and advertising practices (Atleson 2023). Likewise,
ADR rules, such as requirements for conflicts disclosures, apply to AI used
in ADR. So too will the emerging body of AI rules apply to ADR. AI is already
part of ODR and many ADR processes, whether it is doing something
relatively simple on the assistive end of the spectrum like enabling video
conferencing and scheduling, or something closer to the automative end.
Recent advances in AI combined with the Covid-19 pandemic accelerated
the adoption of AIDR, but AIDR adoption will continue to increase as
AI capabilities continue to improve. Even very traditional ADR systems
will face competitive pressures to incorporate AI. Just as traditional legal
practitioners will face increased competition from legal practitioners
augmented by AI, or in some cases using automated systems, traditional
ADR providers and processes will face increased competition from AIDR
systems. At some point, it is likely that all ADR will be AIDR. As this
transition accelerates, AI rules will increasingly apply to ADR.
For instance, the European Parliament has suggested that deployers
of AI systems are in control of risks and have corresponding liability for
AI-generated harms (Committee on Legal Affairs 2020). ADR practitioners
may thus be liable for harms caused by AI systems they adopt in ways they
would not be liable for similar harms they cause directly. For example, an
ADR provider may have liability for using an AI system that is ultimately
proven to have a systemic racial bias, as has been alleged against systems
used by some judges to make bail determinations (Larson & Ors 2016).
While human neutrals can have liability for racially motivated behaviour,
20 Valid and reliable; safe, secure and resilient; accountable and transparent; explainable and
interpretable; privacy-enhanced; and fair with harmful bias managed.
a neutral cannot be interrogated about biases in the same manner as an
AI system. A human neutral is exceptionally unlikely to admit to racial
bias, or may have an unconscious bias, but either way is likely to justify
an award in a reasoned decision based on permissible criteria. Even if it
is possible to detect a potential bias through aggregating and analysing
enough of a neutral's publicly available arbitration awards, or a judge's
for that matter, such a finding is unlikely to be adequate grounds for
challenging a particular award's validity. In the case of a biased human
neutral, all of whose awards rule against disputants of a particular
race, it will thus be very difficult to prove that such an outcome is not
coincidental. By contrast, some AI systems can be evaluated directly to
prove the existence of bias if such a statistical finding emerges. AIDR
systems revealed to be operating with errors or unfair biases will then
need to be reprogrammed or decommissioned, providing another ADR
accountability mechanism. Human neutrals on the other hand are very
rarely disciplined or held accountable for errors or unfair biases (Silver
1996; Dore 2006). Similar liability considerations may apply under
product liability rules for AIDR systems, such that some harms caused
by AI systems in the ADR context would not entail liability had they been
caused by a person. One effect of this enhanced liability may be greater
attention to system designs of AIDR processes.
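The kind of direct statistical evaluation contemplated here can be illustrated with a short sketch comparing a system's award rates across two groups with a chi-squared test. The counts below are synthetic; a real audit would require larger samples, a stated fairness definition and controls for legitimate case differences.

```python
from scipy.stats import chi2_contingency

# Rows: demographic groups A and B; columns: awards for / against the disputant.
outcomes = [[80, 20],   # group A: 80% favourable outcomes (hypothetical counts)
            [55, 45]]   # group B: 55% favourable outcomes

chi2, p_value, _, _ = chi2_contingency(outcomes)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value is statistical evidence that outcomes differ by group;
# such a finding could support reprogramming or decommissioning the system.
```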
Even non-binding regulations may have a similar effect. For instance,
while the UNCITRAL and ABA rules and guidelines affirm neutrals should
treat parties equally and fairly, neither contains a prohibition on providing
users with a "systemic advantage" like that in the ABA ODR standards
(2022). Though not defined, systemic advantage in AIDR likely includes
technology-based advantages. Technology access and comfort shape the
dynamics of disputants in relation to each other and the neutral. Parties
using video-conferencing software may perform differently depending
on their backgrounds and environment, video quality and internet
connections. These technical factors can have small to large impacts on
the ADR process and ultimate resolution. For example, they can play
a role in advocates' abilities to present their arguments and neutrals'
perceptions of parties' professionalism and reliability. As these standards
become part of ADR, they may result in heightened obligations for neutrals
to level the playing field.
[F] CONCLUDING THOUGHTS
Appropriate AI regulations should benefit ADR because these regulations
seek to achieve goals and values that exist in both fields, such as promoting
trustworthiness, fairness and diversity. To the extent that AI systems
will be held to higher standards than human neutrals, such as greater
explainability and transparency standards, AI rules may help solve some
of the long-felt needs in ADR governance.
About the authors
Ryan Abbott, MD, JD, MTOM, PhD, is Professor of Law and Health
Sciences at the University of Surrey School of Law, Adjunct Assistant
Professor of Medicine at the David Geffen School of Medicine at UCLA,
and a mediator and arbitrator with JAMS, Inc. He is the author of The
Reasonable Robot: Artificial Intelligence and the Law published in 2020
by Cambridge University Press as well as the editor of the Research
Handbook on Intellectual Property and Artificial Intelligence published in
2022 by Edward Elgar. See also Ryan's website.
Email: rnabbott< surrey.ac.uk.
Brinson S. Elliott is an incoming JD Candidate and a Client Team
Leader at The Cantellus Group, a boutique advisory firm specializing in the
strategy, oversight and governance of AI and other frontier technologies.
Her research interests include the socio-technical, ethical and legal
implications of emerging technologies and their governance. Brinson holds
a BAH in political studies and philosophy, with Distinction, from Queen's
University, Kingston.
Email: brin.elliott@cantellusgroup.com
References
Abbott, Ryan. The Reasonable Robot Artificial Intelligence and the Law.
Cambridge University Press, 2020.
Alessa, Hibah. "The Role of Artificial Intelligence in Online Dispute
Resolution: A Brief and Critical Overview." Information & Communications
Technology Law 31(3) (2022): 319-342.
Allen & Overy. 2023. "A&O Announces Exclusive Launch Partnership
with Harvey," 15 February 2023.
American Bar Association (Dispute Resolution ODR Task Force). 2022.
"American Bar Association Section of Dispute Resolution Guidance for
Online Dispute Resolution (ODR)".
Atleson, Michael. "Keep Your AI Claims in Check," Federal Trade
Commission, 2023.
Baryse, Dovile, & Roee Sarel. "Algorithms in the Court: Does It Matter
Which Part of the Judicial Decision-Making Is Automated?" Artificial
Intelligence Law (Dordrecht) 8 (2023): 1-30.
Bittle, Jake. 2020. "Lie Detectors Have Always Been Suspect: AI Has
Made the Problem Worse," TechCrunch, 13 March 2020.
Carneiro, Davide, & Ors. "Online Dispute Resolution: An Artificial
Intelligence Perspective," Artificial Intelligence Review 41 (2014): 211-
240.
Committee on Legal Affairs. "Draft Report with Recommendations to
the Commission on a Civil Liability Regime for Artificial Intelligence,"
European Parliament, 2020.
Condlin, Robert J. "Online Dispute Resolution: Stinky, Repugnant, or
Drab?" DigitalCommons@UM Carey Law (2017): 717-758.
Casetext. 2023. "Casetext Unveils CoCounsel, the Groundbreaking AI
Legal Assistant Powered by OpenAI Technology". PR Newswire, 1 March
2023.
de la Rosa, Fernando Esteban & John Zeleznikow. "Making Intelligent
Online Dispute Resolution Tools Available to Self-Represented Litigants
in the Public Justice System." Presented at ICAIL 21, Sao Paulo,
Association for Computing Machinery, 2021.
Dore, Laurie Kratky. "Public Courts Versus Private Justice: It's Time to
Let Some Sun Shine in on Alternative Dispute Resolution" Chicago-
Kent Law Review 81(2) (2006): 463-520.
Ebner, Noam & John Zeleznikow. "No Sheriff in Town: Governance for
Online Dispute Resolution," Negotiation Journal 31(4) (2016): 297-323.
European Commission. "Directive of the European Parliament and of the
Council on Liability for Defective Products," 2022.
Freedman, Monroe H. "Our Constitutionalized Adversary System,"
Chapman Law Review 1(1) (1998): 57-90.
Garcia, Juan Manuel Padilla. "Rama Judicial De Colombia Juzgado 1º
Laboral Del Circuito Cartagena," [Judicial Branch of Colombia 1st
Labor Court of the Cartagena Circuit] Republica de Colombia Rama
Judicial Consejo Superior de la Judicatura, 2023.
Gordon, Cindy. "ChatGPT Is the Fastest Growing App in the History of
Web Applications," Forbes, 2 February 2023.
Hensler, Deborah R. "Our Courts, Ourselves: How the Alternative Dispute
Resolution Movement Is Re-Shaping Our Legal System," Dickinson Law
Review 122 (2017): 349-382.
Iida, Yoichi. 2021. "Japan Gives Minister's Award to Andrew Wyckoff for
His Achievements at the OECD," OECD AI Policy Observatory, 29 June
2021.
Infocomm Media Development Authority. 2019. "Singapore Releases
Asia's First Model AI Governance Framework," 23 January 2019.
Infocomm Media Development Authority. 2022. "Singapore Launches
World's First AI Testing Framework and Toolkit to Promote
Transparency; Invites Companies to Pilot and Contribute to
International Standards Development," 25 May 2022.
Information Commissioner's Office. 2020. "Explaining Decisions Made
with Artificial Intelligence."
Larson, Jeff & Ors. 2016. "How we Analyzed the COMPAS Recidivism
Algorithm," ProPublica, 23 May 2016.
Liyanage, Kananke Chinthaka. "The Regulation of Online Dispute
Resolution: Effectiveness of Online Consumer Protection Guidelines,"
Deakin Law Review 17(2) (2013): 251-282.
Lomas, Natasha. 2021. "'Orwellian' AI Lie Detector Project Challenged in
EU Court," TechCrunch, 5 February 2021.
McKendrick, Joe & Andy Thurai. 2022. "AI Isn't Ready to Make
Unsupervised Decisions," Harvard Business Review, 15 September 2022.
Menkel-Meadow, Carrie. "Ethics in Alternative Dispute Resolution:
New Issues, No Answers from the Adversary Conception of Lawyers,"
Responsibilities Symposium - The Lawyer's Duties and Responsibilities
in Dispute Resolution: South Texas Law Review 38 (1997): 407-455.
Menkel-Meadow, Carrie. 2013. "Regulation of Dispute Resolution in the
United States of America: From the Formal to the Informal to the 'Semi-
Formal'," Scholarship @ Georgetown Law 1-37.
Miller, Sterling. 2022. "The Problems and Benefits of Using Alternative
Dispute Resolution," Thomson Reuters, 29 April 2022.
National Center for State Courts (nd). "Request for Information (RFI) Toolkit
for Courts Procuring Online Dispute Resolution (ODR) Technology."
Orr, Dave & Colin Rule. (2019) "Artificial Intelligence and the Future of
Online Dispute Resolution." Presented at Artificial Intelligence and Its
Impact on the Future of ADR, Albany, New York State Bar Association.
Rajendra, Josephine Bhavani & Ambikai S. Thuraisingam. "The
Deployment of Artificial Intelligence in Alternative Dispute Resolution:
The AI Augmented Arbitrator," Information & Communications
Technology Law (2021): 176-193.
Raymond, Anjanette H. & Scott J. Shackelford. "Technology, Ethics,
and Access to Justice: Should an Algorithm Be Deciding Your Case,"
Michigan Journal of International Law 35 (2014): 485-524.
Roberts, Simon. "Alternative Dispute Resolution and Civil Justice: An
Unresolved Relationship," Modern Law Review 56(3) (1993): 452-470.
Rolph, Elizabeth & Ors. "Escaping the Courthouse: Private Alternative
Dispute Resolution in Los Angeles," Journal of Dispute Resolution 2
(1996): 277-324.
Rose, Janus. 2023. "A Judge Just Used ChatGPT to Make a Court
Decision." Vice, 3 February 2023.
Schmitz, Amy J. & Ors. "Researching Online Dispute Resolution to
Expand Access to Justice," Giustizia Consensuale [Consensual Justice]
(2022): 269-303.
Schoop, Mareike & Ors. "Negoisst: A Negotiation Support System for
Electronic Business-to-Business Negotiations in E-Commerce," Data
& Knowledge Engineering 47(3) (2003): 371-401.
Shuster, Anastasia & Ors. "Lie to My Face: An Electromyography Approach
to the Study of Deceptive Behavior," Brain and Behavior (2021) 1-12.
Silver, Carole. "Models of Quality for Third Parties in Alternative Dispute
Resolution," Articles by Maurer Faculty, Maurer School of Law: Indiana
University (1996): 37-94.
Tashea, Jason. 2017. "Artificial Intelligence Software Outperforms
Lawyers (without Subject Matter Expertise) in Matchup," Legal
Technology, ABA Journal, 3 November 2017.
Taylor, Luke. 2023. "Colombian Judge Says He Used ChatGPT in Ruling,"
The Guardian, 2 February 2023.
Waterman, D.A. & Mark A. Peterson. "Models of Legal Decisionmaking,"
The Institute for Civil Justice, RAND Corporation, 1981: 1-74.
White House Office of Science and Technology Policy. "Blueprint for an
AI Bill of Rights-Making Automated Systems Work for the American
People," 2022.
Wing, Leah. "Mapping the Parameters of Online Dispute Resolution,"
International Journal of Online Dispute Resolution 9(1) (2022): 3-16.
Zeleznikow, John. "Using Artificial Intelligence to Provide Intelligent
Dispute Resolution Support," Group Decision and Negotiation 30
(2021): 789-812.
Legislation, Regulations and Rules
American Bar Association. 2004. "The Code of Ethics for Arbitrators in
Commercial Disputes."
American Arbitration Association & Ors. 2005. "Model Standards of
Conduct for Mediators."
Artificial Intelligence Act 2021. "Proposal for a regulation of the European
Parliament and Council laying down harmonized rules on Artificial
Intelligence (Artificial Intelligence Act) and amending certain Union
legislative acts" EUR-Lex 52021PC0206.
California Consumer Privacy Act 2018. Section 3, Title 1.81.5 of the
CCPA, added to Part 4 of Division 3 of the California Civil Code [3] §
1798.185(a)(1)-(2), (4), (7).
Connecticut State Senate. 2022. "An Act Concerning Personal Data
Privacy and Online Monitoring" SENATE BILL 6.
European Commission. nd. "Regulatory Framework Proposal on Artificial
Intelligence."
European Commission. 2022. "Regulatory Framework Proposal on
Artificial Intelligence."
European Commission for the Efficiency of Justice (CEPEJ). 2018.
"European Ethical Charter on the Use of Artificial Intelligence in
Judicial Systems and Their Environment" Council of Europe (2018):
1-79.
General Assembly of the State of Colorado. 2021. "Colorado Privacy Act"
SENATE BILL 21-190.
General Assembly of the State of Virginia. 2023. "Virginia Consumer Data
Protection Act" § 59.1-575.
General Data Protection Regulation (GDPR) 2018. Regulation (EU)
2016/679 of the European Parliament and of the Council of 27 April
2016 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing
Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L
119/1.
Government of Canada. Directive on Automated Decision-Making 2019.
National Institute of Standards and Technology (NIST). 2023. "Artificial
Intelligence Risk Management Framework (AI RMF 1.0)."
UK Data Protection Act 2018
United Nations Commission on International Trade Law. 2006. "Status:
UNCITRAL Model Law on International Commercial Arbitration (1985),
with Amendments as Adopted in 2006."
United Nations Commission on International Trade Law. 2010. "UNCITRAL
Arbitration Rules (as Revised in 2010)."
United Nations Commission on International Trade Law. 2016. "UNCITRAL
Technical Notes on Online Dispute Resolution."
United Nations Commission on International Trade Law. 2021. "UNCITRAL
Expedited Arbitration Rules."
United Nations Commission on International Trade Law. 2021. "UNCITRAL
Mediation Rules (2021)."