19 Feb 2020
Hosted by Eversheds Sutherland, chaired by Paul Rose, C.Arb FCIArb, and moderated by Ben Giaretta C.Arb FCIArb, this event was an excellent delving into the complexities posed by new technology, such as artificial intelligence (AI). The panel was first-class, comprising Joanna Goodman, a noted journalist who writes on business and technology for such publications as the Guardian, The Times and the Law Society’s Gazette; Dr Paresh Kathrani, the Director of Education and Training of the Chartered Institute and a scholar in artificial intelligence, ethics and the law; and, alphabetically last but by no means least, Jonathan Leach, a Partner in Eversheds Sutherland’s Dispute Management practice group in London specialising in international arbitration.
A full write-up of the evening’s fascinating proceedings has been provided in excellent form by Kim Franklin QC C.Arb FCIArb and can be found on the CIArb website here.
One of the matters that Ms Franklin rightly draws attention to in her article is that: “As prediction could only be based on the past performance, an arbitration algorithm would need data feed from live arbitrations.”
I drew attention to this by way of a question from the floor because it is fundamental to the growth of AI in arbitration practice that systems can learn the likely outcomes in given fact situations, from a given panel of arbitrators, under a particular rule set.
Other than DisputesEfiling.com (DEF), there is no working cloud-based platform capable of generating the required data. As such, DEF has a unique perspective on the various issues that were raised by the panel’s moderator, Ben Giaretta. Ben’s questions for the panel addressed the key issues of data and international arbitration and are reproduced in bold font below. DEF’s replies follow. Our exchanges are reproduced to encourage debate:
[Ben’s] first question was, do parties really want AI in arbitration? There will be time and cost savings, of course - but at the price of a reduction of human involvement. Is this what parties want? (The answer, of course, may depend on what type of AI - which [Ben] deliberately left vague here.)
AI is already at work in all forms of dispute resolution, where parties always want less cost and less time spent on what is typically a grudge spend. One area where AI applies in international arbitration is the assessment of the weight and credibility of evidence. Large amounts of quality data are crucial not only as an input during the operation of AI systems, but also to train them on evidence in the first place. Without access to large volumes of high-quality data, algorithms cannot learn. Even the most powerful AI techniques – with top-of-the-line hardware – are significantly less useful without access to quality data.
Assuming that dispute resolution is not moved entirely to an AI system, how will the inequalities of the partial adoption of AI amongst arbitrators be addressed?
The form of dispute resolution in question at the event was international arbitration and, here, the amount of data available is limited due to the usually confidential nature of proceedings. For some time to come, AI will remain limited and mainly an evidence/analytical tool deployed by the parties.
Equality of arms will remain an issue that parties will have to grapple with for a long time, and the funding made available by each party, together with the willingness of the lawyers to embrace technology, will influence this.
Will the use of AI by one party alone make the arbitration process unfair?
No more unfair than it is at present when one party has more able lawyers with better organised and resourced teams. AI is simply another tool in the box.
How will a human arbitrator deal with an expert report that has been produced by AI?
Training is key. Modern administered schemes are moving to secure platforms for the administration of their cases by, for example, outsourcing to DEF (as CPR has done) or building their own platform, as the SCC did recently. CPR has also participated in developing the Cyber-Security Protocol for International Arbitration, and is now building out from that achievement by providing training in cyber-security for its arbitrator panel.
Although CIArb is not an administered scheme, there is an important role for CIArb in training its members about cyber-security and AI, which are the two big LegalTech issues facing modern arbitrators. Perhaps this event will prove the catalyst for a conversation around developing such provision.
How will a human tribunal hear and decide a challenge to the determination of an AI arbitrator? (Again I’m leaving the type of AI deliberately vague here.)
There is insufficient data available to enable AI systems to replace human judges completely. Coupled with the range and complexity of the fact/law matrix of most (if not all) international arbitrations, it is extremely unlikely that human arbitrators will be replaced in the near future. However, the processing power of computers is increasing at a tremendous rate, and AI may become capable of replacing human arbitrators in some matters in the medium term. The biggest obstacle remains the lack of data in sufficient volume to train the algorithms.
A platform such as DEF has a data feed and is capable of generating the volumes of anonymised data required, which may well support AI-enhanced analysis of arbitral outcomes. Armed with that, more cases may settle and more work will arise as arbitration becomes a process with greater predictability.
What would be the role of the AI providers in the arbitration world? (Again [Ben left] the type of AI deliberately vague here.)
To provide greater insight sooner about the likely outcomes from fact/law scenarios, enabling advisers to recommend more reliable settlement strategies, leading to lower costs and therefore attracting more work to the firms able to grasp the opportunity that modern technology affords.
Naturally they will be looking to generate revenue. What impact will their marketing efforts have on the practice of arbitration?
Competitive advantage can only be gained by access to large volumes of good quality data for algorithms to solve a known set of questions, the answers to which have value in the market place.
What obligations will AI providers have to assume in arbitration (on top of the obligations in the GDPR)?
The Cyber-Security Protocol for International Arbitration makes a number of relevant recommendations. These supplement the arbitrators’ duty to maintain the confidentiality of (most) arbitrations. Arbitrators have that duty, not the parties, and if arbitrators are going to rely on the parties bringing technology forward, then those arbitrators need training to understand what is being offered and the technology that lies behind it.
And what will arbitration look like if one or two technology companies emerge to become the dominant providers, like tech companies in some other sectors?
Arbitration will look like the e-Disclosure market. However, without access to large volumes of data, that market in AI will not grow as quickly as it can.
The big data industry talks about the 5 Vs: volume, velocity, variety, veracity and value. Will AI decision-making in arbitration based on data be feasible, given the relative lack of available information about arbitration cases?
This is the nub of the issue. There is simply not enough quality data available at present to enable AI for international arbitration to develop. Thus, it will mainly remain confined to a tool for the parties’ advisers to use in relation to document assessment and similar tasks.
The platforms used by individual arbitral institutions also inhibit the growth of AI, as there is not enough volume in each institution to provide the data for valuable analysis by an algorithm. Furthermore, those individual platforms most likely have not been built with a data feed in them.
A platform like DEF can aggregate data from across the world though and that is probably the best hope for the growth of AI to support decision making about settlement strategies and tactics.
On the other hand, could large amounts of data in different areas (e.g. about contracts generally, or about decisions in national court systems) be used in combination with pre-determined rules, in order to achieve a viable system?
Quite possibly, but the LegalTech developed for most jurisdictions does not have a data feed. For example, the IT modernisation programme being run by Her Majesty’s Courts and Tribunals Service in England and Wales is struggling to provide any kind of data feed, as none was contemplated, let alone built, when work began in 2016. Generic document-sharing platforms such as Microsoft Office 365 and Egress cannot produce such data either. DEF launched in 2015 and is the only such platform for managing ADR with a data feed capability.
And would the community of arbitration users see sufficient value in the pooling of data, so as to improve such systems in the future?
The scarcity of such data would make a reliable data set in sufficient volume a highly prized, if not unique, resource not just for practitioners but also for academics and arbitration centres. For arbitrators in particular, such analysis would enable them to sharpen their offering to the market, and those taking advantage would attract more work.
TONY N GUISE
Tony is the Director of DisputesEfiling.com Limited, the provider of online ADR platforms and a Past President of the London Solicitors Litigation Association.