Digitalisation, on the one hand, and automation, on the other, can greatly help speed up trial proceedings: documents can be scanned by algorithms to verify their authenticity, or to ensure that every party has access to the same documentation. Furthermore, automated processes can reduce the time lawyers spend on document review and drafting; this is already happening in the legal departments of some multinational companies. The job of legal professionals can be “sub-divided” into different components, a portion of which would become cheaper through automation.1 Overall, automation would decrease legal fees, thus opening legal representation to more people and improving access to justice.
Hence, algorithms could reduce the “justice gap”, defined as the difference between the civil legal needs of low-income individuals and the resources available to meet those needs.2 Automated trials and virtual arbitral proceedings, for instance in simple and “repetitive” disputes, usually in private-law litigation, would allow cases to be handled more quickly. In remote parts of the globe, such as rural areas, people may have to travel far from their homes to reach the nearest court, which can discourage them from filing suits or otherwise benefitting from the judicial system. Many disputes among neighbours could instead be resolved through automated mediation or other forms of alternative dispute resolution (ADR). As we have illustrated earlier (Part II), companies have already started commercialising these legal tech services to a wide audience, including firms and individuals.
Going a step further, algorithms could be used in simple cases to make decisions in lieu of humans, especially where the outcome of a case is “binary”. This would open the door to predictive justice and “machine learning”, the latter term referring to an algorithmic process that allows an AI programme to “learn” by itself from the data it processes.
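To make this notion of “learning” from data concrete, the following is a minimal sketch of how a machine-learning model could predict a “binary” case outcome. Every detail here is an invented assumption for illustration only: the two numeric features (claim amount, strength of evidence), the past outcomes, and the simple logistic-regression model itself are hypothetical, and no real legal tech product is being described.

```python
import math

# Hypothetical training data: each past case is reduced to two numeric
# features (claim amount in thousands; strength of documentary evidence
# on a 0-1 scale) and a binary outcome (1 = claim upheld, 0 = dismissed).
# All values are invented for illustration.
cases = [
    ([5.0, 0.9], 1),
    ([3.0, 0.8], 1),
    ([4.0, 0.7], 1),
    ([8.0, 0.2], 0),
    ([6.0, 0.1], 0),
    ([7.0, 0.3], 0),
]

def sigmoid(z):
    # Squashes any number into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# "Learning" here means iteratively adjusting the weights so that the
# model's predictions match past outcomes (gradient descent on a
# logistic-regression model).
weights = [0.0, 0.0]
bias = 0.0
rate = 0.5
for _ in range(2000):
    for features, outcome in cases:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = pred - outcome
        weights = [w - rate * error * x for w, x in zip(weights, features)]
        bias -= rate * error

def predict(features):
    """Return the model's estimated probability that the claim is upheld."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

# A new, unseen dispute with strong documentary evidence:
print(predict([4.5, 0.85]))
```

The point of the sketch is that no outcome rule is ever written by a programmer: the decision boundary emerges entirely from the past cases fed to the tool, which is precisely why the quality and composition of that data matter so much in the discussion that follows.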
Predictive justice does not reflect the current administration of justice in France or in the UK, because the law forbids such processes. However, the near future might see the development of “assistive” algorithms for use in basic disputes. Legal tech supporters emphasise that AI applied to decision-making would increase the predictability and coherence of case law: with automation comes a sort of “harmonisation” of jurisprudence, which reduces legal uncertainty.
Another advantage of automated systems is the reduction of the courts’ workload. If judges spend less time on straightforward litigation, they can devote more effort to complex disputes and focus their competence on difficult cases. The same goes for arbitrators, even though arbitral proceedings rarely involve simple scenarios. In any event, automated systems could improve case management for both judges and arbitrators, as documents and parties’ arguments could be automatically reviewed and organised by AI-based programmes. Naturally, these advantages also apply to lawyers.
In debates on AI-driven decision-making, the argument of neutrality and impartiality is put forward by many supporters of legal tech: an algorithm is code, and if the code is correctly designed, then the outcome of a decision should be unbiased and objective. In reality, however, algorithms reflect political and/or societal choices.3 Thus, as will be shown in the next article (Part IV), the configuration stage of algorithm-building is the moment when crucial ethical decisions are made. It is therefore necessary to be aware of the numerous risks of automation in order to avoid disastrous outcomes.