The first guardians against the unwanted consequences of the algorithmic revolution are legislators. The importance of regulatory initiatives by central authorities must not be overlooked: states bear the primary responsibility to protect fundamental freedoms under the public international law obligations they have undertaken by signing and ratifying human rights treaties. As the issue of algorithmic discrimination illustrates, the large-scale application of automated decisions, i.e. justice delivered by machines, does not in itself amount to a fairer and more accessible judicial system.
The French legislator responded promptly to the issue of automated systems by enacting several statutory instruments, such as the “Law on the modernisation of justice in the 21st century” (“Loi n° 2016-1547 du 18 novembre 2016 de modernisation de la justice du XXIe siècle, JORF n°0269 du 19 novembre 2016”) and the more recent “Law for a 2018-2022 plan and for judicial reform” (“Loi n° 2019-222 du 23 mars 2019 de programmation 2018-2022 et de réforme pour la justice, JORF n°0071 du 24 mars 2019”). The latter act formally prohibits the automated processing of personal data by providers offering online dispute resolution services or online arbitration platforms. Similarly, Article 47 of the “Loi Informatique et Libertés” (“Loi n° 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés, JORF du 7 janvier 1978”) prohibits automated processing of personal data from serving as the sole basis of an individual decision.
Turning to the common law world, the General Data Protection Regulation (GDPR)1 remains applicable in the UK even post-Brexit, its provisions having been incorporated into domestic law through the Data Protection Act 2018. The GDPR imposes strict limits on the use of European citizens’ personal data: its Article 22(1) protects a person from being subjected to a decision based solely on automated processing where the decision “produces legal effects concerning him or her or similarly significantly affects him or her”.
As regards domestic measures adopted in the UK against algorithmic discrimination, the Data Ethics Framework refers to the Equality Act 2010 and provides principles and guidance for the use of algorithms in the public sector. When new technologies emerge, existing laws can be difficult to apply to them. Here, the principles of the English common law will have to be adapted to the algorithmic revolution through case law and judicial creativity. For this very reason, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government issue “guidance that clarifies the Equality Act responsibilities of organisations using algorithmic decision-making”.2
At a lower regulatory level, close attention should be paid to the data fed into algorithms. The process of supplying input data to systems that are then used to assist humans in deciding a case is particularly sensitive. This is primarily the duty of private actors, i.e. the companies and start-ups creating and/or selling legal tech tools, but also of the entities providing fully or semi-automated legal services. AI developers have their own vision of the world and their own values, which inevitably shape the design of their algorithms.
There must be constant awareness of this intrinsic bias, which often goes unnoticed: it must be detected and, where possible, eliminated. In the UK, the CDEI emphasised the importance of transparency and “responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them”. Similarly, in France, the CNIL has stated that the founding principles for developing algorithms and AI should be fairness and continued attention and vigilance. The quality of the data processed by the AI is paramount in avoiding errors: this entails regulating the collection of such data, with particular focus on the “human problem stemming from the interest [that] some stakeholders” might have in providing incorrect data.3
In legal and arbitral proceedings, in order to avoid halting judicial creativity altogether, the parties involved should be notified of these risks, and the tools used by law firms to analyse jurisprudence should be kept sufficiently up to date.