What are the benefits and tensions that automation can bring into the legal sphere, and how are the British and French legal systems managing these challenges while accounting for the rapid evolution of technology without encroaching on their fundamental legal principles?

The term “artificial intelligence” (AI) was coined in 1955 by computer scientist John McCarthy1 to denote intelligence exhibited by machines. It is nowadays understood as an umbrella term for computer programmes or systems that can imitate human cognitive functions. An algorithm is a finite and unambiguous sequence of steps or instructions for producing results (output) from initial data (input). Today, algorithms are typically coded into programmes that run as software or applications.
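The definition above can be made concrete with a minimal sketch. The example below is purely illustrative and hypothetical: it imagines a fixed 30-day limitation period for filing an appeal (an invented rule, not a statement of French or English law) and shows how an algorithm mechanically turns input into output.

```python
# A minimal illustration of an algorithm: a finite, unambiguous sequence
# of steps turning initial data (input) into a result (output).
# The 30-day limitation period below is invented, for illustration only.

from datetime import date, timedelta

def appeal_deadline(judgment_date: date, limitation_days: int = 30) -> date:
    """Step 1: take the date of the judgment (input).
    Step 2: add the limitation period.
    Step 3: return the resulting deadline (output)."""
    return judgment_date + timedelta(days=limitation_days)

print(appeal_deadline(date(2023, 1, 10)))  # 2023-02-09
```

However trivial, this captures the structure that legal tech tools build upon: given the same input, the same steps always yield the same output.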

“Legal tech” refers to the introduction of automated systems into the law: the use of software applications and AI-powered tools that touch upon the legal sphere to varying degrees. A so-called “robotised” justice in which judges and lawyers would become entirely superfluous, as envisioned in certain dreadful scenarios, is highly undesirable – and unlikely.

With the help of such instruments, cases could take less time to prepare, and an overall speedier justice could be achieved. This is also true for alternative dispute resolution (ADR) mechanisms, such as arbitration and mediation, which could easily be delivered through virtual platforms in order to avoid overly bureaucratic and costly trials and to reduce the caseload of tribunals. In a complex world where litigation often involves transnational issues and cross-border elements, the prospect of achieving more justice with less effort highlights a very optimistic side of legal tech. Furthermore, algorithm-based decision-making or “predictive justice” is supposedly impartial. It involves predicting, by means of IT tools, the outcome a judge would reach in a given case. Going even further, predictive justice could potentially operate without any human intervention. If a programme built on well-coded algorithms can analyse all the relevant documents and applicable laws in a given dispute, it is hard to argue that its solution could be faulty.

On the other hand, the technological sector is largely unregulated, especially in the fields of AI and algorithm-based legal tools. Private actors – mainly tech corporations and start-ups – are at the forefront of these innovations. Legislators find themselves stuck between the need to structure the legal tech framework through (some) formal rules and the fast and inevitable evolution that is constantly putting technology two steps ahead of society – and it is hard for States to keep up.

There is a critical factor that pushes towards the liberalisation of this industry: in essence, the virtual sphere – or “cyberspace” – is delocalised and universally accessible, so it seems redundant to apply laws to something that knows no borders. The principle of “technology neutrality”, widely popular among regulation-averse IT scholars, is invoked to argue that introducing laws might hinder the development of superior technology, and that such laws might rapidly become obsolete.2

Despite this reluctance, the French and British regulators have imposed some rules. These regulatory standards protect individuals’ data and govern the way such data are shared and used. Some soft-law instruments also seek to frame the construction of algorithms so that automation is based on authentic and unbiased data. This is a crucial issue, because programmes based on statistics can easily lead to arbitrary and unfair decisions, as will be developed in a subsequent part of this series of articles.

In 2017, the French Data Protection Authority (CNIL, “Commission Nationale de l’Informatique et des Libertés”) organised a public debate on the ethical issues raised by AI,3 after the so-called “Digital Republic Bill” (“Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique”, JORF n°0235 du 8 octobre 2016) came into force. This law entrusted the CNIL with the task of reviewing the ethical and societal questions raised by the development of digital technology. The resulting report highlights, among other issues, the worrying consequences of automated decisions, which could eventually undermine the traditional figures of legal authority.

If we cross the Channel to look at the Common Law, famously adaptive, we see that the English legal system is often praised for its flexibility and innovative developments. Changes brought about by new phenomena do not require legislative intervention, but rather an adaptation, by analogy, of principles already present in the law. “Time and again over the years the Common Law has accommodated technological and business innovations”, as a 2019 legal statement published by the UK Jurisdiction Taskforce of the LawTech Delivery Panel affirmed.

In this series of articles, I will first provide examples of automated tools applied in the legal sphere, illustrating the uses of AI in the law (Part II). Next, I shall dive deeper into the benefits of implementing automation in different sectors, especially in terms of improved access to justice (Part III). I will then tackle algorithmic discrimination and the dangers it poses for a democratic society based on the rule of law (Part IV). Lastly, I will observe how the French and British systems are embracing the algorithmic revolution and argue in favour of a safe and coherent integration of automation into our institutionalised systems (Part V).


1 John McCarthy, ‘Ascribing mental qualities to machines’ in Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence, Humanities Press 1979

2 Renato Mangano, ‘Blockchain Securities, Insolvency Law and the Sandbox Approach’, European Business Organization Law Review (2018) Vol. 19, page 723

3 CNIL, ‘How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence’, December 2017, page 5