Abstract by Caterina Barberi

Legal tech, and more generally automated systems, can pose real threats to the core values of justice. The risks of careless and unverified automation are similar to, and potentially even more severe than, those encountered in traditionally slow, costly, and unsatisfactory (sometimes even unfair?) justice systems. In addition, algorithmic discrimination is a real challenge that is already occurring and must urgently be addressed and remedied. The law is adaptable: the legal system should accommodate upcoming innovations, multiplying positive outcomes through the cautious use of new technology.

Introduction

For my master’s degree in Business Law at the Université Paris Nanterre, I have chosen to focus my research on the impact of artificial intelligence (AI) and algorithmic systems on the law. The advent of “legal tech”, understood as the introduction of automated systems and AI into the legal sphere, has not yet been studied extensively by lawyers, and the sources available in the literature and in legal doctrine can at times be limited.

I looked at the regulatory frameworks around legal tech in France and in the United Kingdom (UK), the potential infringements of fundamental liberties that could occur through the use of AI, and the role of private and public actors in the solutions that can be envisioned for an optimal use of these technologies. I also analysed the arguments for regulating this field more or less strictly in order to ensure compliance with major legal principles.

Why France and the UK?

I undertook an International Business Law master’s at King’s College London, so I chose English and Welsh law as a common law counterpart to French law. Furthermore, the UK is a leading country in Europe in terms of technological innovation and legal adaptability. France and the UK also play an important role in the global debate around ethics and technology: France has been a leading country in the European Union in terms of reflection on the ethics of AI. The French Data Protection Authority (CNIL, Commission Nationale de l’Informatique et des Libertés) and the Defender of Rights (Défenseur des Droits) have been very active in highlighting important issues around automated systems. The UK has also witnessed a growing debate on the regulation of AI, particularly around the theme of transparency. The Centre for Data Ethics and Innovation (CDEI), an independent advisory body, is dedicated to connecting the various stakeholders in order to make policy recommendations on data-driven technologies.

What is the current landscape of legal tech?

There are both advantages and drawbacks to automated systems, and we can pinpoint potential solutions and developments that could ensure a fair use of algorithms in the law. What tensions arise when we introduce AI into the law? How far can these automated programmes go in substituting for human decision-making? Outside the judicial system, automated decisions are already a reality, as is algorithmic discrimination: how are French and English policymakers addressing this problem? What can the law take from algorithms in order to achieve more efficiency, more impartiality, and more transparency? Can AI-driven applications increase the potential for greater and truly “universal” access to justice?

Some existing programmes can already simplify lawyers’ tasks, for instance by analysing enormous volumes of files and data to allow speedy document review, or by drafting standardised contractual clauses. Some algorithms can draft contracts based on simple information such as the names and addresses of the parties, the industry sector, and the scope of the agreement. The multifaceted nature of automation shows just how impactful it can be: legal professionals could save time, energy, and money.
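To make this concrete, the simplest of these contract-drafting tools amount to template filling: standardised clauses with placeholders that are populated from the parties’ basic details. The sketch below, in Python, is a minimal illustration under that assumption; the ContractInput fields, the clause text, and the function names are hypothetical examples, not any particular product’s interface, and real legal-tech tools add far richer templates, validation, and human review.

```python
# Minimal sketch of template-based clause drafting. All names here are
# hypothetical illustrations, not a real legal-tech product's API.
from dataclasses import dataclass
from string import Template


@dataclass
class ContractInput:
    """The simple information mentioned above: parties, sector, scope."""
    party_a: str
    party_b: str
    address_a: str
    address_b: str
    sector: str
    scope: str


# A hypothetical standardised clause with named placeholders.
PREAMBLE = Template(
    "This Agreement is entered into between $party_a, of $address_a, "
    "and $party_b, of $address_b, operating in the $sector sector, "
    "for the following purpose: $scope."
)


def draft_preamble(data: ContractInput) -> str:
    """Fill the standardised clause with the parties' basic information."""
    return PREAMBLE.substitute(vars(data))


if __name__ == "__main__":
    example = ContractInput(
        party_a="Alpha SARL",
        party_b="Beta Ltd",
        address_a="1 rue de l'Exemple, Paris",
        address_b="2 Example Street, London",
        sector="software",
        scope="the licensing of a document-review tool",
    )
    print(draft_preamble(example))
```

Even this toy version hints at why transparency matters: the quality of the output depends entirely on choices, such as the wording of the template, that are invisible to the end user.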

The technology sector is traditionally largely self-regulated: companies and tech start-ups create the “products” that are then used by the public. The underlying processes used to create computer programmes and software applications are rarely known or disclosed. A multi-stakeholder, multi-party approach might therefore be desirable when it comes to regulating legal tech. Transparency requirements should surround the design of algorithms and AI, as the programmers’ work needs to be supervised in some way.

Spreading this pool of knowledge among legal professionals is paramount: legal tech should not be ignored by lawyers, judges, arbitrators, or professors, and it should become part of the curriculum in law schools.