Artificial intelligence and private international law: an exotic cocktail pepped up with innovative research questions


This post is authored by Alexia Pato, Postdoctoral Research Fellow at McGill University (Montreal, Canada).

Introductory remarks: the omnipresence of artificial intelligence

The Council of Europe defines artificial intelligence (AI) as a set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Along the lines of that definition, many of us picture AI as humanoid robots (perhaps influenced by excellent movies, such as “Ex Machina”, starring Alicia Vikander). AI’s coverage is, however, much broader: it reaches nearly every aspect of our daily lives. For instance, AI technology is involved when we consult Siri (Apple’s assistant) or when Amazon generates a personalised list of items that we are likely to buy. AI also helps companies streamline and automate their recruitment processes and assists doctors in diagnosing pathologies.

Private international law is not immune to the rise of AI, and the recently launched EU initiatives on that topic create a new playground for legal scholars. This post highlights some of the many research questions that the interaction of AI with private international law generates.

The 2021 Proposal for a Regulation

On 21 April 2021, the EU Commission published a much-awaited Proposal for a Regulation laying down harmonised rules on artificial intelligence, following explicit requests from the Council and the Parliament (see, in particular, the AI-related resolutions of the Parliament of October 2020 on ethics, civil liability and intellectual property). If adopted, the proposed AI Regulation would create a horizontal regulatory framework for the development, placement on the market and use of AI-systems in the Union, depending on the risk (either unacceptable, high or low/minimal) that said systems generate for people’s health, safety and fundamental rights (for a general assessment of the Proposal, see the CEPS Think Tank discussion with Lucilla Sioli (DG CONNECT), available here, as well as the Ars Boni podcast, available here).

As Article 2 of the Proposal would give the Regulation an extraterritorial reach, private international law questions emerge. In particular, the Regulation is meant to apply to (1) providers placing AI-systems on the EU market or putting them into service there, irrespective of their place of establishment; (2) users located in the EU – a connecting factor which departs from the notion of domicile or residence; and (3) providers and users located in a third state, when the output of the AI-system is used – but not marketed – in the EU. Remarkably, Article 2 bypasses the traditional choice of law methodology and unilaterally delineates the Regulation’s territorial scope of application. This legislative technique has been used on other occasions: the most recent example is perhaps Article 3 of the General Data Protection Regulation (GDPR). Just as under the GDPR, interpretative issues regarding the exact extraterritorial scope of the Regulation are likely to arise. I have myself worked on that topic in relation to data protection and I believe that extraterritoriality is a promising field of research (See A. Pato, “Extraterritoriality and Data Protection: The Feasibility and Promise of Legal Harmonisation” in H. L. Buxbaum and T. Fleury Graff (eds), Extraterritoriality (Brill/Nijhoff, Centre for Studies and Research in International Law and International Relations Series), forthcoming).
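To make the structure of Article 2 more tangible, here is a minimal, purely illustrative Python sketch that encodes the three scope criteria listed above as a single boolean test. The class and field names are my own shorthand, not terminology from the Proposal, and the sketch deliberately ignores the interpretative subtleties the provision raises.

```python
from dataclasses import dataclass


@dataclass
class AISystemScenario:
    """Hypothetical description of an AI-system scenario.

    The field names are illustrative only; they are not terms
    defined in the proposed AI Regulation.
    """
    provider_places_on_eu_market: bool  # system placed on the EU market or put into service there
    user_located_in_eu: bool            # user of the AI-system is located in the EU
    actor_in_third_state: bool          # provider or user is located in a third state
    output_used_in_eu: bool             # the output produced by the system is used in the EU


def within_territorial_scope(s: AISystemScenario) -> bool:
    """Rough approximation of the scope criteria summarised above:
    (1) providers placing AI-systems on the EU market or putting them into
        service there, irrespective of their place of establishment;
    (2) users located in the EU;
    (3) providers and users located in a third state, where the output
        of the system is used in the EU.
    """
    return (
        s.provider_places_on_eu_market
        or s.user_located_in_eu
        or (s.actor_in_third_state and s.output_used_in_eu)
    )


# Example: a provider established outside the EU whose system's output is used in the EU
print(within_territorial_scope(AISystemScenario(False, False, True, True)))  # True
```

The point of the exercise is simply to show that Article 2 operates as a unilateral scope rule: a handful of connecting factors, tested disjunctively, replace the traditional choice of law analysis.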

The law applicable to civil liability

Does private international law sufficiently accommodate disputes involving AI-systems? The question has gained momentum as autonomous – i.e. driverless – vehicles start entering the EU market. One of the features that characterise cross-border traffic accidents involving self-driving cars is the plurality of potential defendants: among them are the driver (whose negligence plays an increasingly minor role as the task of driving is left almost entirely to technology), the manufacturer of the vehicle, the designer of the software, and so on. As a result, several private international law instruments applying different connecting factors might come into play, namely the Rome II Regulation, the 1971 Hague Convention on the Law Applicable to Traffic Accidents and the 1973 Hague Convention on the Law Applicable to Products Liability. Considering that national civil liability regimes vary (sometimes significantly) from one state to another, the outcome of a case might differ depending on the court seized (For a thorough private international law analysis, see T. Graziano, “Cross-Border Traffic Accidents in the EU – The Potential Impact of Driverless Cars” (Study for the JURI Committee, 2016), available here).

As one can see, AI exacerbates existing private international law problems: the multiplication of potentially liable persons makes litigation more complex and might increase the number of applicable laws. It is not certain, however, that accidents caused by AI-systems warrant the creation of new connecting factors, as they do not seem to depart significantly from traditional harmful situations involving human negligence. More research on that topic is nevertheless desirable before that conclusion can be considered final.

In October 2020, the EU Parliament released detailed recommendations for drawing up a Regulation on liability for the operation of AI-systems. According to the Parliament’s Draft Proposal, operators of high-risk AI-systems would be subject to a strict liability regime, while the liability of other AI-systems’ operators would be fault-based. If followed by the Commission and adopted, the text would partially harmonise national laws on civil liability in the EU. The traditional choice of law rules would not be completely bypassed, however, as they would still be needed to designate the law applicable to questions falling outside the Regulation’s scope (such as the law applicable to multiple liability where non-operators are involved). Additional research to grasp the interaction of the future Regulation with the choice of law rules of, e.g., the Rome II Regulation would be welcome (For an analysis of the Draft Proposal from a private international law perspective, see J. von Hein, “Liability for Artificial Intelligence in Private International Law” (online presentation, 25 June 2020), available here).
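As a purely illustrative aside, the two-track regime contemplated by the Parliament can be pictured as a simple classification rule. The Python snippet below is my own toy rendering of the strict/fault-based split described above, not a faithful account of the Draft Proposal.

```python
from enum import Enum


class RiskLevel(Enum):
    HIGH = "high-risk"
    OTHER = "other"


def liability_regime(risk: RiskLevel) -> str:
    """Toy rendering of the Parliament's Draft Proposal: operators of
    high-risk AI-systems would face strict liability, while operators of
    other AI-systems would be liable on a fault basis."""
    return "strict liability" if risk is RiskLevel.HIGH else "fault-based liability"


print(liability_regime(RiskLevel.HIGH))   # strict liability
print(liability_regime(RiskLevel.OTHER))  # fault-based liability
```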

Can artificial intelligence assist private international law?

The potential of AI to improve our lives in different sectors is undeniable. Private international law is no exception, and this post contends that technology could reduce the complexity of its application. First, AI may help identify situations of normative hyper-regulation, so that market players could assess, in advance, whether their conduct or a particular action must comply with several (sometimes contradictory) laws. Those situations are likely to multiply where laws apply extraterritorially (as is typically the case for data protection and competition law), creating fertile ground for legal uncertainty and high costs; AI might be an interesting way to tackle those issues. Second, algorithms could assist legal professionals in carrying out tricky private international law exercises, such as identifying which courts have jurisdiction in contractual matters in the absence of a choice of court, or applying the doctrine of renvoi. The creation of AI-driven tools will require an interdisciplinary joint venture between private international law and computer science experts (On the possibility of computerising private international law, see P. M. Dung and G. Sartor, “The modular logic of private international law” (2011) 19 Artificial Intelligence and Law, issue 2-3, p. 233-261, available here, and D. J. B. Svantesson, “Vision for the Future of Private International Law and the Internet – Can Artificial Intelligence Succeed Where Humans Have Failed?” (30 September 2019) Harvard International Law Journal, available here).
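To give a flavour of what such a tool could look like, here is a deliberately simplified Python sketch of the first example mentioned above: it roughly encodes the connecting factors of Article 7(1) of the Brussels I bis Regulation for contractual matters in the absence of a choice of court. The names are mine, and a real system would have to capture far more nuance (characterisation, protective regimes for consumers and employees, exclusive jurisdiction, and so on).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Contract:
    """Heavily simplified description of a cross-border contract.

    The field names are illustrative, not statutory terms.
    """
    kind: str                                   # "sale_of_goods", "provision_of_services" or other
    place_of_delivery: Optional[str] = None     # place (in a Member State) where goods were/should have been delivered
    place_of_services: Optional[str] = None     # place (in a Member State) where services were/should have been provided
    place_of_performance: Optional[str] = None  # fallback: place of performance of the obligation in question


def special_jurisdiction(contract: Contract) -> Optional[str]:
    """Toy rendering of Article 7(1) Brussels I bis (special jurisdiction in
    contractual matters, absent a choice of court): sale of goods -> place of
    delivery; provision of services -> place of provision; otherwise the
    place of performance of the obligation in question."""
    if contract.kind == "sale_of_goods":
        return contract.place_of_delivery
    if contract.kind == "provision_of_services":
        return contract.place_of_services
    return contract.place_of_performance


# Example: a contract for the sale of goods to be delivered in France
print(special_jurisdiction(Contract(kind="sale_of_goods", place_of_delivery="France")))  # France
```

Even this crude rule set shows both the promise and the difficulty of the exercise: the connecting factors themselves are easy to encode, but the characterisation of the facts feeding into them is where legal expertise – and, perhaps one day, AI – is really needed.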

Final remarks: let the research begin!

The EU legislator has paved the way for the creation of a new regulatory framework on artificial intelligence. Its interaction with the private international law field creates a new, fascinating playground for researchers from different disciplines. I hope that readers’ interest has been sparked and that stimulating research projects will pop up as a result of this post.
