This post has been drafted by Sergi Gálvez Duran.
With the GDPR regarded as the “gold standard” for data protection, the EU aims to become the global regulator of AI. However, the EU’s values and interests on AI differ from those of other countries, such as China or the United States. In this article, I discuss some jurisdictional implications of this strategy in the context of the current EU Proposal for an AI Act (“AIA”).
The extra-territorial scope of AI regulation
In order to enhance the EU’s role as a global AI regulator, the AIA has extra-territorial scope. According to its Article 2, the AIA would apply to a broad range of operators, including:
- Providers placing on the market or putting into service AI systems in the EU, irrespective of where those providers are located;
- Users of AI systems located within the EU; and
- Both providers and users of AI systems located outside the EU if the output produced by the AI system is used in the EU.
In other words, what really matters is whether an AI system or service –for example, an insurance credit score solution based on machine learning– has an impact on EU citizens, not where the company that provides it or uses it is located. Therefore, the AIA would apply to significant global technology companies seeking to operate AI within the EU, such as Amazon, Google or Salesforce.
What is the rationale behind a cross-border AI regulation?
From a general perspective, I believe the EU wishes to achieve a global standard in the world of artificial intelligence and, as some commentators have already pointed out, it is likely to expect a “Brussels effect” similar to that caused by the GDPR (Anu Bradford). In fact, the European Commission has expressly recognized that a cross-border AI regulation guarantees the Union’s digital sovereignty: “there is a growing risk that the ‘digital sovereignty’ of the Union and the Member States might be threatened since such AI-driven products and services from foreign companies might not completely comply with Union values and/or legislation or they might even pose security risks and make the European infrastructure more vulnerable”.
Some commentators have already convincingly argued for the many advantages of this regulatory approach (Luciano Floridi). My goal here is to briefly point out some implications of this regulatory approach. My focus is on the AIA’s enforcement regime, which I believe may bring some jurisdictional conflicts.
Access to data and potential jurisdictional conflicts
Following the EU product law legislative technique, the AIA gives wide powers to market surveillance authorities to monitor and obtain relevant information from providers of AI systems. In particular, the AIA stipulates that market surveillance authorities would have the authority “to demand full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (“API”) or other appropriate technical means and tools enabling remote access”. Also, where necessary to assess the conformity of the high-risk AI system with the requirements set out in the AIA, providers of AI are expected to grant them access to the source code of the AI system.
This obligation raises a set of questions. First, can market surveillance authorities order global access to providers’ data and algorithms under the AIA? How much does the location of AI systems matter? Under Article 2(1)(b), the AIA applies to “users of AI systems located within the EU”. The term “located” is ambiguous: it could be read to mean that what matters is the location of the AI system rather than the location of the user. Under that interpretation, determining the specific location of an AI system would become increasingly difficult, given AI’s inscrutability, complex supply chain structures and the growing use of cloud computing.
Likewise, under a literal reading of Article 2(1)(c), providers of AI systems located outside the EU are subject to the AIA if the output produced by the AI system is used in the EU. This means that neither placing an AI system on the market or putting it into service in the EU, nor even having a branch or other establishment in the EU, is necessary. At the same time, such cooperation, in the form of access to training datasets or to the code behind algorithms, may well be necessary to bring a claim of algorithmic discrimination or of damage caused to an EU citizen by a faulty AI system. Nonetheless, how can the cooperation of a non-EU company with no establishment in the EU be ensured? Will US companies be willing to cooperate with EU authorities and provide access to their data and algorithms? In this regard, it is worth considering the clash between different national laws. Not only do US and EU positions on trade secrets differ: the Fourth Amendment places restrictions on regulatory information collection, and the US Privacy Act limits an agency’s ability to disclose individuals’ records to other governmental agencies, except under enumerated exceptions. What is more, under the AIA the individual does not have any right to complain to a market surveillance authority or to sue a provider.
If the EU can impose access to US AI systems, can the United States also impose its controls on EU AI systems? At present, United States regulation and policy on AI differ from those of the EU. The US National AI Initiative Act, which became law on 1 January 2021, does not provide for such monitoring requirements, and the US is more likely to adopt an antitrust approach in the future.
Ultimately, some of the jurisdictional complexities pointed out here may find a response within the ongoing policy debate. The EU-US Trade and Technology Council (TTC) met for the first time in Pittsburgh on 29 September 2021, and the parties are expected to work on “compatible standards and regulations” for critical technologies, including AI.