Introduction
Following the emergence of artificial intelligence tools such as the popular ChatGPT, and after many months of debate, the European Parliament has approved the Artificial Intelligence Act, which establishes harmonized rules on artificial intelligence to guarantee safety and respect for fundamental rights while promoting innovation.
What is the New Law Seeking?
The objective of the Artificial Intelligence Act is to establish a uniform legal framework, particularly with regard to the development, marketing, and use of artificial intelligence in accordance with the values of the European Union (hereinafter, “EU”). In other words, it regulates the placing on the market and the putting into service of certain AI systems, as the EU seeks to become the world leader in the development of safe, trustworthy, and ethical AI.
Scope of the Regulation:
It should be noted that the Artificial Intelligence Act will apply to:
- providers who place on the market or put into service AI systems or who place on the market general-purpose AI models in the Union, regardless of whether those providers are established or located in the Union or in a third country;
- deployers of AI systems who are established or located in the Union;
- providers and deployers of AI systems who are established or located in a third country, where the output generated by the AI system is used in the Union;
- importers and distributors of AI systems;
- manufacturers of products who place on the market or put into service an AI system together with their product and under their own name or trademark;
- the authorized representatives of providers who are not established in the Union;
- the affected persons who are located in the Union.
Among other exclusions set out in Article 2 of the Act, it will not apply to AI systems developed or used exclusively for military purposes; nor to public authorities of third countries or international organizations when such authorities or organizations use AI systems within the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.
Definition of “Artificial Intelligence (AI) System”:
For the first time, the concept of “Artificial intelligence system (AI system)” is legally defined: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The articles also provide a series of definitions of the main participants in the AI value chain, such as providers, distributors, importers, and deployers of AI systems, among others.
Risk-based Approach:
Turning to the substance of the regulation, the Artificial Intelligence Act follows a risk-based approach, distinguishing between four categories of AI systems and practices:
- Prohibited AI practices: Those that violate fundamental rights, such as manipulating people or vulnerable groups in a harmful way.
- High-risk AI systems: Allowed in the European market with mandatory requirements and ex-ante conformity assessment.
- Limited-risk AI systems: Subject to lighter obligations, chiefly transparency requirements.
- Low or minimal risk AI systems: No special regulations.
The prohibited AI practices will be those that, among others, violate fundamental rights. For example, practices with significant potential to manipulate people through subliminal techniques that operate beyond their consciousness, or that exploit the vulnerabilities of specific groups, such as minors or people with disabilities, in order to materially distort their behavior in a way that is likely to cause physical or psychological harm to them or to other people.
By contrast, high-risk AI systems will be allowed in the European market as long as they meet certain mandatory requirements and undergo an ex ante conformity assessment. Notably, providers of high-risk AI systems are required to implement risk management systems for those systems.
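Purely as an illustration of how these categories fit together (the class names, the wording of the obligations, and the mapping below are simplified assumptions by the editor, not text taken from the Act), a compliance team might model the risk tiers along these lines:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers described in the AI Act (illustrative only)."""
    PROHIBITED = "prohibited"   # e.g. harmful manipulation of vulnerable groups
    HIGH = "high"               # permitted subject to mandatory requirements
    LIMITED = "limited"         # lighter obligations
    MINIMAL = "minimal"         # no special regulation

# Hypothetical mapping from tier to headline regulatory treatment.
TREATMENT = {
    RiskTier.PROHIBITED: "May not be placed on the market or put into service",
    RiskTier.HIGH: "Permitted with mandatory requirements and ex ante conformity assessment",
    RiskTier.LIMITED: "Permitted with lighter, mainly transparency, obligations",
    RiskTier.MINIMAL: "Permitted without additional obligations under the Act",
}

def treatment_for(tier: RiskTier) -> str:
    """Return the simplified treatment associated with a risk tier."""
    return TREATMENT[tier]

print(treatment_for(RiskTier.HIGH))
```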
Risk Management System:
The risk management system will consist of a continuous process carried out throughout the entire life cycle of a high-risk AI system, requiring periodic systematic updates in order to detect and mitigate the risks that AI systems pose to health, safety, and fundamental rights. It will comprise the following stages (see the illustrative sketch after the list):
- the identification and analysis of known and foreseeable risks linked to each high-risk AI system;
- the estimation and evaluation of the risks that could arise when the high-risk AI system in question is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
- the evaluation of other risks that could arise from the analysis of the data collected with the post-market monitoring system referred to in Article 61;
- the adoption of timely risk management measures in accordance with the provisions of the following sections.
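As a minimal sketch of how a provider's internal tooling might mirror these stages (the class and field names below are hypothetical and are not prescribed by the Act), the continuous cycle could be structured roughly as follows:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str      # e.g. "biased output affecting fundamental rights"
    severity: int         # illustrative 1-5 scale, not defined by the Act
    mitigation: str = ""  # measure adopted to address the risk

@dataclass
class RiskManagementCycle:
    """Hypothetical container for one iteration of the continuous process."""
    identified_risks: List[Risk] = field(default_factory=list)

    def identify(self, description: str, severity: int) -> Risk:
        # Stages 1-2: identify, estimate and evaluate known and foreseeable
        # risks, including those arising from reasonably foreseeable misuse.
        risk = Risk(description, severity)
        self.identified_risks.append(risk)
        return risk

    def incorporate_post_market_data(self, observations: List[str]) -> None:
        # Stage 3: evaluate further risks emerging from post-market monitoring data.
        for obs in observations:
            self.identify(f"post-market observation: {obs}", severity=3)

    def adopt_measures(self) -> None:
        # Stage 4: adopt timely risk management measures for each identified risk.
        for risk in self.identified_risks:
            if not risk.mitigation:
                risk.mitigation = "measure to be defined by the provider"

# The Act envisages a continuous, periodically updated process over the
# system's entire life cycle, so a cycle like this would be re-run regularly.
cycle = RiskManagementCycle()
cycle.identify("misclassification affecting safety", severity=4)
cycle.incorporate_post_market_data(["unexpected use in a new context"])
cycle.adopt_measures()
```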
Protection of Personal Data:
With regard to the protection of personal data, deployers of high-risk AI systems are required, where applicable, to carry out the data protection impact assessment imposed on them by Article 35 of the GDPR.
AI Office and Notifying Authorities:
The European AI Office will be the center of AI expertise across the European Union. It will play a key role in the implementation of the Act, especially for general-purpose AI, and will promote the development and use of trustworthy AI as well as international cooperation. The AI Office has been created within the European Commission as a center of AI expertise and forms the basis of a single European AI governance system.
Additionally, each Member State shall appoint or constitute at least one notifying authority that will be responsible for establishing and carrying out the procedures necessary for the assessment, designation, and notification of conformity assessment bodies, as well as for their supervision.
Spain has been the first country in the European Union to set up such a body, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), whose headquarters will be established in A Coruña.
Sanctions:
Finally, with regard to sanctions, the AI Act provides for administrative fines of varying amounts (up to EUR 35,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover for the previous financial year, whichever is higher), depending on the severity of the infringement. Member States shall lay down the rules on penalties, including administrative fines, and shall take all measures necessary to ensure that they are properly and effectively implemented.
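As a simple arithmetic illustration of the “whichever is higher” rule for the top tier of fines (the turnover figure below is invented for the example):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the highest fine tier: EUR 35 million or 7% of total
    worldwide annual turnover of the preceding financial year, whichever is
    higher (illustrative calculation only)."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 1 billion turnover: 7% = EUR 70 million,
# which exceeds the EUR 35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 70,000,000
```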
Next Steps on the AI Act:
Although the text is still subject to a final legal-linguistic check, its approval is expected before the end of the legislative term. In addition, the regulation must be formally adopted by the Council.
Conclusions:
The EU AI Act represents a significant step towards a clear regulatory framework aimed at ensuring safe and ethical AI. Once it enters into force, it will impact providers and deployers of AI systems in the European Union, ensuring the protection of personal data and fundamental rights and fostering trust in artificial intelligence.