How the EU AI Act Regulates Artificial Intelligence Based on Risk Levels
The AI Act is a legal framework that regulates the sale and use of artificial intelligence in the European Union (EU). It aims to ensure the proper functioning of the EU’s internal market by setting consistent standards for AI systems across Member States. It is the first comprehensive regulation to address the risks of artificial intelligence through a set of obligations and requirements, with the main objective of protecting the health, safety and fundamental rights of people in the EU and beyond. The AI Act is expected to have a significant impact on AI governance worldwide.
The AI Act is part of a broader digital rulebook that regulates various aspects of the digital economy, alongside the General Data Protection Regulation, the Digital Services Act and the Digital Markets Act. The AI Act itself does not address data protection, online platforms or content moderation; those areas remain governed by these other instruments. While the interaction between the AI Act and existing EU legislation poses its own challenges, building on existing laws allows the EU to avoid a “one law fixes all” approach to this emerging technology.
The European Union Artificial Intelligence Act (EU AIA) differs from other proposed AI regulations in its risk-based approach. Under this approach, the obligations imposed on providers of AI systems are proportionate to the level of risk posed by the system.
At the heart of the proposal is a risk categorisation system that regulates AI systems based on the level of risk they pose to the health, safety and fundamental rights of individuals. Four categories of risk have been identified: unacceptable, high, limited and minimal/none.
Oversight and regulation under the AI Act will focus primarily on the unacceptable and high risk categories, and this is the main topic of discussion below. The exact categorisation of different types of AI systems is yet to be determined. It is also possible that, in practice, an AI system may fall into more than one category.
Unacceptable Risk Systems
The use of AI systems that pose an unacceptable risk is strictly prohibited. According to the consensus reached across the three legislative texts (the Commission proposal, the Council general approach and the Parliament position), such systems include those that manipulate individuals through subliminal techniques or exploit their vulnerabilities based on socioeconomic status, age, or disability. AI systems designed for social scoring, which evaluate and treat people based on their social behavior, are also banned. The European Parliament has additionally expressed its intention to prohibit real-time remote biometric identification in public spaces, such as live facial recognition systems, as well as other biometric and law enforcement use cases.
High Risk Systems
Systems that pose a significant risk to health, safety, fundamental rights or the environment are classified as high-risk and will be subject to the most stringent obligations. The Act also introduces a new mechanism allowing providers of high-risk systems to notify the competent authorities if they believe their systems do not pose significant risks. Once notified, the competent authorities have three months to review the claim and object if they consider that the system does pose a significant risk.
High-risk AI systems fall into two categories. The first covers systems that are safety components of products, or are themselves products, governed by existing safety standards and conformity assessments, such as toys and medical devices. The second covers systems used for specific sensitive purposes, which generally fall within the following eight areas.
1. Biometrics
Biometric and biometric-based systems are technologies used to identify individuals based on their biological and behavioral characteristics. These systems can also be used to infer personal characteristics from biometric data, as in emotion recognition systems. This category does not, however, include authentication systems whose purpose is to confirm that a specific person is who they claim to be.
2. Critical infrastructure
For this category, an additional criterion applies to the high-risk classification: whether the system poses a significant risk of harm to the environment. Systems in this category include those used in the management and operation of road, rail and air traffic (unless they are regulated by harmonised or sector-specific legislation), as well as systems intended to be used as safety components in the management and operation of water, gas, heating, electricity, or critical digital infrastructure supply.
3. Education and vocational training
The proposed legislation aims to regulate the use of systems that impact or determine admission to educational and vocational training institutions. The legislation would also cover systems that assess students during the admission process, as well as systems that determine the appropriate level of education for an individual. Additionally, systems that monitor and identify prohibited student behavior would be classified as high-risk.
4. Employment, workers management and access to self-employment
Systems that are intended to be used for recruitment or selection are considered high risk. These include systems that place targeted job advertisements, screen or filter applications, and assess candidates in tests or interviews. Beyond recruitment, systems that make decisions about promotion, termination, and job allocation based on personal characteristics or behavior, as well as systems used to monitor and evaluate performance and behavior, are also considered high risk.
5. Access to essential services
This category covers systems used by or on behalf of public authorities to assess eligibility for benefits and services such as healthcare, housing, electricity, heating/cooling, and internet access. It also covers credit scoring systems (with the exception of those used to detect financial fraud), as well as systems that make or materially influence decisions on eligibility for health and life insurance. In addition, it includes systems that evaluate and classify emergency calls or that dispatch, or prioritize the dispatch of, first responders such as police, fire, and emergency medical services.
6. Law enforcement
Artificial intelligence systems are increasingly used by law enforcement agencies and EU institutions to support their investigative and crime-solving capabilities. High-risk uses include polygraphs and similar tools, systems used to assess the reliability of evidence during the investigation and prosecution of crimes, systems used to profile individuals in the course of investigations, and crime analytics that search large datasets for unknown patterns and relationships.
7. Migration, asylum and border control management
These are systems used by public authorities or EU institutions to assess the security, health, or irregular-migration risks posed by individuals entering a Member State, including polygraphs and similar tools. The category also covers systems used to verify the authenticity of travel documents and to assess applications for asylum, visas, and residence permits, including the associated complaints. Additionally, it includes systems used in border management for monitoring, surveillance, and data processing to detect, recognize, or identify individuals, as well as systems that predict or forecast trends in migration movements and border crossings.
8. Administration of justice and democratic processes
These are AI systems used by a judicial authority, or on its behalf, to assist in researching and interpreting facts and the law, and in applying the law to a concrete set of facts. The category also includes systems designed to influence the voting behavior of individuals or the outcome of an election or referendum, with an exception for systems whose outputs individuals are not directly exposed to, such as tools used for organizing, optimizing, and structuring political campaigns from an administrative and logistical standpoint.
Limited and Minimal Risk Systems
The limited risk category covers AI systems with limited potential for manipulation, which are subject to transparency obligations: users must be informed that they are interacting with an AI system (for example, a chatbot), and artificially created or manipulated content must be flagged as such. AI systems that do not fall into any of the other categories are considered to pose minimal or no risk.