The Global Landscape of AI Regulation: An Analysis of Three Key Players

Gurkan Coskun
5 min read · Jan 26, 2024


A lawyer focused on artificial intelligence
Regulating AI is a great way to keep lawyers and consultants busy for the next decade.

In this article, we look at three key initiatives in the global regulation of AI that are worth following in 2024: the EU AI Act, the United States AI Bill of Rights, and the United Kingdom AI Regulation Policy Paper.

European Union Artificial Intelligence Act

The European Union Artificial Intelligence Act (EU AI Act) is an important piece of legislation that will soon regulate artificial intelligence (AI) systems made available on the EU market. It will affect providers that place AI systems on the EU market, regardless of where those providers are established, as well as users of AI systems located in the EU.

The EU AI Act, currently under review by the European Parliament, is expected to be adopted in 2024 and to set a global standard for regulating the development and use of AI. First proposed by the European Commission in April 2021, it will be the first act anywhere in the world to comprehensively regulate the development and use of AI. In parallel with the legislative process, the European Commission has also asked the European standardization organizations (CEN/CENELEC) to develop technical standards for compliance with the Act.

The EU AI Act adopts a risk-based approach under which AI systems are categorized as minimal (or low) risk, limited risk, high risk, or unacceptable risk. High-risk systems are those that can have a significant impact on a user's life and are therefore subject to specific requirements.

High-risk AI systems fall into two categories. The first covers systems that are safety components or products governed by existing safety legislation and conformity assessments, such as toys and medical devices. The second covers systems used for specific sensitive purposes, which generally fall within the following eight areas (a schematic sketch follows the list):

  • Biometric and biometric-based systems
  • Systems for critical infrastructure
  • Education and vocational training systems
  • Systems influencing employment, worker management and access to self-employment
  • Systems affecting access and use of private and public services and benefits
  • Systems used in law enforcement
  • Systems used in migration, asylum and border control management
  • Systems used in the administration of justice and democratic processes
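
To make this risk-based structure more concrete, the sketch below models the Act's four tiers and a toy triage check in Python. It is purely illustrative: the tier names mirror the Act's categories, but the area labels and the classify helper are assumptions of this sketch, not official tooling or the Act's actual legal tests.

from enum import Enum

class RiskTier(Enum):
    # The four tiers of the EU AI Act's risk-based approach.
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # subject to specific requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical labels for the eight sensitive areas listed above;
# a real assessment would follow the Act's annexes, not this sketch.
HIGH_RISK_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_services_and_benefits",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def classify(area: str, is_regulated_safety_component: bool = False) -> RiskTier:
    """Toy triage: treat a system as high risk if it is a safety
    component of a regulated product (e.g. a medical device) or
    operates in one of the sensitive areas."""
    if is_regulated_safety_component or area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # the limited/minimal split is omitted here

print(classify("employment_and_worker_management"))  # RiskTier.HIGH
print(classify("spam_filtering"))                    # RiskTier.MINIMAL

A real classification also depends on the prohibited practices of the unacceptable tier and the transparency duties of the limited tier, which this sketch deliberately leaves out.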

It is crucial to consider the combined impact of three important pieces of EU legislation: the Digital Markets Act (DMA), the Digital Services Act (DSA), and the EU AI Act, all of which cover AI technologies. These acts aim to prevent companies from misusing AI or exploiting unregulated emerging technology to cause harm, and they seek to standardize a risk-management approach to AI governance. Although each act has different objectives and enforcement mechanisms, transparency is a central theme in all of them. Companies and organizations that meet certain criteria must undergo independent audits, conformity assessments, and/or third-party audits to demonstrate compliance and avoid substantial fines.

In addition, the EU AI Act, together with other EU legislation on AI and algorithms such as the proposed AI Liability Directive, is intended to ensure accountability and make it easier to establish liability when harm occurs.

All organizations operating in or selling to Europe should be aware of the far-reaching implications of these acts and be prepared to comply with their provisions.

United Kingdom AI Regulation Policy Paper

The UK government has not yet introduced specific legislation to regulate the use of AI, although it has signalled support for regulating AI systems through various policy documents, frameworks, and strategies. As the UK is no longer an EU member state, the EU AI Act will not apply there directly, and it remains to be seen how closely the UK's approach will align with it.

On July 18, 2022, the Department for Business, Energy and Industrial Strategy, the Office for Artificial Intelligence, and the Department for Digital, Culture, Media & Sport (DCMS) released a policy paper titled “Establishing a pro-innovation approach to regulating AI.” Under this framework, the regulation of AI in the UK will be context-specific, based on the use and impact of the technology, with responsibility for developing appropriate approaches delegated to the relevant regulatory body or bodies. The Government will define AI broadly to guide regulators, adopting key principles relating to transparency, fairness, justice, safety, security, privacy, accountability, and routes to redress or contestability.

The framework is based on four principles:

1. Context specificity — AI should be regulated according to its use and impact.

2. Innovation and risk-based approach — There will be a focus on high-risk concerns rather than hypothetical or low risks to encourage innovation and limit barriers.

3. Consistency — A set of cross-sector principles tailored to the characteristics of AI will be established, and regulators will interpret and apply them within their own sectors and domains.

4. Proportionate and adaptable — Cross-sectoral principles will initially be set on a non-legislative basis to ensure a dynamic approach to regulation.

United States AI Bill of Rights

In October 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights to provide guidance for the design, development, and deployment of AI systems. Although non-binding, the blueprint can be voluntarily adopted by designers, developers, and deployers to protect Americans from potential harms caused by the use of AI.

The blueprint is based on five principles that aim to address concerns about AI:

1. Safe and effective systems — protecting users from unsafe or ineffective AI systems.

2. Protection against algorithmic discrimination — preventing AI algorithms from unfairly discriminating against certain groups.

3. Data privacy — providing built-in protections against abusive data practices and giving users agency over how their data is used.

4. Notice and explanation — notifying users when an automated system is being used and explaining the outcomes it produces.

5. Human alternatives, consideration, and fallback — ensuring that users can opt out of automated systems and access human support to resolve problems they encounter.

Furthermore, the National Institute of Standards and Technology (NIST), a US government agency, plays an essential role in developing AI standards and conducting research related to the use and deployment of AI.

Written by Gurkan Coskun

Lawyer, PhD. Writing about Digital Law and AI.
