The AI Act

The EU’s AI Act is the world’s first comprehensive legal framework dedicated to artificial intelligence, aiming to ensure safe, ethical, and transparent AI by classifying systems by risk and setting strong compliance standards.

Overview of the AI Act

The Artificial Intelligence Act (AI Act) is a major regulatory framework introduced by the European Union (EU) to oversee the development and use of artificial intelligence technologies. Formally approved in May 2024, it is the world’s first comprehensive legal framework dedicated to AI. The Act’s main goal is to create a safe and trustworthy environment for AI by addressing ethical, social, and technical challenges. It aligns AI development with European values, focusing on transparency, fairness, and accountability.

The AI Act stands out for its wide scope, regulating AI systems based on their risk level and application. It applies to AI systems developed within the EU as well as to systems placed on the EU market, used in the EU, or whose outputs affect people in the EU. This approach ensures that any AI impacting EU citizens or businesses meets the same high standards, regardless of where it is developed.

Scope and Coverage of the AI Act

The AI Act covers various stakeholders in the AI ecosystem, including:

  • Providers: Organizations that develop and supply AI systems under their brand. Providers must ensure their AI systems meet the Act’s requirements before entering the EU market.
  • Deployers: Businesses or individuals using AI systems. They must align their use of AI with the regulatory framework, especially for high-risk AI applications.
  • Importers and Distributors: Entities that bring AI systems into the EU or distribute them within the region must follow specific rules to ensure compliance.
  • Manufacturers: Companies that integrate AI into their products are also subject to the Act if their products are sold in the EU.

The Act defines AI systems broadly as machine-based systems that operate with varying levels of autonomy to produce outputs such as predictions, recommendations, or decisions. Some categories of AI, such as systems used solely for scientific research and development (R&D) or tested in controlled environments before deployment, are exempt from the Act.

Key Features of the AI Act

Risk-Based Approach

The AI Act uses a risk-based classification system to regulate AI systems, categorizing them into four levels of risk (illustrated in the short sketch after this list):

  1. Unacceptable Risk: AI systems that are harmful or go against EU values are banned. Examples include social scoring systems and AI that manipulates human behavior to cause harm.
  2. High Risk: Systems in this category include biometric identification tools, medical devices, and critical infrastructure management. They must meet strict requirements, such as undergoing testing, maintaining documentation, and ensuring human oversight.
  3. Limited Risk: These systems, such as chatbots and AI content generators, must meet transparency requirements, like informing users they are interacting with AI.
  4. Minimal or No Risk: Most AI applications, such as recommendation engines for e-commerce, fall into this category and face little or no regulation.
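
To make the four tiers concrete, here is a minimal, purely illustrative Python sketch of how an organization might tag its AI systems for internal compliance tracking. The tier names mirror the Act; the obligations mapping and helper function are hypothetical and not part of the regulation.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers described by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"   # banned outright
        HIGH = "high"                   # strict requirements apply
        LIMITED = "limited"             # transparency obligations
        MINIMAL = "minimal"             # little or no regulation

    # Hypothetical internal mapping from tier to the obligations named above.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
        RiskTier.HIGH: ["conformity assessment", "technical documentation", "human oversight"],
        RiskTier.LIMITED: ["inform users they are interacting with AI"],
        RiskTier.MINIMAL: ["no specific obligations"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the compliance obligations associated with a risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))
    # ['conformity assessment', 'technical documentation', 'human oversight']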

Governance and Compliance

The AI Act establishes a governance system to ensure compliance. This includes:

  • The European Artificial Intelligence Board: Coordinates the application of the AI Act across EU member states, ensuring consistency and providing guidance.
  • National Supervisory Authorities: Each EU member state must appoint authorities to monitor and enforce compliance within its jurisdiction.
  • Extraterritorial Application: The Act applies to any AI system that impacts the EU, even if it was developed outside the region.

Provisions for Generative AI

The Act includes specific rules for generative AI systems like ChatGPT. Developers of these systems must meet transparency and safety requirements, such as disclosing training methods, datasets used, and potential biases.

Categorizing AI: The Risk-Based System

Risk-Based Classification System

The European Union’s Artificial Intelligence Act (AI Act) uses a risk-based classification system to regulate AI technologies. This system matches the level of regulatory oversight with the potential risks posed by AI applications. By dividing AI systems into four specific risk levels—Unacceptable, High, Limited, and Minimal or No Risk—the EU aims to balance technological progress with public safety and ethical standards. Each category includes specific regulatory requirements and responsibilities for developers, deployers, and other stakeholders involved in AI.

Risk Levels and Their Implications

Unacceptable Risk

AI systems under the “Unacceptable Risk” category are seen as direct threats to fundamental rights, safety, or EU values. These systems are banned under the AI Act because of their harmful nature. Examples include:

  • Subliminal Manipulation: Systems that covertly influence human behavior to cause harm, such as manipulating voters’ decisions without their knowledge.
  • Exploitation of Vulnerabilities: AI targeting individuals for harm based on vulnerabilities like age, disability, or economic status. For instance, interactive toys that encourage unsafe behavior in children.
  • Social Scoring: Systems ranking individuals based on behavior or characteristics, such as scoring creditworthiness based on social media activity, leading to unfair outcomes.
  • Real-Time Biometric Identification in Public Spaces: Facial recognition systems used for surveillance, except in specific cases like law enforcement with judicial approval.
  • Emotion Recognition and Biometric Categorization: AI that infers sensitive details such as ethnicity or political affiliations, especially in sensitive environments like workplaces or schools.

These prohibitions reflect the EU’s commitment to ethical AI that respects human rights.

High Risk

High-risk AI systems significantly affect health, safety, or fundamental rights. These systems are not banned but must meet strict requirements to ensure they are transparent and accountable. Examples include:

  • Critical Infrastructure: AI managing essential systems like transportation, where failures could risk lives.
  • Education and Employment: Systems affecting access to education or jobs, such as algorithms that grade exams or filter job applications.
  • Healthcare: AI integrated into medical equipment or decision-making, such as in robot-assisted surgery.
  • Public Services: Tools determining eligibility for loans or public benefits.
  • Law Enforcement and Border Control: AI used in criminal investigations or visa processing.

Developers and deployers of high-risk AI must follow strict standards, such as maintaining thorough documentation, ensuring human oversight, and conducting conformity assessments to reduce risks.
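
As a rough illustration of what tracking those obligations could look like internally, the sketch below models a compliance record for a single high-risk system. The field and function names are hypothetical labels for the requirements listed above, not terms defined by the Act.

    from dataclasses import dataclass, field

    @dataclass
    class HighRiskComplianceRecord:
        """Hypothetical internal record for one high-risk AI system."""
        system_name: str
        technical_documentation: bool = False    # documentation maintained and up to date
        human_oversight: bool = False            # a human can intervene or override
        conformity_assessment_passed: bool = False
        open_issues: list[str] = field(default_factory=list)

        def ready_for_eu_market(self) -> bool:
            """All tracked obligations must be satisfied before deployment."""
            return (self.technical_documentation
                    and self.human_oversight
                    and self.conformity_assessment_passed
                    and not self.open_issues)

    record = HighRiskComplianceRecord(system_name="exam-grading-model")
    record.technical_documentation = True
    print(record.ready_for_eu_market())  # False: oversight and assessment still pending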

Limited Risk

Limited-risk AI systems have moderate potential risks. These systems must meet transparency requirements to ensure users are aware of their interactions with AI. Examples include:

  • Chatbots: Systems that must inform users they are interacting with AI rather than a human.
  • AI-Generated Content: Systems producing synthetic text, images, audio, or video, which must indicate that the content is artificially generated.

Although these systems involve lower risks, the AI Act enforces basic ethical standards to build trust and accountability.
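
In practice, meeting the transparency obligation for a chatbot can be as simple as prefacing the conversation with a clear disclosure. The snippet below is a minimal sketch of that idea; the wording and function names are illustrative, not prescribed by the Act.

    AI_DISCLOSURE = (
        "Hi! You are chatting with an automated AI assistant, not a human. "
        "You can ask to be transferred to a human agent at any time."
    )

    def start_conversation(user_name: str) -> list[str]:
        """Open a chat session with the AI disclosure shown first."""
        return [AI_DISCLOSURE, f"How can I help you today, {user_name}?"]

    for message in start_conversation("Alex"):
        print(message)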

Minimal or No Risk

Most AI applications, including general-purpose tools like language translation and search engines, fall under this category. These systems face minimal or no regulatory constraints, allowing innovation to progress freely. Examples include productivity tools powered by AI and virtual assistants for personal use.

Provisions for Generative AI Systems

The AI Act includes specific measures for generative AI systems, such as ChatGPT and DALL-E, which produce text, images, or code. These systems are classified based on their intended use and potential impact. Key provisions include the following (with a brief sketch after the list):

  • Transparency Requirements: Developers must disclose the datasets used for training and indicate when content is AI-generated.
  • Safety and Ethical Guidelines: Generative AI must reduce biases, prevent misinformation, and align with ethical standards.
  • Accountability Measures: Companies must provide detailed documentation about the model’s architecture, intended use, and limitations.
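
One way to picture the "indicate when content is AI-generated" provision is to attach a disclosure record to every generated output. The sketch below does exactly that; the metadata fields are hypothetical examples, not a format mandated by the Act.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class GeneratedContent:
        """A piece of model output together with its AI-generation disclosure."""
        text: str
        model_name: str            # the generative model that produced the text
        ai_generated: bool = True  # disclosed to end users alongside the content
        created_at: str = ""

    def generate_with_disclosure(prompt: str, model_name: str) -> GeneratedContent:
        """Wrap a (placeholder) generation step with an AI-generated label."""
        text = f"[model output for prompt: {prompt!r}]"  # stand-in for a real model call
        return GeneratedContent(
            text=text,
            model_name=model_name,
            created_at=datetime.now(timezone.utc).isoformat(),
        )

    result = generate_with_disclosure("Summarize the AI Act", model_name="example-model")
    print(result.ai_generated, result.model_name)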

Ensuring Compliance: Governance Framework

The Role of Governance in the EU’s AI Act

The European Union’s Artificial Intelligence Act (AI Act) introduces a governance framework to ensure its rules are followed. This framework promotes transparency, accountability, and uniform application across Member States. It also protects fundamental rights while encouraging the development of reliable AI technologies. Central to this framework are the European Artificial Intelligence Board (EAIB) and the European AI Office, which work with national authorities to enforce and monitor the AI Act.

European Artificial Intelligence Board (EAIB)

The European Artificial Intelligence Board (EAIB) is the main body governing the AI Act. It acts as an advisory and coordinating authority to ensure the law is applied consistently across the EU.

Core Responsibilities

  • Coordination and Oversight:
    The EAIB works to align the efforts of national authorities responsible for enforcing the AI Act. Its goal is to ensure Member States regulate AI in a consistent way, reducing differences in interpretation and enforcement.
  • Guidelines and Recommendations:
    The Board provides advice on applying the AI Act. It develops guidelines, drafts delegated acts, and creates other regulatory tools. These resources clarify the Act’s rules, making them easier to follow and enforce.
  • Policy Development:
    The EAIB contributes to shaping Europe’s AI policies. It provides guidance on innovation strategies, international collaborations, and other initiatives to keep the EU competitive in AI technology.

Governance Structure

The EAIB is made up of representatives from each EU Member State and is supported by the European AI Office, which functions as its Secretariat. Observers, such as the European Data Protection Supervisor and representatives from EEA-EFTA countries, also attend Board meetings. Sub-groups within the EAIB focus on specific areas of policy, encouraging collaboration and sharing best practices.

European AI Office

The European AI Office is the EU’s main hub for AI governance. It works closely with the EAIB and Member States to support the implementation of the AI Act. Its role is to ensure AI technologies are developed safely and responsibly.

Key Functions

  • Expertise and Support:
    The AI Office serves as the EU’s center of knowledge on AI. It provides technical and regulatory assistance to Member States. It also assesses general-purpose AI models to confirm they meet safety and ethical standards.
  • International Coordination:
    The AI Office promotes global cooperation on AI governance by advocating for the EU’s regulatory approach as an international standard. It also works with scientists, industry representatives, and civil society to shape its policies.
  • Enforcement:
    The Office has the authority to evaluate AI systems, request information, and enforce penalties on providers of general-purpose AI models that do not comply with the AI Act.

Extraterritorial Application of the AI Act

The AI Act applies to entities within the EU and those outside the Union that provide AI systems to the EU market or use AI systems affecting EU citizens. This extraterritorial provision ensures the Act’s high standards are followed globally, setting a model for international AI governance.
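
A simple way to reason about this extraterritorial reach is as a scope check: a system falls under the Act if it is placed on the EU market, used in the EU, or its outputs affect people in the EU, wherever the provider is established. The function below is an illustrative simplification of that logic, not legal advice or the Act’s actual legal test.

    def in_scope_of_ai_act(provider_in_eu: bool,
                           placed_on_eu_market: bool,
                           outputs_affect_eu: bool) -> bool:
        """Illustrative, simplified scope check for the AI Act."""
        return provider_in_eu or placed_on_eu_market or outputs_affect_eu

    # A provider established outside the EU whose system's outputs affect EU users
    # is still covered by the Act.
    print(in_scope_of_ai_act(provider_in_eu=False,
                             placed_on_eu_market=False,
                             outputs_affect_eu=True))  # True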

Comparisons with GDPR

The AI Act’s governance framework shares similarities with the General Data Protection Regulation (GDPR), particularly in its structure and goals.

  • Centralized Oversight:
    Like the European Data Protection Board established under GDPR, the EAIB provides centralized oversight for AI governance. This ensures consistency across Member States.
  • Extraterritorial Reach:
    Both the AI Act and GDPR extend their rules beyond the EU’s borders, showing the Union’s dedication to setting global standards in digital regulation.
  • Focus on Fundamental Rights:
    Both regulations prioritize protecting fundamental rights. They emphasize transparency, accountability, and ethical considerations in their respective areas.

However, the AI Act addresses challenges specific to AI, such as categorizing risks and regulating generative AI systems.

AI Regulation on a Global Scale

The AI Act as a Global Model

The European Union’s Artificial Intelligence Act (AI Act) sets a global example for how to regulate AI effectively. As the first detailed legal framework for AI, it offers a guide for other regions aiming to handle the ethical, legal, and societal challenges that AI technologies bring. The Act introduces a risk-based classification system, promotes transparency, and focuses on protecting fundamental rights, creating a strong and modern regulatory approach.

The AI Act addresses both the opportunities and risks of AI. For example, it prohibits practices like social scoring and some uses of biometric identification, which sets a clear ethical standard for AI usage. This framework has already influenced discussions in countries like the United States, Canada, and Japan, where policymakers are considering similar strategies for managing AI technologies.

In addition, the Act includes extraterritorial rules. This means companies worldwide, regardless of where they are based, must follow the Act’s requirements if their AI systems affect the EU market or its citizens. This ensures the Act’s influence goes beyond Europe, encouraging international businesses to align with its standards.

International Cooperation in AI Governance

The EU understands that managing AI challenges requires global collaboration. It works with international organizations and other countries to promote consistent AI regulations and ethical standards. Programs like the Organisation for Economic Co-operation and Development’s (OECD) AI Principles and the G7’s AI initiatives already reflect elements of the EU’s framework.

Events like the recent Bletchley Park Summit emphasize the need for global conversations about AI governance. These gatherings bring together policymakers, industry experts, and civil society to discuss shared values and strategies for managing AI technologies. The EU’s active role in these discussions shows its dedication to shaping global AI regulation.

Through international cooperation, the EU seeks to avoid fragmented AI policies across different nations. Instead, it supports a unified approach to ensure AI technologies are safe, ethical, and beneficial for everyone.

The Future of AI in Europe

The AI Act is designed not just to regulate AI but also to boost innovation and competitiveness in the EU’s AI sector. It is supported by initiatives like the AI Innovation Package and the AI Pact, which encourage the development of human-centric AI while promoting investment and research.

Looking ahead, the EU envisions AI technologies becoming a seamless part of society. It aims to use AI to improve productivity and solve complex problems without compromising ethical standards. The Act’s focus on transparency and accountability helps ensure that AI systems remain trustworthy, which builds public confidence in these technologies.

As the global competition for AI leadership continues, the EU’s approach—balancing strong regulations with support for innovation—positions it as a key player in ethical AI development. This strategy benefits European citizens and serves as a model for other countries, encouraging a worldwide shift toward responsible AI governance.

By promoting the AI Act as a global model and encouraging international cooperation, the EU shows its dedication to creating ethical and trustworthy AI systems. This framework addresses current AI challenges and sets the stage for global AI development that is both safe and sustainable.

Frequently asked questions

What is the EU AI Act?

The EU AI Act is a comprehensive regulatory framework introduced by the European Union to govern the development and use of artificial intelligence technologies. It is the first global legal framework dedicated to AI, focusing on transparency, safety, and ethical standards.

How does the AI Act classify AI systems?

The AI Act employs a risk-based classification system, dividing AI systems into four categories: Unacceptable Risk (banned uses), High Risk (strict requirements), Limited Risk (transparency obligations), and Minimal or No Risk (few or no restrictions).

Who must comply with the AI Act?

All stakeholders in the AI ecosystem—including providers, deployers, importers, distributors, and manufacturers—must comply if their AI systems are used in the EU or impact EU citizens, regardless of where the system was developed.

What does the AI Act require for generative AI?

Developers of generative AI, such as ChatGPT, must meet transparency and safety requirements, including disclosing training methods, datasets, and potential biases, as well as indicating when content is AI-generated.

Does the AI Act apply outside the EU?

Yes, the AI Act has extraterritorial reach. It applies to any AI system that impacts the EU market or its citizens, even if the system is developed or deployed outside the EU.
