Decoding the AI Act

The AI Act (or Artificial Intelligence Regulation) is the world’s first comprehensive regulatory framework for artificial intelligence (AI). Adopted by the European Union, the legislation takes a risk-based approach that aims to balance technological innovation with the protection of fundamental rights. Its objective is to ensure a high level of protection in sensitive areas such as health and individual freedoms, while leaving room for the development of AI technologies.

A risk-based approach

Rather than regulating AI technologies directly, the AI Act focuses on the risks they generate. The higher the risk level, the stricter the regulatory obligations. The regulation defines four risk categories:

  1. Unacceptable risk (banned AI)
  2. High risk
  3. Limited risk
  4. Minimal risk
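
To make the tiered logic concrete, the following Python sketch maps a few example use cases to the four categories. It is purely illustrative: the classifications shown are simplified readings of the regulation, not legal determinations.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned"
        HIGH = "strict obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no specific obligations"

    # Simplified, illustrative classifications; a real assessment depends on
    # the precise use case and the annexes of the regulation.
    EXAMPLE_USE_CASES = {
        "social scoring of individuals": RiskTier.UNACCEPTABLE,
        "CV screening for hiring decisions": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filtering": RiskTier.MINIMAL,
    }

    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.name} -> {tier.value}")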

AI with unacceptable risk: total ban

Some AI applications are deemed incompatible with the values and fundamental rights of the European Union and are therefore completely banned. Examples of prohibited AI include:

  • Social scoring, which evaluates and ranks individuals based on their behavior, similar to what exists in China.
  • Employee emotion recognition systems used in companies to measure productivity.
  • Certain large-scale biometric surveillance systems or AI used for behavioral manipulation.

These bans have been in effect since February 2, 2025.

AI with high risk: strict regulations

AI systems classified as high-risk are allowed but must comply with strict transparency and security requirements. This includes two main categories:

  • AI embedded in regulated products: Systems used in vehicles, medical devices, toys, etc.
  • AI deployed in sensitive fields: Education, justice, employment, law enforcement, critical infrastructures, and public services.

Companies developing or using these technologies must:

  • Conduct risk assessments and ensure transparency.
  • Implement control and audit systems.
  • Maintain detailed documentation to prove compliance.

Both AI providers (developers and vendors) and deployers (businesses using AI) must adhere to these requirements.

AI with limited risk: mandatory transparency

Some AI systems pose lower risks but are subject to transparency obligations, such as:

  • Chatbots: Users must be informed when interacting with AI instead of a human.
  • Generative AI systems: AI used to create images, videos, or text must clearly indicate that the content was generated by artificial intelligence (a brief sketch of both disclosures follows this list).
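
As a purely illustrative example, the sketch below shows how these two disclosures might be implemented in Python. The function names and the wording of the notices are assumptions, since the regulation leaves the exact form of the information to providers.

    def generate_answer(prompt: str) -> str:
        # Stand-in for a real model call.
        return f"(model output for: {prompt})"

    def chatbot_reply(user_message: str) -> str:
        """Answer a user message, prefixed with an AI-interaction notice."""
        notice = "You are interacting with an AI assistant, not a human."
        return f"{notice}\n{generate_answer(user_message)}"

    def label_generated_content(content: str) -> dict:
        """Wrap generated content with a machine-readable 'AI-generated' marker."""
        return {"content": content, "metadata": {"ai_generated": True}}

    print(chatbot_reply("What are my data protection rights?"))
    print(label_generated_content("an AI-generated product description"))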

AI with minimal risk: free development

For AI considered to have no significant risk, no specific obligations apply. These systems can be freely developed and used.

Generative AI models and general-purpose AI

The AI Act includes specific regulations for general-purpose AI models, particularly generative AI such as ChatGPT, Mistral AI, or Midjourney.

Two levels of requirements are defined:

  1. Standard obligations for all general-purpose models: technical documentation, transparency toward downstream providers, and a summary of the data used for training.
  2. Enhanced obligations for high-performance models whose cumulative training compute exceeds a high threshold, set at 10^25 floating-point operations. These models, classified as systemic-risk models, must comply with stricter rules because of their potential large-scale influence on information and behavior; a rough illustration of how this threshold relates to model scale follows this list.
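
As a rough illustration of the compute criterion, the sketch below estimates a model's cumulative training compute with the common 6 × parameters × tokens approximation. That heuristic and the figures used are assumptions for illustration only; they are not part of the AI Act.

    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption threshold set by the AI Act

    def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
        """Approximate cumulative training compute with the 6 * N * D heuristic."""
        return 6 * n_parameters * n_tokens

    # Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
    flop = estimated_training_flop(70e9, 15e12)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    if flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        print("Presumed to present systemic risk")
    else:
        print("Below the systemic-risk presumption threshold")

In this hypothetical case the estimate (about 6.3 × 10^24 FLOP) stays below the threshold, but a modest increase in parameters or training data would push a model past it.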

A broad scope of application

The AI Act applies to all players in the AI value chain, including:

  • Providers (companies developing AI systems).
  • Deployers (organizations implementing these systems).
  • Importers and distributors, even outside the EU.

As long as an AI system is used or marketed within the EU, it must comply with these rules, even if developed in a third country.

Additionally, the AI Act is cross-sectoral: unlike industry-specific regulations (such as those governing banking), it applies to every sector without distinction.