aicomply.
EU AI Act Learning Center

Master the EU AI Act

The EU AI Act is the world's first comprehensive AI regulation. Learn everything you need to know about compliance requirements, risk classifications, and implementation strategies through our structured learning modules.

  • 5 Learning Modules
  • 48 Chapters
  • ~10h Total Content
  • 100% Open Source

Overview

What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is a comprehensive regulatory framework that establishes harmonized rules for the development, deployment, and use of AI systems within the European Union.

Risk-Based Approach

The AI Act categorizes AI systems into four risk levels: prohibited, high-risk, limited risk, and minimal risk. Requirements are proportional to the level of risk.

Who Must Comply?

Providers, deployers, importers, and distributors of AI systems in the EU market must comply, regardless of where they are established. This includes non-EU companies.

Enforcement

National authorities will enforce the regulation. Penalties for the most serious violations can reach up to €35 million or 7% of global annual turnover, whichever is higher.
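The penalty cap described above is the greater of a fixed amount and a turnover percentage. A minimal sketch of that arithmetic (the function name is illustrative, not part of any official tooling):

```python
def penalty_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious violations (prohibited practices):
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the higher figure applies.
```

For smaller companies the fixed €35 million floor dominates; the percentage only matters once 7% of turnover exceeds it.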

Risk Categories

AI Risk Classification

The EU AI Act uses a risk-based approach. Different requirements apply based on the risk level of your AI system.

Prohibited

AI practices that are banned entirely due to unacceptable risks

Examples:

  • Social scoring
  • Real-time biometric identification
  • Emotion recognition at work

High Risk

Requires strict compliance measures and conformity assessment

Examples:

  • CV screening
  • Credit scoring
  • Medical devices

Limited Risk

Transparency obligations apply

Examples:

  • Chatbots
  • Emotion detection
  • AI-generated content

Minimal Risk

No specific requirements; voluntary codes of conduct

Examples:

  • Spam filters
  • Video games
  • Inventory management

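The four tiers above amount to a lookup from use case to risk level. A minimal sketch, using only the example use cases listed on this page (the dictionary and function names are illustrative):

```python
# Illustrative mapping of the AI Act's four risk tiers to the examples above.
RISK_TIERS = {
    "prohibited": ["social scoring", "real-time biometric identification",
                   "emotion recognition at work"],
    "high": ["CV screening", "credit scoring", "medical devices"],
    "limited": ["chatbots", "emotion detection", "AI-generated content"],
    "minimal": ["spam filters", "video games", "inventory management"],
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if use_case.lower() in (e.lower() for e in examples):
            return tier
    return "unclassified"
```

Real classification requires legal analysis against Annex III and the prohibited-practice articles; this table only captures the illustrative examples given here.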
Time-Sensitive Deadlines

Key Compliance Dates

The EU AI Act has a phased implementation timeline. Know what deadlines apply to you.

Completed: August 1, 2024

AI Act Entry into Force

The regulation officially entered into force

Completed: February 2, 2025

Prohibited AI Ban

Prohibitions on unacceptable AI practices take effect

Upcoming: August 2, 2025

GPAI Rules Apply

General-purpose AI model requirements become mandatory

Future: August 2, 2027

Full Enforcement

All high-risk AI requirements fully applicable, including for AI embedded in products covered by existing EU product legislation (most other provisions already apply from August 2, 2026)
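The phased timeline above can be checked with a simple date comparison. A minimal sketch using the milestones listed here (the helper is illustrative and not legal advice):

```python
from datetime import date

# Key EU AI Act milestones from the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited AI practices banned"),
    (date(2025, 8, 2), "GPAI model requirements apply"),
    (date(2027, 8, 2), "Full enforcement of high-risk requirements"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestones that have already taken effect as of `today`."""
    return [label for d, label in MILESTONES if d <= today]
```

For example, as of March 2025 the first two milestones have taken effect, while the GPAI and full-enforcement deadlines are still ahead.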

Training

Learning Modules

Complete these 5 comprehensive modules to master EU AI Act compliance. Each module includes lessons, quizzes, and practical guidance.

1. AI Act Fundamentals (100 min)

Learn the basics of the EU AI Act including its purpose, scope, key definitions, risk classification framework, prohibited practices, AI literacy obligations, and implementation timeline.

2. High-Risk AI Compliance (190 min)

Master the comprehensive requirements for high-risk AI systems including risk management, data governance, technical documentation, logging, transparency, human oversight, conformity assessment, provider/deployer obligations, importer/distributor obligations, and transparency deep-dive.

3. GPAI Compliance (115 min)

Understand General-Purpose AI model requirements including provider obligations, systemic risk classification, downstream relationships, codes of practice, and real-world implementation strategies.

4. Governance & Penalties (94 min)

Navigate the EU AI Act's governance structure, national competent authorities, penalty framework, market surveillance, post-market monitoring, and develop effective compliance strategies.

5. Innovation Pathways (96 min)

Explore regulatory sandboxes, real-world testing frameworks, codes of conduct, SME and startup support measures, and strategies for balancing innovation with compliance.

Key Concepts

Essential terms and definitions you need to know.

AI System

A machine-based system designed to operate with varying levels of autonomy and generate outputs such as predictions, recommendations, or decisions.

Provider

A natural or legal person that develops an AI system or has one developed and places it on the market or puts it into service under its own name.

Deployer

A natural or legal person using an AI system under their authority, except for personal non-professional activity.

High-Risk AI

AI systems that pose significant risks to health, safety, or fundamental rights and require strict compliance measures.

GPAI

General-purpose AI models capable of performing a wide range of tasks, subject to specific transparency requirements.

Conformity Assessment

The process of demonstrating that a high-risk AI system meets the requirements of the EU AI Act.

Ready for the next step?

Assess your AI systems for compliance

Continue to Assess