Status: Enacted | Jurisdiction: European Union

EU AI Act

Artificial Intelligence Act

Effective: Phased, Feb 2025 - Aug 2027
Philosophy: Fundamental Rights & Safety
Penalties: Up to €35M or 7% of global annual turnover
Full Text: EUR-Lex

Overview

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive, horizontally applicable AI regulation. It establishes a tiered, risk-based framework that classifies AI systems according to their potential impact on fundamental rights and safety.

The regulation applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose outputs are used within the EU, regardless of where the provider is established. This extraterritorial reach means organizations worldwide must assess compliance if their AI systems affect EU residents.

The Act creates a new institutional architecture including a European AI Office, national competent authorities, and a network of AI regulatory sandboxes to support innovation while maintaining safety standards.

Scope

The EU AI Act applies to:

  • Providers who develop AI systems (or have them developed) and place them on the EU market or put them into service
  • Deployers who use AI systems under their authority
  • Importers and distributors in the AI supply chain
  • Product manufacturers integrating AI into products covered by existing EU harmonisation legislation
  • Any entity whose AI system output is used within the EU

It exempts AI systems used exclusively for military, defence, or national security purposes, as well as AI used purely for scientific research and development prior to market placement.

Key Provisions

1. Prohibited AI Practices (Chapter II)

Bans AI systems that deploy subliminal, manipulative, or deceptive techniques; exploit vulnerabilities of specific groups; perform social scoring by public or private entities; conduct real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions); or build facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

2. High-Risk AI Systems (Chapter III)

AI systems in areas like biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice must meet strict requirements including risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness standards.

3. General-Purpose AI (Chapter V)

GPAI models must provide technical documentation, training data summaries, and comply with copyright law. Models with systemic risk face additional obligations including adversarial testing, incident monitoring, cybersecurity protections, and energy consumption reporting.

4. Transparency Obligations (Chapter IV)

AI systems that interact with people, generate synthetic content, or perform emotion recognition/biometric categorisation must disclose their AI nature to users. Deepfakes and AI-generated text published for public information purposes must be labeled.

Implementation Timeline

August 1, 2024

AI Act enters into force

February 2, 2025

Prohibitions on banned AI practices apply

August 2, 2025

GPAI model obligations apply; Codes of Practice finalised

August 2, 2026

Most high-risk AI system requirements apply

August 2, 2027

Obligations for high-risk AI in Annex I products (e.g., medical devices, machinery)
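The phased dates above amount to a simple lookup: given any date, the milestones already in effect are those whose trigger date has passed. A minimal sketch of that logic (the milestone labels and function name are illustrative, not from any official tooling):

```python
from datetime import date

# Phased EU AI Act milestones, per the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on banned practices apply"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "Most high-risk requirements apply"),
    (date(2027, 8, 2), "High-risk obligations for Annex I products apply"),
]

def milestones_in_effect(on: date) -> list[str]:
    """Return the milestones that have taken effect by the given date."""
    return [label for d, label in MILESTONES if d <= on]

# By January 2026, the first three phases already apply.
print(milestones_in_effect(date(2026, 1, 1)))
```

A compliance program can run the same check in reverse: any milestone not yet in effect is a deadline to plan toward.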

Compliance Requirements

  • Classify all AI systems by risk level (prohibited, high-risk, limited, minimal)
  • Implement a risk management system for high-risk AI (Article 9)
  • Establish data governance practices for training, validation, and testing datasets (Article 10)
  • Prepare and maintain technical documentation (Article 11)
  • Implement automatic logging/record-keeping (Article 12)
  • Provide transparency information to deployers (Article 13)
  • Design systems for effective human oversight (Article 14)
  • Ensure accuracy, robustness, and cybersecurity (Article 15)
  • Establish a quality management system (Article 17)
  • Register high-risk AI systems in the EU database (Article 49)
  • For GPAI: provide model cards, training data summaries, comply with copyright
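The checklist above can be modeled as a small inventory record that maps each system's risk tier to the articles it triggers. A hedged sketch, assuming hypothetical class and field names (the article numbers are from the checklist; limited-risk transparency duties sit in Article 50):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Articles applying to high-risk systems, per the checklist above.
HIGH_RISK_ARTICLES = [9, 10, 11, 12, 13, 14, 15, 17, 49]

@dataclass
class AISystemRecord:
    name: str
    tier: RiskTier

    def applicable_articles(self) -> list[int]:
        """Rough mapping from risk tier to triggered articles."""
        if self.tier is RiskTier.HIGH_RISK:
            return HIGH_RISK_ARTICLES
        if self.tier is RiskTier.LIMITED:
            return [50]  # transparency duties only
        return []  # minimal risk: no mandatory obligations

rec = AISystemRecord("cv-screening", RiskTier.HIGH_RISK)
print(rec.applicable_articles())
```

Prohibited-tier systems are omitted from the mapping deliberately: they may not be placed on the market at all, so there is no compliance path to enumerate.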

Enforcement Mechanism

Enforcement is shared between the European AI Office (for GPAI models) and national competent authorities (for other AI systems). Penalties are tiered: up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices; up to €15 million or 3% for other violations; and up to €7.5 million or 1% for supplying incorrect information to authorities. For SMEs and startups, each cap is instead the lower of the two amounts. Market surveillance authorities can order the withdrawal or recall of non-compliant AI systems.
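For the standard regime, each tier's cap is the higher of a fixed amount and a share of worldwide annual turnover. A minimal sketch of that arithmetic (tier keys and the function name are illustrative; percentages are kept as integers to avoid float rounding):

```python
# Tiered penalty caps: (fixed amount in EUR, percent of global turnover).
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 7),
    "other_violation": (15_000_000, 3),
    "incorrect_information": (7_500_000, 1),
}

def max_penalty(tier: str, global_turnover_eur: int) -> int:
    """Cap for the standard regime: the HIGHER of the two amounts.

    Note: for SMEs and startups the Act flips this to the lower
    of the two, which this sketch does not model.
    """
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, global_turnover_eur * pct // 100)

# A firm with €1bn turnover: 7% = €70M exceeds the €35M floor.
print(max_penalty("prohibited_practice", 1_000_000_000))  # 70000000
```

For smaller firms the fixed amount usually dominates: at €100M turnover, 3% is only €3M, so the €15M cap applies for other violations.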

Practical Implications

Organizations must conduct a comprehensive AI system inventory, classify each system by risk tier, and implement compliance measures proportionate to the risk level. High-risk system providers face the heaviest burden: full conformity assessments, ongoing monitoring, and incident reporting. The extraterritorial scope means non-EU companies serving EU markets must comply. ISO 42001 certification can demonstrate alignment with Article 17 quality management requirements. Organizations should begin compliance programs immediately given the phased deadlines.

Relation to EU AI Act

This is the EU AI Act itself — the reference framework against which all other global regulations are compared. It serves as the 'gold standard' for comprehensive AI regulation and has influenced legislation worldwide, including Brazil's AI Bill and elements of state-level US laws.

Key Features

  • Risk-based classification (Prohibited, High-Risk, Limited, Minimal)
  • Prohibited practices include social scoring by any entity, manipulative AI, and real-time biometric identification
  • High-risk systems require conformity assessments, technical documentation, and human oversight
  • GPAI models must provide training data summaries and comply with copyright law
  • Systemic-risk models require additional safety evaluations and incident reporting
© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.