Overview
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive, horizontally applicable AI regulation. It establishes a tiered, risk-based regulatory framework that classifies AI systems according to their potential impact on fundamental rights and safety.
The regulation applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose outputs are used within the EU, regardless of where the provider is established. This extraterritorial reach means organizations worldwide must assess compliance if their AI systems reach EU users or markets.
The Act creates a new institutional architecture including a European AI Office, national competent authorities, and a network of AI regulatory sandboxes to support innovation while maintaining safety standards.
Scope
The EU AI Act applies to:
- providers who develop AI systems (or have them developed) and place them on the EU market or put them into service;
- deployers who use AI systems under their authority;
- importers and distributors in the AI supply chain;
- product manufacturers integrating AI into products covered by existing EU harmonisation legislation; and
- any entity whose AI system output is used within the EU.
It exempts AI systems used exclusively for military, defense, or national security purposes, as well as AI used purely for scientific R&D prior to market placement.
Key Provisions
Prohibited practices: The Act bans AI systems that deploy subliminal, manipulative, or deceptive techniques; exploit vulnerabilities of specific groups; perform social scoring by public or private entities; conduct real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions); or perform untargeted scraping of facial images.
High-risk systems: AI systems in areas such as biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice must meet strict requirements, including risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness standards.
General-purpose AI: GPAI model providers must supply technical documentation and training data summaries and comply with EU copyright law. Models posing systemic risk face additional obligations, including adversarial testing, incident monitoring, cybersecurity protections, and energy consumption reporting.
Transparency: AI systems that interact with people, generate synthetic content, or perform emotion recognition or biometric categorisation must disclose their AI nature to users. Deepfakes and AI-generated text published to inform the public on matters of public interest must be labeled.
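The four-tier structure above can be sketched as a simple classification helper. This is an illustrative sketch only: the tier names follow the Act, but the example use-case mapping is a simplification I introduce here, not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices (e.g., social scoring)
    HIGH_RISK = "high-risk"     # Annex III areas and regulated products
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"         # no specific obligations

# Hypothetical mapping of example use cases to tiers -- a simplification
# for illustration, not legal advice.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening_for_hiring": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example use case.

    Defaults to MINIMAL purely for this sketch; real classification
    requires case-by-case legal analysis.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice an inventory tool like this only triages systems for legal review; the tier assignment itself must follow the Act's definitions.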
Implementation Timeline
- August 1, 2024: AI Act enters into force
- February 2, 2025: Prohibitions on banned AI practices apply
- August 2, 2025: GPAI model obligations apply; Codes of Practice finalised
- August 2, 2026: Most high-risk AI system requirements apply
- August 2, 2027: Obligations for high-risk AI in Annex I products (e.g., medical devices, machinery)
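The phased dates above lend themselves to a simple applicability check. The sketch below encodes the timeline as data; the milestone labels are abbreviations of the entries above.

```python
from datetime import date

# Phased application dates from the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on banned AI practices apply"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "Most high-risk AI system requirements apply"),
    (date(2027, 8, 2), "Obligations for high-risk AI in Annex I products apply"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [label for when, label in MILESTONES if when <= on]
```

For example, checking a date in early 2026 returns the entry-into-force, prohibitions, and GPAI milestones but not yet the high-risk requirements.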
Compliance Requirements
- Classify all AI systems by risk level (prohibited, high-risk, limited, minimal)
- Implement a risk management system for high-risk AI (Article 9)
- Establish data governance practices for training, validation, and testing datasets (Article 10)
- Prepare and maintain technical documentation (Article 11)
- Implement automatic logging/record-keeping (Article 12)
- Provide transparency information to deployers (Article 13)
- Design systems for effective human oversight (Article 14)
- Ensure accuracy, robustness, and cybersecurity (Article 15)
- Establish a quality management system (Article 17)
- Register high-risk AI systems in the EU database (Article 49)
- For GPAI: provide model cards, training data summaries, comply with copyright
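The high-risk checklist above maps naturally onto a tracking structure. The sketch below is an assumed internal tool, not anything the Act prescribes; the article numbers and short obligation names come from the list above.

```python
from dataclasses import dataclass, field

# High-risk provider obligations from the checklist above (article references
# per the Act; names abbreviated).
HIGH_RISK_OBLIGATIONS = {
    9: "Risk management system",
    10: "Data governance",
    11: "Technical documentation",
    12: "Automatic logging / record-keeping",
    13: "Transparency information for deployers",
    14: "Human oversight design",
    15: "Accuracy, robustness, cybersecurity",
    17: "Quality management system",
    49: "Registration in the EU database",
}

@dataclass
class ComplianceTracker:
    """Tracks which high-risk obligations a provider has evidenced."""
    completed: set[int] = field(default_factory=set)

    def mark_done(self, article: int) -> None:
        if article not in HIGH_RISK_OBLIGATIONS:
            raise ValueError(f"Unknown article: {article}")
        self.completed.add(article)

    def outstanding(self) -> list[str]:
        """Obligations not yet evidenced, in article-number order."""
        return [name for art, name in sorted(HIGH_RISK_OBLIGATIONS.items())
                if art not in self.completed]
```

A tracker like this supports gap analysis but does not replace the conformity assessment itself.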
Enforcement Mechanism
Enforcement is shared between the European AI Office (for GPAI models) and national competent authorities (for other AI systems). Penalties are tiered, with each cap set at the higher of a fixed amount and a share of global annual turnover: up to €35 million or 7% for prohibited practices; up to €15 million or 3% for other violations; and up to €7.5 million or 1% for providing incorrect information. SMEs and startups benefit from proportionate penalty caps. Market surveillance authorities can order withdrawal or recall of non-compliant AI systems.
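The tiered penalty caps reduce to a small calculation: for each tier, take the higher of the fixed amount and the turnover share. The sketch below models only the general regime, not the SME-specific caps.

```python
def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine under the tiered penalty regime.

    Each tier's cap is the higher of a fixed amount and a percentage of
    total worldwide annual turnover. SME-specific caps are not modelled.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_violation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, share = tiers[violation]
    return max(fixed, share * global_turnover_eur)
```

For a company with €1 billion in global turnover, the prohibited-practice cap is 7% (€70 million) rather than the €35 million fixed amount, because the percentage is higher.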
Practical Implications
Organizations must conduct a comprehensive AI system inventory, classify each system by risk tier, and implement compliance measures proportionate to that tier. High-risk system providers face the heaviest burden: full conformity assessments, ongoing monitoring, and incident reporting. The extraterritorial scope means non-EU companies serving EU markets must comply. ISO/IEC 42001 certification can demonstrate alignment with Article 17 quality management requirements. Given the phased deadlines, organizations should begin compliance programs immediately.
Relation to EU AI Act
This is the EU AI Act itself — the reference framework against which all other global regulations are compared. It serves as the 'gold standard' for comprehensive AI regulation and has influenced legislation worldwide, including Brazil's AI Bill and elements of state-level US laws.