aicomply.

Requirements & Obligations

Complete register of EU AI Act requirements and obligations by role. Click any requirement to view its implementation standard and controls.

43

Total Obligations

8

Obligation Categories

43

Linked Standards

103+

Implementation Controls

Enforcement Timeline

2 Feb 2025

Prohibited practices & AI literacy

2 Aug 2025

GPAI obligations, governance & penalties

2 Aug 2026

High-risk system requirements

2 Aug 2027

High-risk systems embedded in Annex I products

Penalty Framework

EUR 35 million

or 7% of global annual turnover, whichever is higher

Prohibited practices (Art. 5)

EUR 15 million

or 3% of global annual turnover, whichever is higher

High-risk non-compliance

EUR 7.5 million

or 1% of global annual turnover, whichever is higher

Supplying incorrect or misleading information to authorities
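The tiered caps follow a "higher of" rule under Art. 99: for undertakings, each tier's maximum is the fixed amount or the stated percentage of worldwide annual turnover, whichever is higher. A minimal sketch (function name and turnover figures are illustrative):

```python
def max_fine_eur(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum administrative fine for an undertaking under Art. 99 AI Act:
    the higher of the fixed cap and the percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice tier (Art. 5): EUR 35M or 7% of turnover, whichever is higher
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a smaller undertaking the fixed cap dominates: with EUR 100M turnover, the same tier caps out at EUR 35M.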

Prohibited AI Practices

AI practices that are entirely banned under the EU AI Act

Art. 5(1)(a)
Subliminal & Manipulative Techniques

AI systems using subliminal techniques beyond consciousness or manipulative/deceptive techniques to materially distort behaviour causing significant harm

Prohibited · All operators
View standard & controls
Art. 5(1)(b)
Exploitation of Vulnerabilities

AI systems exploiting vulnerabilities due to age, disability, or social/economic situation to materially distort behaviour causing significant harm

Prohibited · All operators
View standard & controls
Art. 5(1)(c)
Social Scoring

AI systems evaluating or classifying persons based on social behaviour or personal characteristics, leading to detrimental or disproportionate treatment

Prohibited · All operators
View standard & controls
Art. 5(1)(d)
Individual Predictive Policing

AI systems assessing criminal risk based solely on profiling or personality traits (exceptions for human-assisted assessments based on objective facts)

Prohibited · All operators
View standard & controls
Art. 5(1)(e)
Untargeted Facial Recognition Scraping

AI systems creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage

Prohibited · All operators
View standard & controls
Art. 5(1)(f)
Emotion Inference in Workplace/Education

AI systems inferring emotions in workplace and education institutions (exceptions for medical or safety purposes)

Prohibited · All operators
View standard & controls
Art. 5(1)(g)
Biometric Categorisation (Protected Characteristics)

Biometric categorisation systems deducing race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation

Prohibited · All operators
View standard & controls
Art. 5(1)(h)
Real-Time Remote Biometric Identification

Real-time remote biometric identification in public spaces for law enforcement (narrow exceptions for victim search, imminent threats, serious crime suspects)

Prohibited · Law enforcement
View standard & controls

High-Risk System Requirements

Technical requirements for high-risk AI systems (Art. 8–15)

Art. 9
Risk Management System

Establish, implement, document, and maintain a continuous iterative risk management system throughout the entire AI lifecycle

High-Risk Only · Provider · 14 controls
View standard & controls
Art. 10
Data & Data Governance

Training, validation, and testing datasets must meet quality criteria, be representative, free of errors, and subject to appropriate governance practices

High-Risk Only · Provider · 15 controls
View standard & controls
Art. 11
Technical Documentation

Draw up comprehensive technical documentation per Annex IV before placing on market, demonstrating compliance with all requirements

High-Risk Only · Provider · 10 controls
View standard & controls
Art. 12
Record-Keeping & Logging

Enable automatic recording of events (logs) over the system's lifetime for risk identification, post-market monitoring, and operation auditing

High-Risk Only · Provider · 8 controls
View standard & controls
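As an illustration only (the Act prescribes no particular mechanism), Art. 12's automatic event recording is commonly implemented as timestamped, structured log lines. The logger name, file name, and event fields below are assumptions:

```python
import json
import logging
import time

# Hypothetical structured event logger: one JSON line per recorded event,
# timestamped, suitable for later risk identification and auditing.
logger = logging.getLogger("ai_system.events")
handler = logging.FileHandler("ai_events.log")  # illustrative file name
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record_event(event_type: str, **details):
    """Append one timestamped JSON event to the system's lifetime log."""
    logger.info(json.dumps({"ts": time.time(), "event": event_type, **details}))

record_event("inference", model_version="1.2.0", input_id="req-42", risk_flag=False)
```

In practice the log sink would be append-only, access-controlled storage rather than a local file, so records can support post-market monitoring over the system's lifetime.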
Art. 13
Transparency & Information to Deployers

Design for sufficient operational transparency; provide instructions for use with performance characteristics, limitations, accuracy metrics, and oversight measures

High-Risk Only · Provider · 5 controls
View standard & controls
Art. 14
Human Oversight

Design for effective human oversight: ability to understand, detect anomalies, interpret output, override decisions, and intervene or stop the system

High-Risk Only · Provider & Deployer · 4 controls
View standard & controls
Art. 15
Accuracy, Robustness & Cybersecurity

Achieve appropriate levels of accuracy, robustness against errors and attacks (data poisoning, adversarial examples, model evasion), and cybersecurity

High-Risk Only · Provider · 8 controls
View standard & controls

Provider Obligations

Obligations for organisations that develop or place high-risk AI systems on the market

Art. 17
Quality Management System

Implement a documented QMS covering compliance strategy, design procedures, testing, data management, risk management, post-market monitoring, and accountability

High-Risk Only · Provider · 13 controls
View standard & controls
Art. 43
Conformity Assessment

Undergo conformity assessment before placing on market — internal control (Annex VI) or notified body assessment (Annex VII) for biometric systems

High-Risk Only · Provider · 6 controls
View standard & controls
Art. 47–48
EU Declaration & CE Marking

Draw up written EU declaration of conformity, affix CE marking on the system or documentation, and keep documentation for 10 years

High-Risk Only · Provider
View standard & controls
Art. 49
Registration in EU Database

Register the provider and each high-risk AI system in the EU database before placing on market or putting into service

High-Risk Only · Provider · 5 controls
View standard & controls
Art. 72
Post-Market Monitoring

Establish a proportionate post-market monitoring system to actively collect and analyse performance data throughout the system's lifetime

High-Risk Only · Provider · 5 controls
View standard & controls
Art. 73
Serious Incident Reporting

Report serious incidents to market surveillance authorities — within 15 days generally, 2 days for widespread infringements, 10 days for deaths

High-Risk Only · Provider · 5 controls
View standard & controls
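The Art. 73 reporting windows can be turned into a simple deadline calculator. This is a sketch: the category keys are assumptions, and the Act actually requires reporting "immediately" with these dates as outer limits, running from awareness or from establishing a causal link:

```python
from datetime import date, timedelta

# Outer reporting limits per Art. 73: 15 days generally, 2 days for
# widespread infringements, 10 days in the event of a death.
DEADLINE_DAYS = {"general": 15, "widespread_infringement": 2, "death": 10}

def report_deadline(awareness_date: date, incident_type: str) -> date:
    """Latest date for notifying the market surveillance authority."""
    return awareness_date + timedelta(days=DEADLINE_DAYS[incident_type])

print(report_deadline(date(2025, 3, 1), "death"))  # 2025-03-11
```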
Art. 20–21
Corrective Actions & Cooperation

Take corrective actions for non-conforming systems (withdraw, disable, recall), inform all downstream operators, and cooperate with competent authorities

High-Risk Only · Provider
View standard & controls
Art. 18–19
Documentation & Log Retention

Keep technical documentation, QMS records, and conformity certificates for 10 years; retain automatically generated logs for at least 6 months

High-Risk Only · Provider
View standard & controls
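A retention policy like the one above (logs kept for at least six months, documentation for 10 years) can be enforced with a simple eligibility check before any deletion job runs. A minimal sketch, assuming six months is approximated as 183 days; real deployments must also honour any longer periods required by Union or national law:

```python
from datetime import datetime, timedelta, timezone

MIN_LOG_RETENTION = timedelta(days=183)  # assumption: six months ≈ 183 days

def may_delete_log(log_created_at: datetime, now: datetime) -> bool:
    """True only once the minimum retention period has fully elapsed."""
    return now - log_created_at >= MIN_LOG_RETENTION

print(may_delete_log(datetime(2025, 1, 1, tzinfo=timezone.utc),
                     datetime(2025, 9, 1, tzinfo=timezone.utc)))  # True
```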

Deployer Obligations

Obligations for organisations that use high-risk AI systems

Art. 26(1),(4)
Use per Instructions & Input Data

Use systems in accordance with instructions for use; ensure input data is relevant and sufficiently representative for the intended purpose

High-Risk Only · Deployer
View standard & controls
Art. 26(2)
Human Oversight Assignment

Assign human oversight to persons with necessary competence, training, authority, and support to effectively oversee the AI system

High-Risk Only · Deployer
View standard & controls
Art. 26(5)
Monitor, Report & Suspend

Monitor operation based on instructions; inform providers of risks; suspend use if risk identified; report serious incidents to authorities

High-Risk Only · Deployer
View standard & controls
Art. 26(6)
Log Retention

Keep automatically generated logs for at least 6 months (or longer if required by law), to the extent logs are under deployer's control

High-Risk Only · Deployer
View standard & controls
Art. 26(7),(11)
Inform Workers & Affected Persons

Inform workers' representatives and affected workers before workplace deployment; inform natural persons that they are subject to AI system decisions

High-Risk Only · Deployer
View standard & controls
Art. 27
Fundamental Rights Impact Assessment

Public bodies and certain private deployers must assess impact on fundamental rights before deployment, covering processes, affected persons, risks, and oversight

High-Risk Only · Deployer (public bodies)
View standard & controls
Art. 86
Right to Explanation

Affected persons subject to AI-based decisions with legal effects have the right to clear and meaningful explanations of the AI system's role in the decision

High-Risk Only · Deployer
View standard & controls

Start tracking your compliance

Add AI systems to your inventory first, then track requirements for each system.

Go to Inventory