Requirements & Obligations
Complete register of EU AI Act requirements and obligations by role. Click any requirement to view its implementation standard and controls.
Total Obligations
Obligation Categories
Linked Standards
Implementation Controls
Enforcement Timeline
Prohibited practices & AI literacy (2 Feb 2025)
GPAI obligations (2 Aug 2025)
High-risk system requirements & transparency obligations (2 Aug 2026)
Annex I product integration (2 Aug 2027)
Penalty Framework
EUR 35 million
or 7% of global annual turnover, whichever is higher
Prohibited practices (Art. 5)
EUR 15 million
or 3% of global annual turnover, whichever is higher
High-risk non-compliance
EUR 7.5 million
or 1.5% of global annual turnover, whichever is higher
Supplying incorrect or misleading information to authorities
General Obligations
Applies to all AI systems regardless of risk level
Prohibited AI Practices
AI practices that are entirely banned under the EU AI Act
AI systems using subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour causing significant harm
AI systems exploiting vulnerabilities due to age, disability, or social/economic situation to materially distort behaviour causing significant harm
AI systems evaluating or classifying persons based on social behaviour or personal characteristics, leading to detrimental or disproportionate treatment
AI systems assessing criminal risk based solely on profiling or personality traits (exceptions for human-assisted assessments based on objective facts)
AI systems creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage
AI systems inferring emotions in workplace and education institutions (exceptions for medical or safety purposes)
Biometric categorisation systems deducing race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation
Real-time remote biometric identification in public spaces for law enforcement (narrow exceptions for victim search, imminent threats, serious crime suspects)
High-Risk System Requirements
Technical requirements for high-risk AI systems (Art. 8–15)
Establish, implement, document, and maintain a continuous iterative risk management system throughout the entire AI lifecycle
Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete, subject to appropriate data governance practices
Draw up comprehensive technical documentation per Annex IV before placing on market, demonstrating compliance with all requirements
Enable automatic recording of events (logs) over the system's lifetime for risk identification, post-market monitoring, and operation auditing
Design for sufficient operational transparency; provide instructions for use with performance characteristics, limitations, accuracy metrics, and oversight measures
Design for effective human oversight: ability to understand, detect anomalies, interpret output, override decisions, and intervene or stop the system
Achieve appropriate levels of accuracy, robustness against errors and attacks (data poisoning, adversarial examples, model evasion), and cybersecurity
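The record-keeping requirement above calls for automatic, lifetime event logging, but the Act does not prescribe a format. A minimal sketch of one common approach, timestamped JSON Lines records, with hypothetical event names and field names chosen for illustration:

```python
import json
from datetime import datetime, timezone

def log_event(logfile, event_type, detail):
    """Append one timestamped, machine-readable event record (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,  # hypothetical taxonomy, e.g. "inference", "override"
        "detail": detail,
    }
    logfile.write(json.dumps(record) + "\n")
    return record

# Illustrative use: record a human-oversight intervention
with open("ai_system_events.jsonl", "a") as f:
    rec = log_event(f, "human_override", {"decision_id": "D-123"})
```

An append-only, one-record-per-line format like this keeps logs easy to audit and to retain or prune per record, which suits the post-market monitoring and retention duties listed later.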
Provider Obligations
Obligations for organisations that develop or place high-risk AI systems on the market
Implement a documented QMS covering compliance strategy, design procedures, testing, data management, risk management, post-market monitoring, and accountability
Undergo conformity assessment before placing on market — internal control (Annex VI) or notified body assessment (Annex VII) for biometric systems
Draw up written EU declaration of conformity, affix CE marking on the system or documentation, and keep documentation for 10 years
Register the provider and each high-risk AI system in the EU database before placing on market or putting into service
Establish a proportionate post-market monitoring system to actively collect and analyse performance data throughout the system's lifetime
Report serious incidents to market surveillance authorities — within 15 days generally, 2 days for widespread infringements, 10 days for deaths
Take corrective actions for non-conforming systems (withdraw, disable, recall), inform all downstream operators, and cooperate with competent authorities
Keep technical documentation, QMS records, and conformity certificates for 10 years; retain automatically generated logs for at least 6 months
Deployer Obligations
Obligations for organisations that use high-risk AI systems
Use systems in accordance with instructions for use; ensure input data is relevant and sufficiently representative for the intended purpose
Assign human oversight to persons with necessary competence, training, authority, and support to effectively oversee the AI system
Monitor operation based on instructions; inform providers of risks; suspend use if risk identified; report serious incidents to authorities
Keep automatically generated logs for at least 6 months (or longer if required by law), to the extent logs are under deployer's control
Inform workers' representatives and affected workers before workplace deployment; inform natural persons that they are subject to AI system decisions
Public bodies and certain private deployers must assess impact on fundamental rights before deployment, covering processes, affected persons, risks, and oversight
Affected persons subject to AI-based decisions with legal effects have the right to clear and meaningful explanations of the AI system's role in the decision
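The log-retention duty above ("at least 6 months, or longer if required by law") implies a retention window check somewhere in the deployer's tooling. A minimal sketch, assuming ISO-8601 timestamps on each record and approximating six months as 183 days; the exact window and any longer statutory period are policy choices, not prescribed here:

```python
from datetime import datetime, timedelta, timezone

# Assumption: "at least 6 months" approximated as 183 days.
RETENTION = timedelta(days=183)

def expired(record_timestamp, now=None):
    """True if an ISO-8601 timestamped log record is past the retention window."""
    now = now or datetime.now(timezone.utc)
    return now - datetime.fromisoformat(record_timestamp) > RETENTION

def prune(records, now=None):
    """Keep only records still inside the retention window."""
    return [r for r in records if not expired(r["timestamp"], now)]

# Illustrative check with a fixed clock
now = datetime(2025, 9, 1, tzinfo=timezone.utc)
old = {"timestamp": "2025-01-01T00:00:00+00:00"}
new = {"timestamp": "2025-08-01T00:00:00+00:00"}
kept = prune([old, new], now)
```

Note the obligation sets a floor, not a ceiling: records may only be pruned once both the Act's minimum and any longer legal requirement are satisfied.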
Importer & Distributor Obligations
Obligations for organisations importing or distributing AI systems in the EU
Verify conformity assessment, technical documentation, CE marking, EU declaration, and authorised representative appointment before placing on market
Verify CE marking, EU declaration, instructions for use, and that provider and importer have met their obligations before making available on market
Any actor putting their name on a high-risk system, making substantial modifications, or changing its intended purpose becomes the provider with full obligations
Transparency Obligations
Disclosure requirements for all AI systems interacting with persons or generating content
AI systems interacting directly with persons must be designed so users are informed they are interacting with an AI system (unless obvious from context)
AI systems generating synthetic audio, image, video, or text must mark outputs in machine-readable format as artificially generated or manipulated
Deployers of emotion recognition or biometric categorisation systems must inform exposed persons and process data in accordance with GDPR
Deployers must disclose when content (image, audio, video, text on public interest matters) has been artificially generated or manipulated
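The marking obligation above requires machine-readable disclosure but leaves the mechanism open (provenance standards such as C2PA exist, as do watermarks). A minimal illustrative sketch of the idea, a JSON sidecar manifest with made-up field names, not any published standard:

```python
import json
import hashlib
from pathlib import Path

def write_provenance_sidecar(content_path, generator):
    """Write a machine-readable sidecar declaring the file AI-generated.
    Field names are illustrative, not a published standard."""
    data = Path(content_path).read_bytes()
    manifest = {
        "ai_generated": True,          # the disclosure itself
        "generator": generator,        # which system produced the content
        "sha256": hashlib.sha256(data).hexdigest(),  # binds manifest to file
    }
    Path(content_path + ".provenance.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Illustrative use with stand-in content
Path("synthetic.png").write_bytes(b"stand-in image bytes")
manifest = write_provenance_sidecar("synthetic.png", "example-model-v1")
```

A content hash in the manifest ties the disclosure to one specific file; in practice, embedded metadata or watermarking is preferable to a detachable sidecar.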
General-Purpose AI (GPAI) Obligations
Requirements for providers of general-purpose AI models, including those with systemic risk
Draw up and maintain technical documentation of the model including training/testing process and evaluation results per Annex XI
Provide documentation enabling downstream AI system providers to understand model capabilities and limitations per Annex XII
Implement copyright compliance policy respecting rights reservations; publish sufficiently detailed summary of training content per AI Office template
Perform model evaluation using standardised protocols, conduct adversarial testing, and assess/mitigate possible systemic risks at Union level
Track, document and report serious incidents to the AI Office without undue delay; ensure adequate cybersecurity for model and physical infrastructure
Start tracking your compliance
Add AI systems to your inventory first, then track requirements for each system.