Compliance Controls
114 controls across 14 categories mapping EU AI Act obligations to specific implementation measures.
Total controls: 114 · Categories: 14 · Critical risk: 3 · High risk: 65
Classification
AI system classification controls
Ensure no AI systems constitute prohibited practices under EU AI Act Article 5 by systematically screening all AI systems against the 8 prohibited practices before development or deployment, thereby preventing legal violations, regulatory penalties, and reputational damage.
Accurately identify AI systems that are safety components of products covered by Union harmonization legislation (Annex I) to ensure they undergo appropriate conformity assessment and meet product safety requirements.
Accurately identify AI systems that fall under Annex III high-risk use cases to ensure they comply with all EU AI Act requirements for high-risk systems, protecting fundamental rights and safety of persons.
Make definitive classification decisions based on comprehensive assessments and maintain complete documentation to ensure traceability, support audits, and enable appropriate compliance obligations for each AI system.
Maintain a comprehensive, current, and accurate register of all AI system classifications to enable effective oversight, compliance monitoring, audit support, and regulatory reporting.
Ensure all AI system classifications remain accurate and current through systematic annual reviews, identifying any changes in AI systems, intended purposes, deployment contexts, or regulatory requirements that necessitate reclassification.
Ensure AI system classifications are reassessed immediately when triggering events occur that may affect the classification, preventing operation of AI systems under incorrect classifications and associated compliance gaps.
Manage classification changes through formal change control to ensure compliance obligations are appropriately updated, gaps are identified and addressed, and all stakeholders are informed of classification changes and their implications.
Maintain complete, organized, and accessible documentation packages for all classification decisions to support audits, regulatory inspections, and demonstrate due diligence in classification processes.
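The register and review controls above can be illustrated with a minimal record structure. This is only a sketch under assumed names: the `RiskClass` labels and the record fields are illustrative choices, not terms defined by the EU AI Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskClass(Enum):
    """Illustrative classification outcomes (labels are not Act-defined terms)."""
    PROHIBITED = "prohibited"          # Article 5 practice: must not be deployed
    HIGH_ANNEX_I = "high_annex_i"      # safety component under Annex I legislation
    HIGH_ANNEX_III = "high_annex_iii"  # Annex III high-risk use case
    LIMITED = "limited"                # transparency obligations only
    MINIMAL = "minimal"

@dataclass
class ClassificationRecord:
    """One entry in the AI system classification register (fields illustrative)."""
    system_id: str
    intended_purpose: str
    risk_class: RiskClass
    rationale: str        # basis for the decision, kept for audit traceability
    decided_on: date
    next_review: date     # annual review date per the controls above
    triggering_events: list = field(default_factory=list)

    def review_due(self, today: date) -> bool:
        """True when the annual review is overdue or a triggering event occurred."""
        return today >= self.next_review or bool(self.triggering_events)
```

A register is then simply a collection of such records, filterable by `risk_class` and `next_review` to drive the annual-review and reporting controls.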
Risk Management
Risk management system controls
Establish a continuous, iterative AI risk management system integrated into the overall enterprise risk management framework to ensure systematic identification, assessment, treatment, and monitoring of AI-related risks throughout the AI system lifecycle in compliance with EU AI Act Article 9(1).
Integrate risk management activities into each phase of the AI system lifecycle to ensure risks are identified, assessed, and managed at appropriate points, maintaining risk traceability from design through decommissioning.
Maintain a comprehensive AI risk register for all AI systems to enable effective risk oversight, tracking, and reporting, ensuring all identified risks are properly documented, assessed, and managed.
Systematically identify known and reasonably foreseeable risks related to health, safety, and fundamental rights for each AI system to ensure comprehensive risk coverage and enable appropriate risk treatment decisions.
Analyze and evaluate identified risks using a consistent risk assessment methodology to determine risk levels, prioritize risks for treatment, and enable informed risk management decisions.
Specifically assess risks of bias and discrimination per EU AI Act Article 10(5) to prevent discriminatory outcomes and ensure fairness across all protected characteristics.
Select and implement appropriate risk treatment strategies for each identified risk to reduce risks to acceptable levels, ensuring critical and high risks are properly mitigated before deployment.
Assess residual risk after control implementation and determine acceptability to ensure no unacceptable risks remain before deployment, protecting health, safety, and fundamental rights.
Implement continuous monitoring of AI risks throughout the operational lifecycle to detect risk indicator threshold breaches, identify emerging risks, and enable timely risk response.
Conduct periodic comprehensive risk reviews to ensure the risk register remains current, controls remain effective, and emerging risks are identified and addressed.
Assess the effectiveness of the AI risk management system annually to identify improvement opportunities, ensure continuous improvement, and demonstrate risk management maturity.
Report AI risks to appropriate stakeholders per defined frequency and escalation criteria to ensure timely risk awareness, enable informed decision-making, and support regulatory compliance.
Communicate AI risks to relevant stakeholders including deployers, users, and affected persons to ensure transparency, enable informed use, and comply with EU AI Act transparency obligations.
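A consistent assessment methodology, as the analysis and evaluation control above requires, is often implemented as a likelihood × severity matrix. The 5-point scales and band thresholds below are illustrative organizational choices, not values mandated by the Act.

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and 1-5 severity scores to a risk band.

    The multiplicative score and the band thresholds are illustrative;
    each organization defines its own assessment methodology.
    """
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be between 1 and 5")
    score = likelihood * severity
    if score >= 20:
        return "critical"
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

Critical and high results then feed the treatment and residual-risk controls above, since risks in those bands must be mitigated before deployment.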
Data Governance
Data and data governance controls
Define specific data quality requirements for each AI system based on intended purpose and risk level to ensure datasets meet appropriate quality standards before use in AI training, validation, and testing, in compliance with EU AI Act Article 10(2).
Assess data quality against defined requirements before use in AI training to ensure datasets meet quality thresholds and prevent quality issues from affecting AI system performance and compliance.
Continuously monitor data quality during AI system operation to detect quality degradation, identify quality issues early, and enable timely remediation to maintain AI system performance and compliance.
Assess and document that datasets are relevant to the intended purpose and to the geographical, behavioral, and functional setting in which the AI system will be used, ensuring AI systems are trained on appropriate data that reflects their actual deployment context, in compliance with EU AI Act Article 10(3).
Ensure datasets are sufficiently representative of the persons and situations the AI system will encounter to prevent bias and ensure fair treatment across all user groups and scenarios, in compliance with EU AI Act Article 10(3).
Ensure datasets are appropriate considering state of the art and available alternatives to ensure optimal dataset selection and justify dataset choices for AI system development.
Examine training, validation, and testing datasets for possible biases to identify discriminatory patterns before they are learned by AI models, preventing bias propagation to AI system outputs, in compliance with EU AI Act Article 10(4).
Implement appropriate measures to mitigate detected biases in datasets to reduce discriminatory outcomes and improve fairness across all protected characteristics.
Validate that bias mitigation achieves fairness objectives by calculating and verifying fairness metrics meet defined thresholds, ensuring AI systems treat all groups fairly across protected characteristics.
Document complete data lineage from source to AI model to enable traceability, support audits, facilitate troubleshooting, and demonstrate data governance compliance.
Establish and verify data provenance for all AI datasets to ensure legal compliance, protect intellectual property, and enable regulatory compliance, in compliance with GDPR and data protection requirements.
Maintain comprehensive data catalog for all AI datasets to enable data discovery, support data governance, facilitate compliance, and enable efficient data management.
Conduct Data Protection Impact Assessment (DPIA) for high-risk AI systems processing personal data to identify and mitigate privacy risks, ensure GDPR compliance, and protect data subject rights.
Collect and process only data necessary for AI system purpose to comply with GDPR Article 5(1)(c) data minimization principle and reduce privacy risks.
Apply appropriate anonymization or pseudonymization techniques to protect privacy while enabling AI system development, balancing privacy protection with data utility.
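The bias examination and fairness-validation controls above imply computable metrics. As one example, demographic parity difference compares positive-outcome rates across groups; the choice of metric and any pass/fail threshold (0.1 is a frequent rule of thumb) are organizational decisions, not figures from the Act.

```python
def demographic_parity_difference(outcomes: dict) -> float:
    """Largest gap in positive-outcome rate across groups.

    `outcomes` maps group name -> (positive_count, total_count). One common
    fairness metric among many; the acceptable threshold is a policy choice.
    """
    rates = [pos / total for pos, total in outcomes.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes for two groups:
groups = {"group_a": (80, 100), "group_b": (60, 100)}
gap = demographic_parity_difference(groups)  # ~0.2: would fail a 0.1 threshold
```

Computing such metrics on training, validation, and testing splits before use, and again after any mitigation, gives the before/after evidence the validation control calls for.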
Documentation
Technical documentation controls
Create complete technical documentation per Annex IV for all high-risk AI systems to demonstrate compliance with EU AI Act requirements and enable conformity assessment, in compliance with EU AI Act Article 11 and Annex IV.
Ensure technical documentation is clear, comprehensive, and accurate to enable effective use, support conformity assessment, and demonstrate compliance with EU AI Act requirements.
Use standardized templates aligned with Annex IV structure to ensure consistent, complete documentation across all high-risk AI systems and facilitate compliance verification.
Update technical documentation when changes occur to AI system to maintain accuracy and compliance throughout the AI system lifecycle, ensuring documentation reflects current system state.
Maintain comprehensive version control for all technical documentation to enable traceability, support audits, and ensure ability to retrieve historical versions for compliance and troubleshooting.
Review all technical documentation annually for currency and accuracy to ensure documentation remains current, accurate, and compliant with evolving regulations and system changes.
Store technical documentation securely with appropriate access controls to protect sensitive information, ensure availability, and comply with retention requirements per EU AI Act Article 18.
Provide appropriate access to technical documentation per Article 53 to enable authorized access while protecting sensitive information and ensuring regulatory compliance.
Ensure technical documentation is available to competent authorities upon request per Article 53 to enable regulatory oversight and demonstrate compliance.
Ensure all technical documentation is reviewed and approved before use to verify quality, accuracy, completeness, and regulatory compliance.
Logging
Logging and record-keeping controls
Implement automated logging per Article 12(1) to capture all required log elements for high-risk AI systems, enabling traceability, auditability, and regulatory compliance.
Implement robust, tamper-proof logging infrastructure with high availability to ensure reliable log capture, storage, and protection throughout the required retention period (at least 6 months per Article 19(1)).
Retain all logs for minimum 6 months per Article 19(1), and technical documentation for 10 years per Article 18(1), with secure storage, backup, and retrieval capability to support audits, investigations, and regulatory compliance.
Protect logs with encryption, access controls, and audit trails to prevent unauthorized access, modification, or deletion, ensuring log integrity and confidentiality throughout the retention period.
Enable log analysis, monitoring, and investigation support to detect anomalies, support incident response, and provide insights for continuous improvement.
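The tamper-proof logging control above can be approximated with hash chaining: each entry's hash covers the previous entry's hash, so altering any earlier record invalidates the rest of the chain. This sketch shows only the core idea; a production system would add signing, WORM storage, and replication.

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a record whose hash chains it to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any modified or reordered entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Each inference, override, or intervention event is appended as a record; periodic `verify_chain` runs, or verification on export, detect tampering within the retention period.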
Transparency
Transparency and information controls
Provide comprehensive instructions for use to deployers per Article 13(3) to ensure deployers understand AI system characteristics, capabilities, limitations, and proper use, enabling safe and effective deployment.
Keep instructions for use current and accurate by updating them when AI system changes occur, ensuring deployers always have current information about AI system capabilities, limitations, and risks.
Ensure users are aware they are interacting with AI per Article 50 to enable informed decision-making and protect user rights, preventing deception and ensuring transparency.
Design and implement effective transparency notices that are clear, prominent, accessible, and understandable to ensure users are properly informed about AI use.
Enable users to understand AI decisions by providing explanations when requested, supporting user rights to explanation and enabling informed decision-making.
Human Oversight
Human oversight controls
Design AI systems with effective human oversight measures per Articles 14(3) and 14(4) to enable humans to understand, interpret, override, and intervene in AI system operations, ensuring human control and preventing automation bias.
Ensure oversight personnel have necessary competence, training, authority, and resources per Article 14(4) to effectively perform oversight functions and make informed decisions about AI system use.
Implement and maintain effective human oversight operations throughout AI system lifecycle, ensuring oversight personnel are available, trained, and performing oversight functions effectively.
Monitor oversight effectiveness, track metrics, analyze patterns, and implement improvements to ensure human oversight remains effective and achieves its objectives.
Accuracy & Security
Accuracy, robustness, and cybersecurity controls
Define accuracy requirements based on intended purpose per Article 15(1) to ensure AI systems achieve appropriate accuracy levels for their use case, enabling safe and effective deployment.
Test AI system accuracy before deployment to verify it meets defined accuracy requirements, ensuring safe and effective operation.
Monitor accuracy in production to detect accuracy degradation, identify issues early, and enable timely corrective actions to maintain AI system performance.
Define robustness requirements per Article 15(4) to ensure AI systems are resilient to errors, faults, inconsistencies, and adversarial conditions, maintaining performance across diverse scenarios.
Test AI system robustness to verify it meets defined robustness requirements, ensuring resilience to errors, faults, and adversarial conditions.
Monitor for data drift and concept drift to detect performance degradation early and enable timely model updates or retraining to maintain AI system performance.
Define cybersecurity requirements and assess AI-specific security threats per Article 15(5) to ensure AI systems are resilient against cybersecurity threats and protect against AI-specific attack vectors.
Test AI system security to verify it meets security requirements and is resilient against cybersecurity threats, including AI-specific attack vectors.
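Drift monitoring, per the data-drift control above, is commonly implemented with a distribution-comparison statistic such as the Population Stability Index. The binning scheme and the conventional 0.1/0.25 interpretation bands are rules of thumb, not regulatory thresholds.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Bin edges come from the baseline's range. Conventional reading
    (rule of thumb): < 0.1 stable, 0.1-0.25 drifting, > 0.25 significant.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(data):
        counts = [0] * bins
        for x in data:
            i = int((x - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1
        # 0.5 smoothing keeps empty bins from producing log(0)
        return [(c or 0.5) / len(data) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Comparing a rolling window of live model inputs or scores against the training baseline flags drift early, before accuracy metrics visibly degrade, triggering the retraining decision described above.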
Quality Management
Quality management system controls
Document the QMS in a systematic and orderly manner per Article 17(1) to ensure a comprehensive quality management framework is established, maintained, and continuously improved.
Establish quality policy and measurable quality objectives to guide QMS implementation and provide direction for quality improvement.
Define clear roles and responsibilities for the QMS to ensure accountability and effective QMS implementation.
Plan and control AI system design and development per Article 17(1)(b) to ensure systematic design process with appropriate reviews, verification, and validation.
Define and document design inputs to ensure all requirements are captured, reviewed, and approved before design begins.
Define design outputs and verify they meet the design inputs to ensure the design is complete, correct, and ready for development.
Conduct systematic design reviews at appropriate stages to ensure design quality, identify issues early, and enable informed decisions.
Verify that design outputs meet design inputs to ensure design correctness and completeness before proceeding to the next phase.
Validate AI system meets user needs and intended use to ensure system is fit for purpose before deployment.
Transfer the design to development and production with appropriate controls to ensure the design is correctly implemented.
Control and document design changes per Article 17(1)(a) to ensure changes are properly assessed, approved, and implemented.
Establish comprehensive quality assurance program per Article 17(1)(c) to ensure quality throughout AI system lifecycle.
Implement comprehensive testing strategy per Article 17(1)(d) to ensure AI systems are tested before, during, and after development.
Ensure QMS effectiveness through management review and drive continuous improvement to enhance QMS and AI system quality.
Conformity Assessment
Conformity assessment controls
Prepare complete technical documentation per Annex IV for Annex VI internal control conformity assessment to ensure all required documentation is available for compliance verification.
Verify AI system compliance with all EU AI Act requirements to ensure the system meets all regulatory obligations before market placement.
Prepare conformity assessment report documenting compliance for Annex VI internal control procedure to provide evidence of conformity assessment completion.
Select and engage qualified notified body for Annex VII conformity assessment to ensure competent third-party assessment for Annex I product safety AI systems.
Prepare for notified body QMS assessment to ensure QMS is ready for third-party evaluation.
Prepare technical documentation for notified body review to ensure complete and accurate documentation is available for Annex VII assessment.
Support notified body during assessment and respond to findings to ensure successful Annex VII conformity assessment.
Prepare EU Declaration of Conformity with all required elements per Article 47 to provide formal declaration of compliance.
Review and approve EU Declaration of Conformity before issuance to ensure accuracy, completeness, and legal compliance.
Keep EU Declaration of Conformity available per Article 47(2) to ensure availability to competent authorities for 10 years.
Affix CE marking to high-risk AI system per Article 48 to indicate EU conformity.
Ensure CE marking complies with all rules per Article 48(2)–(5) to maintain regulatory compliance.
Maintain conformity throughout AI system lifecycle and reassess when substantial modifications occur to ensure ongoing compliance.
Registration
EU database registration controls
Gather and verify all required registration information per Article 49(1) to ensure complete and accurate registration data is available before submission to EU database.
Submit registration to EU database before market placement per Article 49 to ensure high-risk AI systems are registered before being placed on the market.
Update registration when changes occur to ensure registration information remains accurate and current throughout AI system lifecycle.
Post-Market Monitoring
Post-market monitoring controls
Establish post-market monitoring system per Article 72(1) to actively and systematically collect data, analyze performance in real-world conditions, identify risks and opportunities for improvement, and enable corrective actions.
Create and maintain post-market monitoring plan per Article 72(2) to define strategy, methods, and procedures for post-market monitoring.
Collect and manage performance data systematically to enable performance analysis and identify issues early.
Monitor AI system performance in real-world conditions and analyze trends to identify issues and opportunities for improvement.
Implement corrective actions based on post-market monitoring results to address performance issues and improve AI system quality.
Incident Management
Incident management and reporting controls
Detect incidents and determine whether they qualify as serious incidents per Article 73(1) to ensure serious incidents are identified promptly and reported to authorities.
Respond immediately to serious incidents and notify stakeholders per Article 73 to ensure rapid response and proper escalation.
Investigate serious incidents and prepare serious incident report per Article 73(2) to provide complete information to competent authorities.
Submit serious incident report to competent authority within 15 days per Article 73(2) to comply with regulatory reporting obligations.
Follow up on incidents and implement corrective actions to prevent recurrence and improve AI system safety.
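The reporting deadlines in the controls above can be tracked with a small helper. The day counts below reflect the commonly cited Article 73 deadlines (15 days generally, shorter for widespread infringements and deaths); verify the exact paragraphs and day counts against the current text of the Act before relying on them.

```python
from datetime import date, timedelta

# Day counts from awareness of the incident; confirm against the Act's
# current text, as noted above, before using these in production.
REPORTING_DEADLINE_DAYS = {
    "serious": 15,                 # general rule for serious incidents
    "widespread_infringement": 2,  # shorter deadline for widespread infringement
    "death": 10,                   # shorter deadline when a death is involved
}

def report_due(incident_type: str, aware_on: date) -> date:
    """Latest date by which the serious incident report must be submitted."""
    return aware_on + timedelta(days=REPORTING_DEADLINE_DAYS[incident_type])
```

Wiring this into incident triage means the classification decision in the detection control immediately yields a concrete reporting due date for escalation tracking.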
Literacy & Training
AI literacy and training controls
Develop comprehensive AI literacy training curriculum with appropriate content for different roles and competency levels to ensure all staff have necessary AI knowledge and skills.
Deliver AI literacy training using appropriate methods to ensure all staff receive required training and achieve competency.
Track training completion and monitor training effectiveness to ensure all staff complete required training and training achieves its objectives.
Assess and verify AI competency to ensure staff have appropriate knowledge and skills for their roles.
Provide ongoing development and continuous learning opportunities to ensure staff maintain and enhance AI competency.