
Risk Management System

Article 9 requirements for continuous risk management.

Risk Management System (Article 9)

Learning Objectives

By the end of this chapter, you will be able to:

  • Design and implement a compliant AI risk management system
  • Apply the Article 9 iterative risk management process
  • Identify, analyse, and evaluate AI-specific risks
  • Select and implement appropriate risk mitigation measures
  • Integrate AI risk management with existing enterprise frameworks

Article 9 establishes the foundational compliance requirement for high-risk AI: a comprehensive, continuous risk management system. This is not a one-time assessment but an iterative process spanning the entire AI lifecycle—from conception through decommissioning.

The Article 9 Risk Management Framework

Article 9(2) Iterative Process

The AI Act requires a continuous, iterative process comprising the four steps set out in Article 9(2)(a)-(d) (a minimal code sketch of the loop follows the table):

| Step | Article 9(2) | Activities | Timing |
|---|---|---|---|
| (a) Identification and analysis | 9(2)(a) | Identify and analyse the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights | Pre-development, ongoing |
| (b) Estimation and evaluation | 9(2)(b) | Estimate and evaluate the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse | Pre-market, post-market |
| (c) Evaluation of post-market data | 9(2)(c) | Evaluate other risks that may emerge based on the analysis of data gathered from the post-market monitoring system referred to in Article 72 | Post-deployment, continuous |
| (d) Adoption of risk management measures | 9(2)(d) | Adopt appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a) | Throughout lifecycle |
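In compliance tooling, the four steps can be run as one repeatable function that is re-executed at each scheduled review and whenever new post-market data arrives. The sketch below is a minimal, hypothetical Python outline; the class and function names are illustrative assumptions, and nothing in Article 9 prescribes this structure.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in the risk register (illustrative fields)."""
    description: str
    affected_interests: list[str]              # e.g. ["health", "fundamental rights"]
    likelihood: str = "Possible"
    severity: str = "Moderate"
    mitigations: list[str] = field(default_factory=list)

def identify_and_analyse(hazard_sources: list[str]) -> list[Risk]:
    """Step (a): turn known and reasonably foreseeable hazards into register entries."""
    return [Risk(s, ["health", "safety", "fundamental rights"]) for s in hazard_sources]

def estimate_and_evaluate(risks: list[Risk]) -> list[Risk]:
    """Step (b): rate each risk for intended use and reasonably foreseeable misuse."""
    for risk in risks:
        risk.likelihood, risk.severity = "Possible", "Major"   # placeholder ratings
    return risks

def evaluate_post_market(monitoring_records: list[str]) -> list[Risk]:
    """Step (c): derive newly emerging risks from Article 72 monitoring data."""
    return [Risk(f"Emerging risk: {record}", ["safety"]) for record in monitoring_records]

def adopt_measures(risks: list[Risk]) -> list[Risk]:
    """Step (d): attach appropriate and targeted measures to each identified risk."""
    for risk in risks:
        risk.mitigations.append("human review trigger")        # placeholder measure
    return risks

def run_iteration(hazard_sources: list[str], monitoring_records: list[str]) -> list[Risk]:
    """One pass of the Article 9(2)(a)-(d) cycle; re-run at each scheduled review."""
    risks = estimate_and_evaluate(identify_and_analyse(hazard_sources))
    risks += evaluate_post_market(monitoring_records)
    return adopt_measures(risks)
```

A scheduler or CI job could call run_iteration at each review cycle, feeding in the current hazard list and the latest monitoring records.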

Lifecycle Coverage Requirement

Article 9(2) requires the risk management system to be "planned and run throughout the entire lifecycle" of the high-risk AI system, which means risk activities in every phase (a small coverage-tracking sketch follows the phase lists):

Pre-Development Phase:

  • Intended use and context analysis
  • Initial risk identification
  • Risk management strategy design

Development Phase:

  • Training data risk assessment
  • Model behaviour analysis
  • Bias and fairness evaluation

Testing Phase:

  • Validation and verification risks
  • Performance boundary testing
  • Foreseeable misuse scenarios

Deployment Phase:

  • Real-world risk monitoring
  • Human oversight integration
  • Incident response planning

Post-Market Phase:

  • Continuous performance monitoring
  • Incident data analysis
  • Risk re-evaluation cycles
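If lifecycle coverage is tracked in tooling rather than only in documents, one hedged option is a phase-to-activities map that a review script checks for gaps. The structure below simply mirrors the lists above; the names and the helper function are illustrative assumptions, not a format the Act prescribes.

```python
# Illustrative lifecycle coverage map; phases and activities mirror the lists above.
LIFECYCLE_RISK_ACTIVITIES = {
    "pre-development": ["intended use and context analysis",
                        "initial risk identification",
                        "risk management strategy design"],
    "development":     ["training data risk assessment",
                        "model behaviour analysis",
                        "bias and fairness evaluation"],
    "testing":         ["validation and verification risks",
                        "performance boundary testing",
                        "foreseeable misuse scenarios"],
    "deployment":      ["real-world risk monitoring",
                        "human oversight integration",
                        "incident response planning"],
    "post-market":     ["continuous performance monitoring",
                        "incident data analysis",
                        "risk re-evaluation cycles"],
}

def uncovered_phases(completed: dict[str, list[str]]) -> list[str]:
    """Return phases whose planned risk activities are not all marked complete."""
    return [phase for phase, activities in LIFECYCLE_RISK_ACTIVITIES.items()
            if not set(activities) <= set(completed.get(phase, []))]
```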

Types of Risks to Address

Health and Safety Risks (Article 9(2)(a))

Physical and mental health impacts including:

  • Direct harm from AI decisions (e.g., medical diagnosis errors)
  • Indirect harm from system failures (e.g., autonomous vehicle accidents)
  • Psychological impacts (e.g., unfair denial of services)

Fundamental Rights Risks (Article 9(2)(a))

Impacts on EU Charter rights including:

  • Non-discrimination (Article 21): Algorithmic bias
  • Privacy (Article 7): Excessive data collection
  • Fair trial (Article 47): Opaque decision-making
  • Human dignity (Article 1): Dehumanising treatment
  • Freedom of expression (Article 11): Content moderation bias

Contextual Risk Factors

The risk management system must consider a range of factors drawn from multiple paragraphs of Article 9. Under Article 9(4), risk management measures must give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in Chapter III, Section 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. Article 9(9) additionally requires particular consideration of whether the high-risk AI system is likely to be accessed by, or have an impact on, persons under the age of 18 or, as appropriate, other vulnerable groups.

| Factor | Source | Risk Consideration |
|---|---|---|
| Intended purpose | Art. 9(2)(a)-(b) | What harm could occur during normal use? |
| Foreseeable misuse | Art. 9(2)(b) | What if users misuse the system? |
| System interactions | Art. 9(4) | What effects and interactions arise from the combined requirements? |
| Vulnerable groups and persons under 18 | Art. 9(9) | Are children, disabled, or elderly persons especially affected? |
| Cumulative effects | Art. 9(2) | What happens with repeated decisions over time? |
| Bias potential | Art. 10(2)(f)-(g) | Could training data or design introduce discrimination? |
| Operational environment | Art. 9(2)(a)-(b) | What contextual factors affect risk? |

Compliance Note

You must assess risks for BOTH intended use AND reasonably foreseeable misuse. Many compliance failures stem from ignoring misuse scenarios.
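The factor table above can double as a structured checklist in assessment tooling, so that no factor (including misuse) is silently skipped. The mapping below is an illustrative sketch; the keys and the helper are assumptions, not terms from the Act.

```python
# Illustrative checklist keyed by contextual factor; sources mirror the table above.
CONTEXTUAL_FACTORS = {
    "intended purpose":        ("Art. 9(2)(a)-(b)",  "What harm could occur during normal use?"),
    "foreseeable misuse":      ("Art. 9(2)(b)",      "What if users misuse the system?"),
    "system interactions":     ("Art. 9(4)",         "How do the combined requirements interact?"),
    "vulnerable groups":       ("Art. 9(9)",         "Are persons under 18 or other vulnerable groups affected?"),
    "cumulative effects":      ("Art. 9(2)",         "What happens with repeated decisions over time?"),
    "bias potential":          ("Art. 10(2)(f)-(g)", "Could training data or design introduce discrimination?"),
    "operational environment": ("Art. 9(2)(a)-(b)",  "What contextual factors affect risk?"),
}

def unanswered_factors(answers: dict[str, str]) -> list[str]:
    """Factors with no recorded answer in the current risk assessment."""
    return [factor for factor in CONTEXTUAL_FACTORS if not answers.get(factor)]
```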


Risk Identification Methods

Systematic Risk Identification Techniques

| Method | Description | When to Use |
|---|---|---|
| HAZOP Analysis | Structured deviation analysis | Complex AI systems |
| FMEA | Failure modes and effects analysis | Component-level risks |
| Threat Modelling | Adversarial scenario analysis | Security risks |
| Bias Audits | Fairness assessment across groups | Discrimination risks |
| Red Teaming | Adversarial testing by experts | Pre-deployment |
| Stakeholder Consultation | Input from affected groups | Rights impact assessment |

AI-Specific Risk Categories

Technical Risks:

  • Model accuracy degradation
  • Data drift and distribution shift
  • Adversarial vulnerabilities
  • Unexplainable outputs

Operational Risks:

  • Human oversight failures
  • Integration errors
  • Misinterpretation of outputs
  • Automation bias

Ethical Risks:

  • Discriminatory outcomes
  • Privacy violations
  • Lack of transparency
  • Accountability gaps

Risk Evaluation Criteria

Risk Prioritisation Matrix

Evaluate each identified risk using severity and likelihood (a lookup sketch of the matrix follows):

| Likelihood / Severity | Negligible | Minor | Moderate | Major | Critical |
|---|---|---|---|---|---|
| Almost Certain | Medium | High | High | Critical | Critical |
| Likely | Low | Medium | High | High | Critical |
| Possible | Low | Medium | Medium | High | High |
| Unlikely | Low | Low | Medium | Medium | High |
| Rare | Low | Low | Low | Medium | Medium |
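To make the matrix machine-checkable, one option is a direct lookup table that reproduces the ratings above; the sketch below is illustrative and does not replace the qualitative judgement the evaluation still requires.

```python
# Direct lookup of the prioritisation matrix above (likelihood x severity -> rating).
SEVERITIES = ["Negligible", "Minor", "Moderate", "Major", "Critical"]

RISK_MATRIX = {
    "Almost Certain": ["Medium", "High",   "High",   "Critical", "Critical"],
    "Likely":         ["Low",    "Medium", "High",   "High",     "Critical"],
    "Possible":       ["Low",    "Medium", "Medium", "High",     "High"],
    "Unlikely":       ["Low",    "Low",    "Medium", "Medium",   "High"],
    "Rare":           ["Low",    "Low",    "Low",    "Medium",   "Medium"],
}

def risk_rating(likelihood: str, severity: str) -> str:
    """Return the matrix rating for a likelihood/severity pair."""
    return RISK_MATRIX[likelihood][SEVERITIES.index(severity)]

assert risk_rating("Possible", "Major") == "High"
assert risk_rating("Rare", "Critical") == "Medium"
```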

Acceptability Thresholds

Article 9(5) requires that residual risk be acceptable considering:

  • State of the art in risk mitigation
  • Benefits of the AI system
  • Generally acknowledged standards
  • Expectations of intended users

Risk Mitigation Measures

Mitigation Strategy Hierarchy

Apply measures in priority order (Article 9(5)); a short selection sketch follows the four strategies:

1. Elimination (preferred)

  • Remove the risk source entirely
  • Redesign to avoid risk scenarios

2. Reduction

  • Technical safeguards
  • Operational controls
  • User training

3. Transfer

  • Human-in-the-loop for high-stakes decisions
  • Insurance coverage
  • Contractual risk allocation

4. Acceptance (last resort)

  • Residual risk deemed acceptable
  • Must be documented and justified
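One hedged way to encode the hierarchy in tooling is to walk the four strategies in priority order and record which higher-priority options were rejected, so that acceptance is always documented as a last resort. The helper below is an illustrative assumption, not an Article 9 requirement.

```python
# Illustrative walk of the mitigation hierarchy: eliminate > reduce > transfer > accept.
HIERARCHY = ["eliminate", "reduce", "transfer", "accept"]

def select_strategy(feasible: set[str]) -> tuple[str, list[str]]:
    """Pick the highest-priority feasible strategy and list the ones skipped.

    `feasible` holds the strategies judged workable for this risk; "accept" is
    always reachable but must then be documented and justified as a last resort.
    """
    skipped: list[str] = []
    for strategy in HIERARCHY:
        if strategy in feasible or strategy == "accept":
            return strategy, skipped
        skipped.append(strategy)
    raise AssertionError("unreachable: 'accept' always terminates the loop")

# Example: only reduction and transfer are feasible -> choose reduction,
# recording that elimination was considered and rejected.
assert select_strategy({"reduce", "transfer"}) == ("reduce", ["eliminate"])
```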

Common AI Risk Mitigation Measures

| Risk Type | Mitigation Measures |
|---|---|
| Accuracy risks | Validation testing, confidence thresholds, fallback rules |
| Bias risks | Diverse training data, bias testing, fairness constraints |
| Robustness risks | Adversarial training, input validation, anomaly detection |
| Transparency risks | Explainability methods, audit trails, user disclosure |
| Oversight risks | Human review triggers, override mechanisms, escalation paths |

Documentation Requirements

Risk Management Documentation

Your risk management system must document the following (an illustrative register-entry sketch follows the table):

| Element | Required Content |
|---|---|
| Risk register | All identified risks with classifications |
| Assessment methodology | How risks were identified and evaluated |
| Mitigation measures | Actions taken and their effectiveness |
| Residual risks | Remaining risks after mitigation |
| Review schedule | When re-assessments occur |
| Responsibilities | Who is accountable for risk management |
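If the register is kept in structured form, a single entry can carry all six documentation elements above. The dataclass below is a hedged sketch; the field names are illustrative assumptions, not mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    """Illustrative register entry carrying the documentation elements above."""
    risk_id: str
    description: str
    classification: str                       # e.g. "fundamental rights / non-discrimination"
    assessment_methodology: str               # how the risk was identified and evaluated
    mitigation_measures: list[str] = field(default_factory=list)
    mitigation_effectiveness: str = ""        # evidence that the measures work
    residual_risk: str = ""                   # rating remaining after mitigation
    residual_risk_justification: str = ""     # why the residual risk is acceptable
    next_review: date = field(default_factory=date.today)   # review schedule
    risk_owner: str = ""                      # who is accountable
```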

Expert Insight

Integrate AI risk management with your existing enterprise risk management (ERM) framework. This ensures consistency, leverages existing governance, and avoids duplication.


Integration with Other Requirements

Relationship to Other High-Risk AI Requirements

| Requirement | Integration Point |
|---|---|
| Data Governance (Art. 10) | Data quality risks feed into risk assessment |
| Technical Documentation (Art. 11) | Risk documentation is mandatory content |
| Logging (Art. 12) | Logs enable post-market risk monitoring |
| Human Oversight (Art. 14) | Oversight is a key risk mitigation measure |
| Post-Market Monitoring (Art. 72) | Continuous risk re-evaluation |

Risk Management Compliance Checklist

  • Risk management system established and documented
  • Lifecycle-spanning process implemented
  • All risk types identified (health, safety, rights)
  • Foreseeable misuse scenarios assessed
  • Vulnerable group impacts considered
  • Risk evaluation criteria defined
  • Mitigation measures implemented
  • Residual risks documented and justified
  • Regular review cycle established
  • Integration with ERM complete

What You Learned

Key concepts from this chapter

Risk management must be **continuous and iterative** throughout the AI lifecycle

Address risks to **health, safety, AND fundamental rights**—not just technical risks

Consider **both intended use and foreseeable misuse** scenarios

Apply **mitigation hierarchy**: eliminate > reduce > transfer > accept

**Document everything**—risk management is a core element of technical documentation
