Risk Management System (Article 9)
Learning Objectives
By the end of this chapter, you will be able to:
- Design and implement a compliant AI risk management system
- Apply the Article 9 iterative risk management process
- Identify, analyse, and evaluate AI-specific risks
- Select and implement appropriate risk mitigation measures
- Integrate AI risk management with existing enterprise frameworks
Article 9 establishes the foundational compliance requirement for high-risk AI: a comprehensive, continuous risk management system. This is not a one-time assessment but an iterative process spanning the entire AI lifecycle—from conception through decommissioning.
The Article 9 Risk Management Framework
Article 9(2) Iterative Process
The AI Act requires a continuous iterative process comprising four steps as set out in Article 9(2)(a)-(d):
| Step | Article 9(2) | Activities | Timing |
|---|---|---|---|
| (a) Identification and analysis | 9(2)(a) | Identify and analyse the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights | Pre-development, ongoing |
| (b) Estimation and evaluation | 9(2)(b) | Estimate and evaluate the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse | Pre-market, post-market |
| (c) Evaluation of post-market data | 9(2)(c) | Evaluate other risks that may emerge based on the analysis of data gathered from the post-market monitoring system referred to in Article 72 | Post-deployment, continuous |
| (d) Adoption of risk management measures | 9(2)(d) | Adopt appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a) | Throughout lifecycle |
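The four steps above form one pass of a loop that repeats across the lifecycle. A minimal Python sketch of a single pass follows; every function name and body is a placeholder assumption (the Act prescribes the steps, not any implementation):

```python
# Illustrative pass through the Article 9(2)(a)-(d) cycle.
# All function names and bodies are placeholder assumptions.

def identify_and_analyse(system: dict) -> list:
    # (a) known and reasonably foreseeable risks to health, safety, rights
    return [f"{h} risk" for h in system.get("hazards", [])]

def estimate_and_evaluate(risks: list) -> list:
    # (b) risks under intended purpose and reasonably foreseeable misuse
    return [(r, ctx) for r in risks for ctx in ("intended-use", "misuse")]

def evaluate_post_market(data: list) -> list:
    # (c) other risks emerging from Article 72 post-market monitoring data
    return [(d, "post-market") for d in data]

def adopt_measures(evaluated: list) -> dict:
    # (d) appropriate and targeted measures for the identified risks
    return {risk: f"targeted measure ({context})" for risk, context in evaluated}

def run_cycle(system: dict, post_market_data: list) -> dict:
    risks = identify_and_analyse(system)
    evaluated = estimate_and_evaluate(risks) + evaluate_post_market(post_market_data)
    return adopt_measures(evaluated)  # the cycle then repeats across the lifecycle
```

In practice each step would produce documented artefacts (risk register entries, evaluation records) rather than in-memory lists; the sketch shows only the ordering and the post-market feedback loop.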
Lifecycle Coverage Requirement
Article 9(2) mandates that risk management be planned and run "throughout the entire lifecycle":
Pre-Development Phase:
- Intended use and context analysis
- Initial risk identification
- Risk management strategy design
Development Phase:
- Training data risk assessment
- Model behaviour analysis
- Bias and fairness evaluation
Testing Phase:
- Validation and verification risks
- Performance boundary testing
- Foreseeable misuse scenarios
Deployment Phase:
- Real-world risk monitoring
- Human oversight integration
- Incident response planning
Post-Market Phase:
- Continuous performance monitoring
- Incident data analysis
- Risk re-evaluation cycles
Types of Risks to Address
Health and Safety Risks (Article 9(2)(a))
Physical and mental health impacts including:
- Direct harm from AI decisions (e.g., medical diagnosis errors)
- Indirect harm from system failures (e.g., autonomous vehicle accidents)
- Psychological impacts (e.g., unfair denial of services)
Fundamental Rights Risks (Article 9(2)(a))
Impacts on EU Charter rights including:
- Non-discrimination (Article 21): Algorithmic bias
- Privacy (Article 7): Excessive data collection
- Fair trial (Article 47): Opaque decision-making
- Human dignity (Article 1): Dehumanising treatment
- Freedom of expression (Article 11): Content moderation bias
Contextual Risk Factors
The risk management system must consider a range of factors drawn from multiple paragraphs of Article 9. Article 9(4) requires that when assessing risk management measures, due consideration shall be given to the effects and possible interactions resulting from the combined application of the requirements set out in Chapter III, Section 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. Additionally, Article 9(9) requires that risk management measures give particular consideration to whether the high-risk AI system is likely to be accessed by or have an impact on persons under the age of 18 or, as appropriate, other vulnerable groups.
| Factor | Source | Risk Consideration |
|---|---|---|
| Intended purpose | Art. 9(2)(a)-(b) | What harm could occur during normal use? |
| Foreseeable misuse | Art. 9(2)(b) | What if users misuse the system? |
| System interactions | Art. 9(4) | Effects and possible interactions from combined requirements |
| Vulnerable groups and persons under 18 | Art. 9(9) | Are children, disabled persons, or the elderly especially affected? |
| Cumulative effects | Art. 9(2) | What happens with repeated decisions over time? |
| Bias potential | Art. 10(2)(f)-(g) | Could training data or design introduce discrimination? |
| Operational environment | Art. 9(2)(a)-(b) | What contextual factors affect risk? |
Compliance Note
You must assess risks for **both** intended use **and** reasonably foreseeable misuse. Many compliance failures stem from ignoring misuse scenarios.
Risk Identification Methods
Systematic Risk Identification Techniques
| Method | Description | When to Use |
|---|---|---|
| HAZOP Analysis | Structured deviation analysis | Complex AI systems |
| FMEA | Failure modes and effects analysis | Component-level risks |
| Threat Modelling | Adversarial scenario analysis | Security risks |
| Bias Audits | Fairness assessment across groups | Discrimination risks |
| Red Teaming | Adversarial testing by experts | Pre-deployment |
| Stakeholder Consultation | Input from affected groups | Rights impact assessment |
AI-Specific Risk Categories
Technical Risks:
- Model accuracy degradation
- Data drift and distribution shift
- Adversarial vulnerabilities
- Unexplainable outputs
Operational Risks:
- Human oversight failures
- Integration errors
- Misinterpretation of outputs
- Automation bias
Ethical Risks:
- Discriminatory outcomes
- Privacy violations
- Lack of transparency
- Accountability gaps
Risk Evaluation Criteria
Risk Prioritisation Matrix
Evaluate each identified risk using severity and likelihood:
| Likelihood / Severity | Negligible | Minor | Moderate | Major | Critical |
|---|---|---|---|---|---|
| Almost Certain | Medium | High | High | Critical | Critical |
| Likely | Low | Medium | High | High | Critical |
| Possible | Low | Medium | Medium | High | High |
| Unlikely | Low | Low | Medium | Medium | High |
| Rare | Low | Low | Low | Medium | Medium |
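The matrix above can be encoded as a simple lookup; a minimal sketch, where the enum and function names (`Likelihood`, `Severity`, `rate_risk`) are illustrative assumptions rather than terms from the Act:

```python
# Hypothetical encoding of the risk prioritisation matrix above.
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 0
    MINOR = 1
    MODERATE = 2
    MAJOR = 3
    CRITICAL = 4

class Likelihood(IntEnum):
    RARE = 0
    UNLIKELY = 1
    POSSIBLE = 2
    LIKELY = 3
    ALMOST_CERTAIN = 4

# Rows ordered Rare -> Almost Certain, columns Negligible -> Critical,
# transcribed from the table above.
_MATRIX = [
    ["Low", "Low", "Low", "Medium", "Medium"],           # Rare
    ["Low", "Low", "Medium", "Medium", "High"],          # Unlikely
    ["Low", "Medium", "Medium", "High", "High"],         # Possible
    ["Low", "Medium", "High", "High", "Critical"],       # Likely
    ["Medium", "High", "High", "Critical", "Critical"],  # Almost Certain
]

def rate_risk(likelihood: Likelihood, severity: Severity) -> str:
    """Return the qualitative rating for a likelihood/severity pair."""
    return _MATRIX[likelihood][severity]
```

Encoding the matrix once and looking ratings up keeps evaluations consistent across assessors and makes the criteria auditable, which supports the documentation requirements discussed later in this chapter.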
Acceptability Thresholds
Article 9(5) requires that the residual risk associated with each hazard, and the overall residual risk, be judged acceptable; relevant considerations include:
- State of the art in risk mitigation
- Benefits of the AI system
- Generally acknowledged standards
- Expectations of intended users
Risk Mitigation Measures
Mitigation Strategy Hierarchy
Apply measures in priority order, consistent with the design-first approach of Article 9(5):
1. Elimination (preferred)
- Remove the risk source entirely
- Redesign to avoid risk scenarios
2. Reduction
- Technical safeguards
- Operational controls
- User training
3. Transfer
- Human-in-the-loop for high-stakes decisions
- Insurance coverage
- Contractual risk allocation
4. Acceptance (last resort)
- Residual risk deemed acceptable
- Must be documented and justified
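The priority ordering above can be sketched as a walk down the hierarchy that stops at the first applicable strategy; the predicates and names below are illustrative assumptions, not prescribed measures:

```python
# Sketch of applying the mitigation hierarchy in priority order:
# elimination > reduction > transfer, with acceptance as the last resort.
from typing import Callable, Optional, List, Tuple

Strategy = Tuple[str, Callable[[str], Optional[str]]]

def select_mitigation(risk: str, hierarchy: List[Strategy]) -> Tuple[str, str]:
    """Return (strategy_name, measure) for the first applicable strategy."""
    for name, applies in hierarchy:
        measure = applies(risk)
        if measure is not None:
            return name, measure
    # Acceptance is the fallback: it must be documented and justified.
    return "acceptance", f"Residual risk '{risk}' documented and justified"

# Hypothetical predicates deciding whether each strategy applies to a risk.
hierarchy: List[Strategy] = [
    ("elimination", lambda r: "Redesign to remove risk source" if "removable" in r else None),
    ("reduction",   lambda r: "Add technical safeguards" if "accuracy" in r else None),
    ("transfer",    lambda r: "Human-in-the-loop review" if "high-stakes" in r else None),
]
```

The point of the structure is that acceptance is never chosen while a higher-priority strategy is available, mirroring the "last resort" rule above.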
Common AI Risk Mitigation Measures
| Risk Type | Mitigation Measures |
|---|---|
| Accuracy risks | Validation testing, confidence thresholds, fallback rules |
| Bias risks | Diverse training data, bias testing, fairness constraints |
| Robustness risks | Adversarial training, input validation, anomaly detection |
| Transparency risks | Explainability methods, audit trails, user disclosure |
| Oversight risks | Human review triggers, override mechanisms, escalation paths |
Documentation Requirements
Risk Management Documentation
Your risk management system must document:
| Element | Required Content |
|---|---|
| Risk register | All identified risks with classifications |
| Assessment methodology | How risks were identified and evaluated |
| Mitigation measures | Actions taken and their effectiveness |
| Residual risks | Remaining risks after mitigation |
| Review schedule | When re-assessments occur |
| Responsibilities | Who is accountable for risk management |
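A risk register entry can mirror the documentation elements in the table above; a minimal sketch, where all field names are assumptions rather than terms mandated by the Act:

```python
# Illustrative risk-register entry covering the required documentation elements.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class RiskRegisterEntry:
    risk_id: str                        # unique identifier in the risk register
    description: str                    # identified risk and its classification
    methodology: str                    # how the risk was identified and evaluated
    mitigation_measures: List[str] = field(default_factory=list)  # actions taken
    residual_risk: str = ""             # remaining risk after mitigation
    next_review: Optional[date] = None  # review schedule
    owner: str = ""                     # who is accountable for this risk

# Hypothetical example entry.
entry = RiskRegisterEntry(
    risk_id="R-001",
    description="Algorithmic bias against protected groups (fundamental rights)",
    methodology="Bias audit across demographic groups",
    mitigation_measures=["Diverse training data", "Fairness constraints"],
    residual_risk="Low residual disparity, documented and justified",
    next_review=date(2026, 1, 1),
    owner="AI Risk Officer",
)
```

Keeping the register as structured records rather than free text makes it straightforward to export into the technical documentation required under Article 11 and to drive the review schedule.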
Expert Insight
Integrate AI risk management with your existing enterprise risk management (ERM) framework. This ensures consistency, leverages existing governance, and avoids duplication.
Integration with Other Requirements
Relationship to Other Article 9 Elements
| Requirement | Integration Point |
|---|---|
| Data Governance (Art. 10) | Data quality risks feed into risk assessment |
| Technical Documentation (Art. 11) | Risk documentation is mandatory content |
| Logging (Art. 12) | Logs enable post-market risk monitoring |
| Human Oversight (Art. 14) | Oversight is a key risk mitigation measure |
| Post-Market Monitoring (Art. 72) | Continuous risk re-evaluation |
Risk Management Compliance Checklist
- Risk management system established and documented
- Lifecycle-spanning process implemented
- All risk types identified (health, safety, rights)
- Foreseeable misuse scenarios assessed
- Vulnerable group impacts considered
- Risk evaluation criteria defined
- Mitigation measures implemented
- Residual risks documented and justified
- Regular review cycle established
- Integration with ERM complete
What You Learned
Key concepts from this chapter:
- Risk management must be **continuous and iterative** throughout the AI lifecycle
- Address risks to **health, safety, AND fundamental rights**—not just technical risks
- Consider **both intended use and foreseeable misuse** scenarios
- Apply the **mitigation hierarchy**: eliminate > reduce > transfer > accept
- **Document everything**—risk management is a core element of technical documentation