Risk Classification Framework
Understanding the four-tier risk classification system.
Learning Objectives
By the end of this chapter, you will be able to:
- Apply the AI Act's risk classification methodology to any AI system
- Distinguish between the two pathways to high-risk classification (Article 6)
- Navigate Annex I (product safety) and Annex III (standalone) high-risk categories
- Understand the "filter" exception that can downgrade Annex III systems
- Conduct a preliminary risk classification assessment
The AI Act's risk-based approach is its regulatory cornerstone. Classification determines whether your AI system is prohibited, subject to extensive requirements, requires transparency only, or faces no mandatory obligations. This chapter provides a systematic methodology for classification.
The Risk Classification Methodology
Classification follows a sequential assessment through four risk tiers:
Risk Classification Decision Flow (overview): Is it a prohibited practice? → Listed in Annex III categories? → Requires transparency? → No specific requirements (minimal risk).
Step 1: Is it Prohibited? (Article 5)
First, determine if the AI practice falls under the eight prohibited categories. If yes, the AI cannot be legally deployed in the EU—stop here.
Step 2: Is it High-Risk? (Article 6)
If not prohibited, assess whether the AI meets either high-risk pathway:
- Pathway A: Safety component in Annex I product requiring third-party conformity assessment
- Pathway B: Standalone AI in an Annex III use case (subject to "filter" exception)
Step 3: Does it Require Transparency? (Article 50)
If not high-risk, check transparency triggers:
- Direct human interaction (chatbots)
- Synthetic content generation (deepfakes)
- Emotion recognition or biometric categorisation (permitted contexts)
Step 4: Minimal Risk
If none of the above apply, the system is minimal risk with no mandatory requirements.
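The four-step sequence above can be sketched as a decision function. This is a didactic sketch, not legal advice: each boolean parameter stands in for a legal test that must be assessed on its own terms, and all names are placeholders introduced here for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH_RISK = "high-risk (Art. 6)"
    LIMITED = "limited risk (Art. 50)"
    MINIMAL = "minimal risk"

def classify(is_prohibited: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             filter_exception_applies: bool,
             transparency_trigger: bool) -> RiskTier:
    """Sequential four-step assessment from the chapter.

    Each flag represents the outcome of the corresponding legal
    analysis, which must be performed and documented separately.
    """
    if is_prohibited:                     # Step 1: Art. 5
        return RiskTier.PROHIBITED
    if annex_i_safety_component:          # Step 2, Pathway A: Annex I
        return RiskTier.HIGH_RISK
    if annex_iii_use_case and not filter_exception_applies:
        return RiskTier.HIGH_RISK        # Step 2, Pathway B: Annex III
    if transparency_trigger:              # Step 3: Art. 50
        return RiskTier.LIMITED
    return RiskTier.MINIMAL              # Step 4: minimal risk
```

Note that the ordering matters: the prohibition check always comes first, and the Annex III pathway is only reached if the system is not already high-risk under Annex I.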
Classification Decision Flowchart
Is it a prohibited practice (Art. 5)? (yes: cannot deploy) → Is it a safety component in an Annex I product? (yes: high-risk) → Is it an Annex III use case? (yes: does the filter exception apply?) → Does Art. 50 transparency apply? (yes: limited risk; no: minimal risk)
High-Risk Pathway A: Annex I Product Safety (Article 6(1))
AI systems are high-risk under Article 6(1) if they meet BOTH conditions:
- The AI is a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I
- The product requires third-party conformity assessment under that legislation
Annex I Sectors (Product Safety Legislation)
| Sector | EU Legislation | Example AI Applications |
|---|---|---|
| Machinery | Directive 2006/42/EC (Machinery Directive; note: Regulation (EU) 2023/1230 replaces this Directive from January 2027 — Annex I amendment expected) | Industrial robots, autonomous vehicles |
| Toys | Directive 2009/48/EC | AI-enabled interactive toys |
| Recreational Craft | Directive 2013/53/EU | Autonomous navigation systems |
| Lifts | Directive 2014/33/EU | AI-controlled elevator systems |
| Equipment in Explosive Atmospheres | Directive 2014/34/EU | Predictive maintenance AI |
| Radio Equipment | Directive 2014/53/EU | AI in wireless devices |
| Pressure Equipment | Directive 2014/68/EU | AI monitoring systems |
| Cableways | Regulation (EU) 2016/424 | Autonomous operation AI |
| Personal Protective Equipment | Regulation (EU) 2016/425 | Smart safety equipment |
| Gas Appliances | Regulation (EU) 2016/426 | AI combustion control |
| Medical Devices | Regulation (EU) 2017/745 | AI diagnostic software |
| In-Vitro Diagnostics | Regulation (EU) 2017/746 | AI analysis systems |
| Civil Aviation | Regulation (EU) 2018/1139 | Autopilot, air traffic AI |
| Motor Vehicles | Regulation (EU) 2019/2144 | ADAS, autonomous driving |
| Agricultural Vehicles | Regulation (EU) No 167/2013 | Autonomous tractors |
| Rail Systems | Directive (EU) 2016/797 | Train control AI |
Expert Insight
The Annex I pathway primarily captures AI embedded in physical products already subject to EU safety regulation. The AI Act adds AI-specific requirements on top of existing product safety obligations.
High-Risk Pathway B: Annex III Use Cases (Article 6(2))
AI systems are high-risk under Article 6(2) if they fall within the use cases enumerated in Annex III—regardless of the product in which they are deployed.
Annex III: The Eight High-Risk Domains
Section 1: Biometrics (where permitted)
- Remote biometric identification systems
- Biometric categorisation systems
- Emotion recognition systems (non-prohibited contexts)
Section 2: Critical Infrastructure
- AI safety components in management/operation of:
- Road traffic
- Water, gas, heating, electricity supply
- Digital infrastructure
Section 3: Education and Vocational Training
- AI determining access to education institutions
- AI evaluating learning outcomes
- AI assessing appropriate education level
- AI monitoring prohibited behaviour during tests
Section 4: Employment, Workers Management, Self-Employment Access
- Recruitment and selection (CV screening, interviews)
- Promotion, termination, task allocation decisions
- Performance and behaviour monitoring
Section 5: Access to Essential Services
- Evaluation of eligibility for essential public assistance benefits and services, including healthcare (by public authorities)
- Credit worthiness evaluation (individuals)
- Life and health insurance risk assessment (individuals)
- Emergency services dispatch prioritisation
Section 6: Law Enforcement (where permitted)
- Victim risk assessment (risk of becoming victim of criminal offences)
- Polygraphs and similar tools
- Evidence reliability assessment
- Risk of (re-)offending assessment (not based solely on profiling)
Section 7: Migration, Asylum, Border Control
- Polygraphs and similar tools
- Immigration/asylum/visa application risk assessment
- Document authenticity verification
- Visa/permit/complaint examination assistance
Section 8: Administration of Justice and Democracy
- Judicial fact and law research assistance
- AI intended to influence the outcome of an election or referendum, or voting behaviour
The "Filter" Exception (Article 6(3))
A critical nuance: Annex III AI systems are NOT automatically high-risk. Article 6(3) provides a "filter" exception:
An Annex III AI system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, taking into account:
| Filter Criterion | Meaning |
|---|---|
| Narrow procedural task | The system performs a narrow, specific procedural task |
| Improves the result of a prior human activity | Enhances a previously completed human activity rather than replacing the human decision |
| Preparatory task | Performs a preparatory task for an assessment; does not make the decision itself |
| Pattern detection only | Detects decision-making patterns, or deviations from prior patterns, and is not meant to replace or influence the previously completed human assessment without proper human review |
| No significant risk of harm | Overarching threshold: the system does not pose a significant risk of harm to health, safety, or fundamental rights; the criteria above are assessed under this general test |
⚠️ Compliance Warning: The filter exception is narrow and must be justified. Providers must document their filter analysis. If uncertain, treat the system as high-risk.
⚠️ Art. 6(4) Documentation and Registration: A provider who considers that an Annex III system is NOT high-risk must (a) document its assessment before placing the system on the market or putting it into service, and (b) register the system under Article 49(2). Registration is mandatory, not optional, when invoking Art. 6(3), and the documentation must be made available to national competent authorities on request.
Filter Exception Does NOT Apply To:
- AI systems performing profiling of natural persons (always high-risk per Article 6(3))
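The filter test can be summarised in a short sketch. Again, this is illustrative only, with placeholder names: each flag stands in for a legal assessment that must be documented, and it assumes the reading that meeting any one criterion can satisfy the filter while profiling always defeats it.

```python
def filter_exception_applies(narrow_procedural_task: bool,
                             improves_prior_human_activity: bool,
                             preparatory_task: bool,
                             pattern_detection_only: bool,
                             performs_profiling: bool) -> bool:
    """Art. 6(3) filter sketch for an Annex III system.

    Profiling of natural persons always defeats the exception;
    otherwise any one of the four criteria may indicate that the
    system poses no significant risk of harm.
    """
    if performs_profiling:  # always high-risk, filter unavailable
        return False
    return any([narrow_procedural_task,
                improves_prior_human_activity,
                preparatory_task,
                pattern_detection_only])
```

Even where this returns true, the provider must still document the assessment and register the system under Article 49(2) (see the Art. 6(4) warning above).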
Commission Guidelines on Classification (Article 6(5))
Article 6(5) required the Commission to publish guidelines specifying the practical implementation of Article 6, including a comprehensive list of high-risk and not-high-risk use case examples, by 2 February 2026. These guidelines (once published) are a key reference for applying the Art. 6(3) filter exception and should be consulted when making classification determinations. Providers should monitor the Official Journal and the European AI Office for publication of these guidelines.
Limited Risk: Transparency Obligations (Article 50)
AI systems that are not prohibited or high-risk may still have transparency obligations:
| Trigger | Obligation |
|---|---|
| AI interacting directly with humans | Inform users that they are interacting with an AI system (unless obvious from context) |
| Emotion recognition or biometric categorisation | Inform affected persons, GDPR applies |
| Synthetic content generation (deepfakes) | Disclose AI-generated content |
| Text published to inform the public on matters of public interest | Disclose AI generation (exception where human editorial review and responsibility apply) |
Minimal Risk: Voluntary Measures
The majority of AI systems fall into minimal risk with no mandatory requirements:
- Spam filters
- AI-enabled video games
- Inventory management AI
- General recommendation systems
Providers may voluntarily adopt codes of conduct (Article 95) applying high-risk requirements.
Risk Classification Quick Assessment
Use this checklist for preliminary classification:
| Question | Yes | No |
|---|---|---|
| Is this a prohibited practice under Art. 5? | STOP - Prohibited | Continue |
| Is this AI a safety component in an Annex I product requiring 3rd party assessment? | High-Risk (Pathway A) | Continue |
| Is this AI in an Annex III use case? | Check filter exception | Continue |
| Does the filter exception apply AND the AI does not perform profiling of natural persons? | Limited/Minimal Risk | High-Risk (Pathway B) |
| Does Art. 50 transparency apply? | Limited Risk | Minimal Risk |
What You Learned
Key concepts from this chapter
- The AI Act uses a **four-tier risk classification**: prohibited, high-risk, limited risk, minimal risk
- **Two pathways** lead to high-risk: Annex I (product safety) and Annex III (use cases)
- Annex III systems may escape high-risk through the **"filter" exception** if they pose no significant risk
- The filter **does not apply** to profiling: Annex III systems that perform profiling of natural persons remain high-risk
- **Transparency obligations** (Article 50) apply to direct interaction, emotion recognition, and synthetic content