
Risk Classification Framework

Understanding the four-tier risk classification system.

Learning Objectives

By the end of this chapter, you will be able to:

  • Apply the AI Act's risk classification methodology to any AI system
  • Distinguish between the two pathways to high-risk classification (Article 6)
  • Navigate Annex I (product safety) and Annex III (standalone) high-risk categories
  • Understand the "filter" exception that can downgrade Annex III systems
  • Conduct a preliminary risk classification assessment

The AI Act's risk-based approach is its regulatory cornerstone. Classification determines whether your AI system is prohibited, subject to extensive requirements, requires transparency only, or faces no mandatory obligations. This chapter provides a systematic methodology for classification.

The Risk Classification Methodology

Classification follows a sequential assessment through four risk tiers:

Risk Classification Decision Flow (simplified)

  1. Start assessment.
  2. Is it a prohibited practice? Yes → PROHIBITED. No → continue.
  3. Is it high-risk under Article 6 (Annex I safety component or Annex III use case)? Yes → HIGH RISK. No → continue.
  4. Does it require transparency? Yes → LIMITED RISK. No → MINIMAL RISK (no specific requirements).

Step 1: Is it Prohibited? (Article 5)

First, determine if the AI practice falls under the eight prohibited categories. If yes, the AI cannot be legally deployed in the EU—stop here.

Step 2: Is it High-Risk? (Article 6)

If not prohibited, assess whether the AI meets either high-risk pathway:

  • Pathway A: Safety component in Annex I product requiring third-party conformity assessment
  • Pathway B: Standalone AI in an Annex III use case (subject to "filter" exception)

Step 3: Does it Require Transparency? (Article 50)

If not high-risk, check transparency triggers:

  • Direct human interaction (chatbots)
  • Synthetic content generation (deepfakes)
  • Emotion recognition or biometric categorisation (permitted contexts)

Step 4: Minimal Risk

If none of the above apply, the system is minimal risk with no mandatory requirements.
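The four steps form a strict sequence, which makes the methodology easy to encode as a decision function. The sketch below is a minimal illustration in Python; the `Assessment` fields and the `classify` function are hypothetical names standing in for the findings a reviewer records at each step, not an official tool.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Article 5
    HIGH_RISK = "high-risk"        # Article 6
    LIMITED_RISK = "limited risk"  # Article 50
    MINIMAL_RISK = "minimal risk"  # no mandatory requirements

@dataclass
class Assessment:
    """Findings a reviewer records for one AI system (illustrative fields)."""
    prohibited_practice: bool        # Step 1: Article 5
    annex_i_safety_component: bool   # Step 2, Pathway A: Art. 6(1), incl. third-party assessment
    annex_iii_use_case: bool         # Step 2, Pathway B: Art. 6(2)
    filter_exception_applies: bool   # Art. 6(3), documented and registered
    transparency_trigger: bool       # Step 3: Article 50

def classify(a: Assessment) -> RiskTier:
    """Apply the four-tier assessment in strict sequence."""
    if a.prohibited_practice:
        return RiskTier.PROHIBITED                 # Step 1: stop here
    if a.annex_i_safety_component:
        return RiskTier.HIGH_RISK                  # Step 2, Pathway A
    if a.annex_iii_use_case and not a.filter_exception_applies:
        return RiskTier.HIGH_RISK                  # Step 2, Pathway B
    if a.transparency_trigger:
        return RiskTier.LIMITED_RISK               # Step 3
    return RiskTier.MINIMAL_RISK                   # Step 4
```

For example, a CV-screening tool (an Annex III use case with no filter exception) maps to `Assessment(False, False, True, False, False)` and classifies as `HIGH_RISK` via Pathway B.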

Classification Decision Flowchart

  1. Is it a prohibited practice (Art. 5)?
     YES → PROHIBITED (cannot deploy). NO → proceed to Step 2a.
  2a. Is it a safety component in an Annex I product?
     YES → HIGH-RISK. NO → proceed to Step 2b.
  2b. Is it an Annex III use case?
     YES → check the filter exception (Step 2c). NO → proceed to Step 3.
  2c. Does the filter exception apply?
     YES → Limited or Minimal Risk. NO → HIGH-RISK.
  3. Does Art. 50 transparency apply?
     YES → LIMITED RISK. NO → MINIMAL RISK.

High-Risk Pathway A: Annex I Product Safety (Article 6(1))

AI systems are high-risk under Article 6(1) if they meet BOTH conditions:

  1. The AI is a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I
  2. The product requires third-party conformity assessment under that legislation

Annex I Sectors (Product Safety Legislation)

| Sector | EU Legislation | Example AI Applications |
| --- | --- | --- |
| Machinery | Directive 2006/42/EC (Machinery Directive; Regulation (EU) 2023/1230 replaces it from January 2027, with an Annex I amendment expected) | Industrial robots, autonomous vehicles |
| Toys | Directive 2009/48/EC | AI-enabled interactive toys |
| Recreational Craft | Directive 2013/53/EU | Autonomous navigation systems |
| Lifts | Directive 2014/33/EU | AI-controlled elevator systems |
| Equipment in Explosive Atmospheres | Directive 2014/34/EU | Predictive maintenance AI |
| Radio Equipment | Directive 2014/53/EU | AI in wireless devices |
| Pressure Equipment | Directive 2014/68/EU | AI monitoring systems |
| Cableways | Regulation (EU) 2016/424 | Autonomous operation AI |
| Personal Protective Equipment | Regulation (EU) 2016/425 | Smart safety equipment |
| Gas Appliances | Regulation (EU) 2016/426 | AI combustion control |
| Medical Devices | Regulation (EU) 2017/745 | AI diagnostic software |
| In-Vitro Diagnostics | Regulation (EU) 2017/746 | AI analysis systems |
| Civil Aviation | Regulation (EU) 2018/1139 | Autopilot, air traffic AI |
| Motor Vehicles | Regulation (EU) 2019/2144 | ADAS, autonomous driving |
| Agricultural Vehicles | Regulation (EU) 167/2013 | Autonomous tractors |
| Rail Systems | Directive (EU) 2016/797 | Train control AI |

Expert Insight

The Annex I pathway primarily captures AI embedded in physical products already subject to EU safety regulation. The AI Act adds AI-specific requirements on top of existing product safety obligations.
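Because Article 6(1) is a conjunction of conditions, a preliminary Pathway A screen reduces to a single boolean expression. A sketch under the same illustrative conventions as the earlier example; `ProductContext` and its fields are hypothetical names for facts established from the Annex I legislation itself.

```python
from dataclasses import dataclass

@dataclass
class ProductContext:
    """Facts about the product an AI component ships in (illustrative fields)."""
    ai_is_safety_component: bool           # or the AI is itself the product
    covered_by_annex_i_legislation: bool   # e.g. Regulation (EU) 2017/745
    requires_third_party_assessment: bool  # notified-body conformity assessment

def high_risk_pathway_a(p: ProductContext) -> bool:
    """Article 6(1): BOTH conditions must hold for the Annex I pathway."""
    return (p.ai_is_safety_component
            and p.covered_by_annex_i_legislation
            and p.requires_third_party_assessment)
```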

High-Risk Pathway B: Annex III Use Cases (Article 6(2))

AI systems are high-risk under Article 6(2) if they fall within the use cases enumerated in Annex III—regardless of the product in which they are deployed.

Annex III: The Eight High-Risk Domains

Section 1: Biometrics (where permitted)

  • Remote biometric identification systems
  • Biometric categorisation systems
  • Emotion recognition systems (non-prohibited contexts)

Section 2: Critical Infrastructure

  • AI safety components in management/operation of:
    • Road traffic
    • Water, gas, heating, electricity supply
    • Digital infrastructure

Section 3: Education and Vocational Training

  • AI determining access to education institutions
  • AI evaluating learning outcomes
  • AI assessing appropriate education level
  • AI monitoring prohibited behaviour during tests

Section 4: Employment, Workers Management, Self-Employment Access

  • Recruitment and selection (CV screening, interviews)
  • Promotion, termination, task allocation decisions
  • Performance and behaviour monitoring

Section 5: Access to Essential Services

  • Evaluation of eligibility for essential public assistance benefits and services, including healthcare (by public authorities)
  • Credit worthiness evaluation (individuals)
  • Life and health insurance risk assessment (individuals)
  • Emergency services dispatch prioritisation

Section 6: Law Enforcement (where permitted)

  • Victim risk assessment (risk of becoming victim of criminal offences)
  • Polygraphs and similar tools
  • Evidence reliability assessment
  • Offence risk assessment (profiling exception)

Section 7: Migration, Asylum, Border Control

  • Polygraphs and similar tools
  • Immigration/asylum/visa application risk assessment
  • Document authenticity verification
  • Visa/permit/complaint examination assistance

Section 8: Administration of Justice and Democracy

  • Judicial fact and law research assistance
  • AI with potential to influence electoral outcomes
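For a first-pass screen, the eight domains can be held as a simple lookup, as sketched below; the summarised headings and the `screen_use_case` helper are illustrative, and any match must still be verified against the full Annex III text.

```python
# Summarised headings of the eight Annex III domains (not the legal text).
ANNEX_III_DOMAINS: dict[int, str] = {
    1: "Biometrics (where permitted)",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment, workers management, self-employment access",
    5: "Access to essential services",
    6: "Law enforcement (where permitted)",
    7: "Migration, asylum, border control",
    8: "Administration of justice and democracy",
}

def screen_use_case(section: int | None) -> str:
    """Return the next assessment step for a candidate Annex III match."""
    if section is None:
        return "Not in Annex III: proceed to the Article 50 transparency check"
    return f"Annex III section {section} ({ANNEX_III_DOMAINS[section]}): check the Art. 6(3) filter"
```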

The "Filter" Exception (Article 6(3))

A critical nuance: Annex III AI systems are NOT automatically high-risk. Article 6(3) provides a "filter" exception:

An Annex III AI system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, taking into account:

| Filter Criterion | Meaning |
| --- | --- |
| Narrow procedural task | The AI performs a very specific, limited function |
| Improves prior human activity | The AI enhances, rather than replaces, a human decision |
| Preparatory nature | The AI prepares a human decision; it does not make it |
| Pattern detection only | The AI detects decision-making patterns or deviations from prior patterns, without replacing or influencing a previously completed human assessment absent proper human review |
| No significant risk of harm | The overarching threshold under which the above criteria are assessed: the system poses no significant risk of harm to health, safety, or fundamental rights |

⚠️ Compliance Warning: The filter exception is narrow and must be justified. Providers must document their filter analysis. If uncertain, treat the system as high-risk.

⚠️ Art. 6(4) Registration Obligation: Providers who conclude that an Annex III system is not high-risk must (a) document their assessment before placing the system on the market or putting it into service, and (b) register it under Article 49(2). Invoking Art. 6(3) therefore triggers registration, not merely documentation, and the assessment must be made available to national competent authorities on request.

Filter Exception Does NOT Apply To:

  • AI systems performing profiling of natural persons: these are always high-risk under Article 6(3), regardless of the criteria above
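Putting the filter together: at least one criterion must be met, the overarching no-significant-risk threshold must hold, and profiling rules the exception out entirely. Reflecting the compliance warnings above, the sketch below also treats the Art. 6(4) documentation and registration duties as gating conditions. Field names are illustrative; this is a conservative screen, not a legal test.

```python
from dataclasses import dataclass

@dataclass
class FilterAnalysis:
    """A reviewer's Art. 6(3) findings for one Annex III system (illustrative fields)."""
    performs_profiling: bool             # carve-out: the exception can never apply
    narrow_procedural_task: bool
    improves_prior_human_activity: bool
    preparatory_only: bool
    pattern_detection_only: bool
    significant_risk_of_harm: bool       # overarching threshold
    analysis_documented: bool            # Art. 6(4): documented before market/service
    registered_art_49_2: bool            # Art. 6(4): EU database registration

def filter_exception_applies(f: FilterAnalysis) -> bool:
    """Conservative screen: keep the system high-risk unless every gate passes."""
    if f.performs_profiling:
        return False  # profiling systems are always high-risk
    meets_a_criterion = any([
        f.narrow_procedural_task,
        f.improves_prior_human_activity,
        f.preparatory_only,
        f.pattern_detection_only,
    ])
    return (meets_a_criterion
            and not f.significant_risk_of_harm
            and f.analysis_documented      # gating per the warnings above
            and f.registered_art_49_2)
```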

Commission Guidelines on Classification (Article 6(5))

Article 6(5) required the Commission to publish guidelines specifying the practical implementation of Article 6, including a comprehensive list of high-risk and not-high-risk use case examples, by 2 February 2026. These guidelines (once published) are a key reference for applying the Art. 6(3) filter exception and should be consulted when making classification determinations. Providers should monitor the Official Journal and the European AI Office for publication of these guidelines.

Limited Risk: Transparency Obligations (Article 50)

AI systems that are not prohibited or high-risk may still have transparency obligations:

| Trigger | Obligation |
| --- | --- |
| AI interacting directly with humans | Inform users they are interacting with AI (unless obvious) |
| Emotion recognition or biometric categorisation | Inform affected persons; GDPR applies |
| Synthetic content generation (deepfakes) | Disclose that content is AI-generated |
| Text published to inform the public | Disclose AI generation (exceptions for editorial process) |
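The triggers are independent, so one system can owe several disclosures at once. A small lookup sketch; the trigger keys and obligation summaries paraphrase the table above rather than the Article 50 wording.

```python
# Article 50 triggers mapped to their disclosure obligations (paraphrased).
TRANSPARENCY_OBLIGATIONS: dict[str, str] = {
    "direct_human_interaction": "Inform users they are interacting with AI (unless obvious)",
    "emotion_or_biometric_categorisation": "Inform affected persons; GDPR applies",
    "synthetic_content": "Disclose that content is AI-generated (deepfakes)",
    "public_interest_text": "Disclose AI generation (editorial-process exceptions)",
}

def applicable_obligations(triggers: set[str]) -> list[str]:
    """Return every disclosure owed for the triggers a system matches."""
    return [TRANSPARENCY_OBLIGATIONS[t] for t in sorted(triggers)
            if t in TRANSPARENCY_OBLIGATIONS]
```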

Minimal Risk: Voluntary Measures

The majority of AI systems fall into minimal risk with no mandatory requirements:

  • Spam filters
  • AI-enabled video games
  • Inventory management AI
  • General recommendation systems

Providers may voluntarily adopt codes of conduct (Article 95) applying high-risk requirements.

Risk Classification Quick Assessment

Use this checklist for preliminary classification:

| Question | Yes | No |
| --- | --- | --- |
| Is this a prohibited practice under Art. 5? | STOP: Prohibited | Continue |
| Is this AI a safety component in an Annex I product requiring third-party assessment? | High-Risk (Pathway A) | Continue |
| Is this AI in an Annex III use case? | Check filter exception | Continue |
| Does the filter exception apply (and the AI does not perform profiling of natural persons)? | Limited/Minimal Risk | High-Risk (Pathway B) |
| Does Art. 50 transparency apply? | Limited Risk | Minimal Risk |

What You Learned

Key concepts from this chapter

The AI Act uses a **four-tier risk classification**: prohibited, high-risk, limited risk, minimal risk

**Two pathways** lead to high-risk: Annex I (product safety) and Annex III (use cases)

Annex III systems may escape high-risk through the **"filter" exception** if they pose no significant risk

The filter **does not apply** to AI performing profiling of natural persons; such systems remain high-risk

**Transparency obligations** (Article 50) apply to direct interaction, emotion recognition, and synthetic content
