Practical Exercise: Risk Classification
Apply your knowledge to real-world AI system classification scenarios.
Learning Objectives
By the end of this exercise, you will be able to:
- Apply the AI Act's risk classification framework to real-world scenarios
- Identify prohibited, high-risk, limited risk, and minimal risk AI systems
- Reference correct articles and annexes for each classification
- Determine key compliance obligations for high-risk systems
- Document and justify classification decisions
This practical exercise helps you apply your knowledge of the AI Act's risk classification framework to realistic scenarios. Classification is the foundational compliance activity—it determines all subsequent obligations.
Classification Methodology
Step-by-Step Approach
For each AI system, follow this systematic analysis:
| Step | Question | Action |
|---|---|---|
| 1 | Is this an AI system under Article 3(1)? | Confirm it is machine-based, operates with some autonomy, and infers outputs (predictions, content, recommendations, decisions) that can influence environments |
| 2 | Is it prohibited under Article 5? | Check all eight prohibition categories |
| 3 | Is it listed in Annex III? | Check all eight high-risk domains |
| 4 | Is it a safety component in Annex I products? | Check sectoral legislation |
| 5 | Does an Article 6(3) exception apply? | Check the four derogations (narrow procedural task, improving a completed human activity, detecting patterns or deviations, preparatory task); profiling of natural persons bars all of them |
| 6 | Does it have transparency obligations? | Check Article 50 categories |
| 7 | Document conclusion | Record reasoning and evidence |
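Because the steps are sequential with early exits, the methodology can be expressed as a decision procedure. The sketch below is illustrative only: the dictionary keys stand in for the documented legal findings each step requires, and none of the names come from the Act itself.

```python
from enum import Enum

class RiskClass(Enum):
    OUT_OF_SCOPE = "Not an AI system under Article 3(1)"
    PROHIBITED = "Prohibited (Article 5)"
    HIGH_RISK = "High-risk (Annex III or Annex I safety component)"
    LIMITED_RISK = "Limited risk (Article 50 transparency only)"
    MINIMAL_RISK = "Minimal risk"

def classify(findings: dict) -> RiskClass:
    """Walk the seven steps with early exits.

    Each key stands in for a documented legal finding; the final
    step (recording reasoning and evidence) happens outside this
    function, whatever the outcome.
    """
    if not findings["is_ai_system"]:              # Step 1: Article 3(1)
        return RiskClass.OUT_OF_SCOPE
    if findings["prohibited_article_5"]:          # Step 2: Article 5
        return RiskClass.PROHIBITED
    if findings["annex_iii_match"] and not findings["article_6_3_exception"]:
        return RiskClass.HIGH_RISK                # Steps 3 and 5
    if findings["annex_i_safety_component"]:      # Step 4
        return RiskClass.HIGH_RISK
    if findings["article_50_transparency"]:       # Step 6: Article 50
        return RiskClass.LIMITED_RISK
    return RiskClass.MINIMAL_RISK

# Example: the HR screening tool analysed in Scenario 1 below
hr_tool = {
    "is_ai_system": True,
    "prohibited_article_5": False,
    "annex_iii_match": True,            # Annex III, Section 4(a)
    "article_6_3_exception": False,     # profiling bars the derogation
    "annex_i_safety_component": False,
    "article_50_transparency": False,
}
print(classify(hr_tool).name)  # HIGH_RISK
```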
Scenario 1: HR Screening Tool
The Situation
A multinational company deploys an AI system to screen incoming job applications. The system:
- Analyses CVs and cover letters
- Assigns scores to candidates based on predicted job fit
- Filters out candidates below a threshold score
- Ranks remaining candidates for human review
Classification Analysis
Step 1 - AI System? Yes. Machine-based system processing inputs (CVs), generating outputs (scores/rankings) influencing hiring decisions.
Step 2 - Prohibited? No. Not subliminal manipulation, exploitation of vulnerabilities, social scoring, or other Article 5 categories.
Step 3 - Annex III High-Risk?
| Annex III Check | Result |
|---|---|
| Section 4(a): Recruitment/selection | Yes - Explicit match |
| "AI intended to be used for recruitment or selection" | Directly applies |
| "Filtering applications, evaluating candidates" | Matches exactly |
Step 4 - Safety Component? No. Not a product safety component.
Step 5 - Article 6(3) Exception?
| Exception Criterion | Analysis | Result |
|---|---|---|
| Narrow procedural task only? | No - substantive candidate evaluation | Not applicable |
| Improving the result of a previously completed human activity? | No - the system performs the initial screening itself | Not applicable |
| Detecting decision-making patterns or deviations? | No - it makes substantive recommendations | Not applicable |
| Preparatory task to an Annex III assessment? | No - filtering out candidates is itself the assessment | Not applicable |
| Profiling of natural persons? | Yes - it evaluates personal aspects of candidates, which bars the derogation in any event (Article 6(3), second subparagraph) | Exception excluded |
Classification: HIGH-RISK (Annex III, Section 4(a))
Compliance Obligations
| Requirement | Article | Key Actions |
|---|---|---|
| Risk management | Article 9 | Assess bias, discrimination, accuracy risks |
| Data governance | Article 10 | Training data quality, representativeness |
| Technical documentation | Article 11 | Complete Annex IV file |
| Transparency | Article 13 | Instructions for HR teams |
| Human oversight | Article 14 | HR professional final decisions |
| Accuracy | Article 15 | Validate performance across demographics |
| Conformity assessment | Article 43 | Internal control (Annex VI) |
| Registration | Article 49 | EU database registration |
⚠️ Red Flag: If the system automatically rejects candidates without any human review, it is unlikely to satisfy Article 14 human oversight and may also engage GDPR Article 22 on solely automated decisions. Ensure meaningful human review of every rejection decision.
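One way to operationalise that red flag, sketched below with hypothetical scores and a hypothetical threshold: below-threshold candidates are routed to a human reviewer rather than auto-rejected, so the AI narrows and prioritises while a human makes every rejection decision.

```python
def route_candidate(score: float, threshold: float = 0.6) -> str:
    """Never auto-reject: below-threshold candidates go to human review.

    The AI output narrows and prioritises; a human makes every
    rejection decision (Article 14 human oversight).
    """
    if score >= threshold:
        return "advance_to_ranking"      # still reviewed by HR later
    return "human_review_before_reject"  # no automated rejection

# Hypothetical scores from the screening model
for candidate, score in [("A", 0.82), ("B", 0.41), ("C", 0.58)]:
    print(candidate, route_candidate(score))
# A advance_to_ranking
# B human_review_before_reject
# C human_review_before_reject
```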
Scenario 2: Customer Service Chatbot
The Situation
An e-commerce company deploys an AI chatbot that:
- Answers customer questions about products
- Processes returns and complaints
- Escalates complex issues to human agents
- Uses a conversational interface
Classification Analysis
Step 1 - AI System? Yes. Uses natural language processing to generate responses influencing customer interactions.
Step 2 - Prohibited? No. Not in any Article 5 category.
Step 3 - Annex III High-Risk?
| Annex III Category | Applies? |
|---|---|
| Section 1: Biometrics | No |
| Section 2: Critical infrastructure | No |
| Section 3: Education | No |
| Section 4: Employment | No |
| Section 5: Essential services | Not credit, not insurance, not benefits |
| Section 6: Law enforcement | No |
| Section 7: Migration/asylum | No |
| Section 8: Justice | No |
Step 4 - Safety Component? No.
Step 5 - Article 6(3) Exception? Not relevant, as the system is not in Annex III.
Step 6 - Transparency Obligations (Article 50)?
| Article 50 Check | Result |
|---|---|
| 50(1): AI system designed for direct interaction? | Yes |
| Transparency obligation triggered? | Yes |
Classification: LIMITED RISK (Article 50)
Compliance Obligations
| Requirement | Implementation |
|---|---|
| Transparency (Article 50(1)) | Clearly inform users they are interacting with an AI system |
| Disclosure timing | Before or at the beginning of interaction |
| Exception | Not required where it is obvious to a reasonably well-informed, observant and circumspect person |
Implementation Examples:
- "Hi! I'm an AI assistant. How can I help you today?"
- Banner stating "You are chatting with our AI customer service bot"
- Clear indication in interface that this is automated assistance
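A minimal sketch of the first example above (class and message text are illustrative, not prescribed by the Act): the disclosure is emitted as the session's first message, before any user input is handled, so the "before or at the beginning of interaction" timing is met by construction.

```python
AI_DISCLOSURE = "Hi! I'm an AI assistant. How can I help you today?"

class ChatSession:
    def __init__(self):
        self.transcript: list[str] = []

    def start(self) -> str:
        # Article 50(1): disclose the AI nature before handling any input
        self.transcript.append(f"bot: {AI_DISCLOSURE}")
        return AI_DISCLOSURE

    def handle(self, user_message: str) -> str:
        if not self.transcript:
            raise RuntimeError("Disclosure must precede interaction")
        self.transcript.append(f"user: {user_message}")
        reply = "Let me look into that for you."  # placeholder for real NLP
        self.transcript.append(f"bot: {reply}")
        return reply

session = ChatSession()
session.start()
session.handle("Where is my order?")
```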
💡 Best Practice: Even though not high-risk, document the chatbot's capabilities, limitations, and escalation procedures. This demonstrates responsible AI practices and aids future compliance assessments.
Scenario 3: Emotion Recognition at Work
The Situation
A manufacturing company proposes installing cameras with AI that:
- Monitors workers' facial expressions throughout their shift
- Infers emotional states (stress, fatigue, happiness)
- Alerts supervisors when workers appear stressed
- Stated purpose: "improving worker wellbeing"
Classification Analysis
Step 1 - AI System? Yes. A machine-based system inferring emotional states from facial images.
Step 2 - Prohibited? This is the decisive step.
| Article 5(1)(f) Check | Analysis |
|---|---|
| "AI systems that infer emotions" | Yes - inferring stress, fatigue, happiness |
| "Of a natural person" | Yes - workers |
| "In the workplace" | Yes - manufacturing facility |
| "Educational institution" | N/A |
Exception Analysis:
| Exception | Applies? |
|---|---|
| Medical reasons | No - "wellbeing" is not medical treatment |
| Safety reasons | Potentially - if fatigue monitoring prevents accidents |
Classification: PROHIBITED (Article 5(1)(f)) — unless the company can demonstrate a genuine safety purpose.
Critical Analysis
If the company argues safety exception:
| Criterion | Assessment |
|---|---|
| Primary purpose safety? | Must be accident prevention, not productivity |
| Proportionality | Least intrusive means to achieve safety? |
| Worker consent | Informed consent present? |
| Alternative measures | Other safety measures considered? |
Compliance Note
"Improving wellbeing" is explicitly **not** a valid exception. The company would need to demonstrate:
- Documented safety risks from worker fatigue
- This system is necessary for safety (not just helpful)
- Less intrusive alternatives are inadequate
- The system is limited to genuine safety monitoring
If Safety Exception Does Not Apply:
- Deployment is prohibited
- No compliance pathway exists
- Penalties up to €35 million or 7% of total worldwide annual turnover, whichever is higher (Article 99(3))
Scenario 4: Credit Scoring AI
The Situation
A retail bank deploys an AI system that:
- Evaluates loan applications
- Analyses applicant data (income, employment, credit history)
- Generates creditworthiness scores
- Recommends approval, denial, or conditions
Classification Analysis
Steps 1-2: an AI system, and not prohibited. The decisive question is Annex III.
Step 3 - Annex III High-Risk?
| Annex III Section 5(b) Check | Result |
|---|---|
| "AI intended to be used to evaluate creditworthiness" | Explicit match |
| "Establish the credit score of natural persons" | Explicit match |
| "Exception: detecting financial fraud" | Does not apply - this is credit scoring |
Classification: HIGH-RISK (Annex III, Section 5(b))
Enhanced Compliance Considerations
| Requirement | Specific Considerations for Credit Scoring |
|---|---|
| Risk management | Discrimination risks across protected characteristics |
| Data governance | Training data representativeness across demographics |
| Transparency | Clear explanation of factors influencing decisions |
| Human oversight | Human review for marginal cases, appeals process |
| Accuracy | Validate predictions across different applicant groups |
| GDPR Article 22 | Right not to be subject to solely automated decisions |
| Consumer credit law | Sector-specific obligations may apply |
💡 Expert Note: Credit scoring AI faces heightened scrutiny for discrimination. Ensure robust fairness testing across age, gender, race, and other protected characteristics. Document all bias mitigation measures.
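To make that fairness testing concrete, the sketch below computes approval rates per group and flags any group whose rate falls below 80% of the highest group's rate. The four-fifths threshold is borrowed from US employment practice purely as an example screen; the AI Act does not prescribe a metric, and the data and group labels here are hypothetical.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs from model output."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], ratio: float = 0.8) -> list[str]:
    """Flag groups approved at under `ratio` times the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < ratio * best]

# Hypothetical validation batch
sample = [("group_a", True)] * 70 + [("group_a", False)] * 30 \
       + [("group_b", True)] * 48 + [("group_b", False)] * 52
rates = approval_rates(sample)
print(rates)                    # {'group_a': 0.7, 'group_b': 0.48}
print(flag_disparities(rates))  # ['group_b'] - investigate and document
```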
Scenario 5: Medical Diagnostic AI
The Situation
A hospital deploys an AI system that:
- Analyses medical imaging (X-rays, MRIs)
- Identifies potential abnormalities
- Suggests diagnoses to radiologists
- Flags urgent cases for priority review
Classification Analysis
Steps 1-3: an AI system, not prohibited, and not captured by Annex III. The decisive question is sectoral product legislation.
Step 4 - Safety Component in Annex I Product?
| Analysis | Result |
|---|---|
| Medical Device Regulation (EU) 2017/745 | Applies |
| Is this a medical device? | Yes - software for diagnosis |
| Risk class under MDR | Class IIa or higher, so third-party conformity assessment is required |
| Article 6(1) via Annex I, Section A | High-risk: an AI safety component of (or itself) a product covered by listed Union legislation |
Classification: HIGH-RISK (Annex I, Section A - Medical Device)
Dual Compliance Framework
| Regulation | Key Requirements |
|---|---|
| AI Act | Articles 8-15 requirements, conformity via MDR process |
| Medical Device Regulation | CE marking, clinical evaluation, post-market surveillance |
| Integration | AI Act assessment integrated into MDR conformity |
Compliance Note
Medical AI requires coordination between AI Act and MDR compliance. The notified body for medical device assessment will evaluate AI Act requirements as part of the MDR process.
Scenario 6: Predictive Policing System
The Situation
A city police department considers deploying an AI system that:
- Analyses crime data and social factors
- Predicts areas likely to experience crime
- Recommends resource allocation
- Generates "risk scores" for neighbourhoods
Classification Analysis
Following the methodology, check the prohibition first.
Step 2 - Prohibited?
| Article 5(1)(d) Check | Analysis |
|---|---|
| "Risk assessments of natural persons" | Applies if the system predicts individual criminal behaviour |
| "To assess or predict the risk of committing a criminal offence" | May be prohibited if outputs are individual-level |
| "Based solely on profiling or on assessing personality traits and characteristics" | Key criterion; systems that merely support a human assessment already based on objective, verifiable facts are carved out |
Step 3 - Annex III High-Risk (if not prohibited)?
| Annex III Section 6(d) Check | Result |
|---|---|
| "AI intended to be used by law enforcement" | Yes |
| "Assessing the risk of a natural person offending or re-offending" | Only if the system produces individual-level risk scores |
| Area-level neighbourhood scores | Do not target a natural person; not a direct Section 6(d) match |
Predictive Policing Classification Decision Tree
| System Behaviour | Outcome |
|---|---|
| Predicts area-level crime patterns only | High-risk at most (Annex III, Section 6) |
| Predicts individual criminal likelihood based solely on profiling or personality traits | Prohibited (Article 5(1)(d)) |
| Combines area-level prediction with individual profiling | Assess each function separately; the individual-profiling component is likely prohibited |
Critical Distinction
Area-based predictive policing may be high-risk but permissible. Individual-based profiling to predict criminal behaviour is likely prohibited under Article 5(1)(d).
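That distinction can be captured in a small decision function. The boolean flags are hypothetical stand-ins for documented legal findings, and the Article 5(1)(d) carve-out for systems that merely support a human assessment based on objective, verifiable facts is omitted for brevity.

```python
def classify_predictive_policing(
    predicts_individuals: bool,
    based_solely_on_profiling: bool,
) -> str:
    """Flags are hypothetical stand-ins for documented legal findings."""
    if predicts_individuals and based_solely_on_profiling:
        # Individual risk prediction from profiling/personality traits
        return "PROHIBITED (Article 5(1)(d))"
    if predicts_individuals:
        # Individual assessments by law enforcement
        return "HIGH-RISK (Annex III, Section 6(d))"
    # Area-level pattern prediction only
    return "HIGH-RISK at most (Annex III, Section 6) - review full context"

print(classify_predictive_policing(False, False))  # area-level only
print(classify_predictive_policing(True, True))    # prohibited
```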
Scenario 7: Biometric Access Control
The Situation
A secure facility deploys fingerprint scanners with AI that:
- Captures fingerprint images
- Compares against enrolled employee database
- Grants or denies access
- Logs all access attempts
Classification Analysis
Biometric Analysis:
| Criterion | Assessment |
|---|---|
| Uses biometric data? | Yes - fingerprints |
| One-to-one verification? | Yes - the captured print is compared with the enrolled template of the claimed identity |
| One-to-many identification? | No - the system confirms a claimed identity rather than searching the database for an unknown person |
| Real-time? | Yes - instant verification |
| Publicly accessible space? | No - secure facility |
Classification Framework:
| System Type | Classification |
|---|---|
| Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes | Prohibited (Article 5(1)(h)) unless a listed exception applies |
| Post (retrospective) remote biometric identification | High-Risk (Annex III, Section 1(a)) |
| Biometric verification (one-to-one) | Minimal Risk (no specific Article 50 obligation; GDPR biometric data requirements apply) |
Classification: MINIMAL RISK
The fingerprint system performs one-to-one verification, not identification; Annex III, Section 1(a) expressly excludes biometric verification whose sole purpose is to confirm that a specific person is who they claim to be, so it is neither prohibited nor high-risk. It is also not an emotion recognition or biometric categorisation system, so Article 50(3) does not apply.
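The legal line tracks a technical one: whether the captured template is compared against a single claimed identity or searched against the whole database. A minimal sketch, with placeholder matching logic and hypothetical templates (real systems score biometric features rather than comparing bytes):

```python
def match(template_a: bytes, template_b: bytes) -> bool:
    """Placeholder similarity check; real matchers score feature vectors."""
    return template_a == template_b

# Hypothetical enrolled templates
enrolled = {"emp_001": b"tmpl-001", "emp_002": b"tmpl-002"}

def verify(claimed_id: str, captured: bytes) -> bool:
    """One-to-one: compare against the claimed identity only.
    This is the Scenario 7 system: minimal risk under the AI Act."""
    return match(enrolled.get(claimed_id, b""), captured)

def identify(captured: bytes) -> str | None:
    """One-to-many: search the whole database for a match.
    Deployed remotely in public spaces, this is the pattern that
    triggers the Article 5(1)(h) / Annex III Section 1(a) analysis."""
    for emp_id, template in enrolled.items():
        if match(template, captured):
            return emp_id
    return None

print(verify("emp_001", b"tmpl-001"))  # True - claimed identity confirmed
```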
Compliance Obligations
| Requirement | Implementation |
|---|---|
| GDPR Article 9 (special category data) | Biometric processing requires an Article 9(2) condition, such as explicit consent or another lawful basis |
| Documentation | Best practice even if not mandatory |
Classification Summary Table
| Scenario | Classification | Primary Reference |
|---|---|---|
| HR Screening Tool | High-Risk | Annex III, Section 4(a) |
| Customer Service Chatbot | Limited Risk | Article 50(1) |
| Emotion Recognition at Work | Prohibited | Article 5(1)(f) |
| Credit Scoring AI | High-Risk | Annex III, Section 5(b) |
| Medical Diagnostic AI | High-Risk | Annex I, Section A (MDR) |
| Predictive Policing | High-Risk or Prohibited | Annex III Section 6 / Article 5(1)(d) |
| Biometric Access Control | Minimal Risk | N/A (Article 50(3) does not apply) |
Classification Best Practices
- Document thoroughly — Record your analysis, reasoning, and conclusion (a lightweight record format is sketched after this list)
- Consider context — Same technology may have different classifications based on use
- Check prohibitions first — Article 5 before Annex III
- Review exceptions — An Article 6(3) derogation may take an Annex III system out of high-risk; if you invoke it, document the assessment before placing the system on the market and register under Article 49(2), as Article 6(4) requires
- Seek expert input — Complex cases may benefit from legal review
- Monitor changes — Classification may change with system modifications
- When uncertain, assume higher risk — Conservative approach protects compliance
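For the documentation point above, a lightweight record like the following sketch captures what an auditor will typically ask for: the system, the conclusion, the legal reference, the reasoning, and when it was assessed. All field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    system_name: str
    intended_purpose: str
    classification: str          # e.g. "High-Risk"
    legal_reference: str         # e.g. "Annex III, Section 4(a)"
    reasoning: str               # step-by-step analysis summary
    exceptions_considered: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)
    review_trigger: str = "Re-assess on any substantial modification"

record = ClassificationRecord(
    system_name="CV screening tool",
    intended_purpose="Filter and rank job applications",
    classification="High-Risk",
    legal_reference="Annex III, Section 4(a)",
    reasoning="Recruitment/selection match; Article 6(3) derogation "
              "unavailable because the system profiles candidates.",
    exceptions_considered=["Article 6(3)(a)-(d): none applicable"],
)
print(record.classification, "-", record.legal_reference)
```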
What You Learned
Key concepts from this chapter
- Classification is the foundational compliance activity—get it right first
- Follow a systematic approach: Prohibitions → Annex III → Sectoral legislation → Exceptions → Limited risk
- Context and intended purpose drive classification, not just technology
- The same AI technology can have different classifications based on deployment context
- Prohibited practices have no compliance pathway—they simply cannot be deployed