Practical Exercise: Risk Classification

Apply your knowledge to real-world AI system classification scenarios.

Learning Objectives

By the end of this exercise, you will be able to:

  • Apply the AI Act's risk classification framework to real-world scenarios
  • Identify prohibited, high-risk, limited risk, and minimal risk AI systems
  • Reference correct articles and annexes for each classification
  • Determine key compliance obligations for high-risk systems
  • Document and justify classification decisions

This practical exercise helps you apply your knowledge of the AI Act's risk classification framework to realistic scenarios. Classification is the foundational compliance activity—it determines all subsequent obligations.

Classification Methodology

Step-by-Step Approach

For each AI system, follow this systematic analysis:

| Step | Question | Action |
|------|----------|--------|
| 1 | Is this an AI system under Article 3(1)? | Confirm machine-based, autonomy, outputs influencing decisions |
| 2 | Is it prohibited under Article 5? | Check all eight prohibition categories |
| 3 | Is it listed in Annex III? | Check all eight high-risk domains |
| 4 | Is it a safety component in an Annex I product? | Check sectoral legislation |
| 5 | Does an Article 6(3) exception apply? | Check the profiling, narrow-scope, and human-oversight exceptions |
| 6 | Does it have transparency obligations? | Check Article 50 categories |
| 7 | Document the conclusion | Record reasoning and evidence |
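
For teams triaging many systems, the seven steps can be sketched as a decision function. The code below is a minimal illustration, not a compliance tool: the `system` object and every predicate on it (e.g. `matches_article_5_prohibition`) are hypothetical placeholders for legal judgements that a human reviewer must make and document.

```python
from enum import Enum
from typing import Optional

class RiskClass(Enum):
    PROHIBITED = "Prohibited (Article 5)"
    HIGH_RISK = "High-Risk (Annex III / Annex I)"
    LIMITED_RISK = "Limited Risk (Article 50)"
    MINIMAL_RISK = "Minimal Risk"

def classify(system) -> Optional[RiskClass]:
    # Step 1: confirm this is an AI system under Article 3(1)
    if not system.is_ai_system_art_3_1():
        return None  # outside the AI Act's scope

    # Step 2: check the Article 5 prohibitions first
    if system.matches_article_5_prohibition():
        return RiskClass.PROHIBITED

    # Steps 3-5: high-risk via Annex III (unless an Article 6(3)
    # exception applies) or via Annex I sectoral product legislation
    if system.listed_in_annex_iii() and not system.article_6_3_exception():
        return RiskClass.HIGH_RISK
    if system.safety_component_in_annex_i_product():
        return RiskClass.HIGH_RISK

    # Step 6: Article 50 transparency categories (note that these
    # duties can also apply on top of a high-risk classification)
    if system.triggers_article_50():
        return RiskClass.LIMITED_RISK

    # Step 7: minimal risk; still record reasoning and evidence
    return RiskClass.MINIMAL_RISK
```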

Scenario 1: HR Screening Tool

The Situation

A multinational company deploys an AI system to screen incoming job applications. The system:

  • Analyses CVs and cover letters
  • Assigns scores to candidates based on predicted job fit
  • Filters out candidates below a threshold score
  • Ranks remaining candidates for human review

Classification Analysis

Step 1 - AI System? Yes. Machine-based system processing inputs (CVs), generating outputs (scores/rankings) influencing hiring decisions.

Step 2 - Prohibited? No. Not subliminal manipulation, exploitation of vulnerabilities, social scoring, or other Article 5 categories.

Step 3 - Annex III High-Risk?

| Annex III Check | Result |
|---|---|
| Section 4(a): Recruitment/selection | Yes - explicit match |
| "AI intended to be used for recruitment or selection" | Directly applies |
| "Filtering applications, evaluating candidates" | Matches exactly |

Step 4 - Safety Component? No. Not a product safety component.

Step 5 - Article 6(3) Exception?

| Exception Criterion | Analysis | Result |
|---|---|---|
| Intended for a narrow procedural task? | No - substantive candidate evaluation | Not applicable |
| Intended to improve a previous human assessment? | No - provides initial screening | Not applicable |
| Intended to detect decision patterns? | No - makes substantive recommendations | Not applicable |
| Human review always before action? | Partially - but the AI filters out candidates | Not sufficient |

Classification: HIGH-RISK (Annex III, Section 4(a))

Compliance Obligations

| Requirement | Article | Key Actions |
|---|---|---|
| Risk management | Article 9 | Assess bias, discrimination, and accuracy risks |
| Data governance | Article 10 | Training data quality, representativeness |
| Technical documentation | Article 11 | Complete Annex IV file |
| Transparency | Article 13 | Instructions for HR teams |
| Human oversight | Article 14 | HR professionals make final decisions |
| Accuracy | Article 15 | Validate performance across demographics |
| Conformity assessment | Article 43 | Internal control (Annex VI) |
| Registration | Article 49 | EU database registration |

⚠️ Red Flag: If the system automatically rejects candidates without any human review, it is unlikely to satisfy the Article 14 human-oversight requirement and may also engage GDPR Article 22 on solely automated decision-making. Ensure meaningful human oversight for rejection decisions.


Scenario 2: Customer Service Chatbot

The Situation

An e-commerce company deploys an AI chatbot that:

  • Answers customer questions about products
  • Processes returns and complaints
  • Escalates complex issues to human agents
  • Uses a conversational interface

Classification Analysis

Step 1 - AI System? Yes. Uses natural language processing to generate responses influencing customer interactions.

Step 2 - Prohibited? No. Not in any Article 5 category.

Step 3 - Annex III High-Risk?

| Annex III Category | Applies? |
|---|---|
| Section 1: Biometrics | No |
| Section 2: Critical infrastructure | No |
| Section 3: Education | No |
| Section 4: Employment | No |
| Section 5: Essential services | No - not credit, insurance, or benefits |
| Section 6: Law enforcement | No |
| Section 7: Migration/asylum | No |
| Section 8: Justice | No |

Step 4 - Safety Component? No.

Step 6 - Limited Risk (Article 50)?

| Article 50 Check | Result |
|---|---|
| 50(1): AI system designed for direct interaction? | Yes |
| Transparency obligation triggered? | Yes |

Classification: LIMITED RISK (Article 50)

Compliance Obligations

| Requirement | Implementation |
|---|---|
| Transparency (Article 50(1)) | Clearly inform users they are interacting with an AI system |
| Disclosure timing | Before or at the beginning of the interaction |
| Exception | Not required if obvious from the context |

Implementation Examples:

  • "Hi! I'm an AI assistant. How can I help you today?"
  • Banner stating "You are chatting with our AI customer service bot"
  • Clear indication in interface that this is automated assistance
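
The disclosure logic itself is trivial; the point is to apply it at session start. A minimal sketch, assuming a hypothetical `context_makes_ai_obvious` flag and illustrative wording (neither is prescribed by the Act):

```python
from typing import Optional

def opening_message(context_makes_ai_obvious: bool = False) -> Optional[str]:
    """Return the disclosure shown when a chat session starts."""
    # Article 50(1): inform users they are interacting with an AI
    # system, unless that is obvious from the context
    if context_makes_ai_obvious:
        return None  # exception: no disclosure required
    return "Hi! I'm an AI assistant. How can I help you today?"
```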

💡 Best Practice: Even though not high-risk, document the chatbot's capabilities, limitations, and escalation procedures. This demonstrates responsible AI practices and aids future compliance assessments.


Scenario 3: Emotion Recognition at Work

The Situation

A manufacturing company proposes installing cameras with AI that:

  • Monitors workers' facial expressions throughout their shift
  • Infers emotional states (stress, fatigue, happiness)
  • Alerts supervisors when workers appear stressed
  • Stated purpose: "improving worker wellbeing"

Classification Analysis

Step 2 - Prohibited?

| Article 5(1)(f) Check | Analysis |
|---|---|
| "AI systems that infer emotions" | Yes - inferring stress, fatigue, happiness |
| "Of a natural person" | Yes - workers |
| "In the workplace" | Yes - manufacturing facility |
| "Educational institution" | N/A |

Exception Analysis:

| Exception | Applies? |
|---|---|
| Medical reasons | No - "wellbeing" is not medical treatment |
| Safety reasons | Potentially - if fatigue monitoring prevents accidents |

Classification: PROHIBITED (Article 5(1)(f)) — unless the company can demonstrate a genuine safety purpose.

Critical Analysis

If the company argues safety exception:

| Criterion | Assessment |
|---|---|
| Primary purpose safety? | Must be accident prevention, not productivity |
| Proportionality | Least intrusive means to achieve safety? |
| Worker consent | Informed consent present? |
| Alternative measures | Other safety measures considered? |

Compliance Note

"Improving wellbeing" is explicitly **not** a valid exception. The company would need to demonstrate:

  1. Documented safety risks from worker fatigue
  2. This system is necessary for safety (not just helpful)
  3. Less intrusive alternatives are inadequate
  4. The system is limited to genuine safety monitoring

If Safety Exception Does Not Apply:

  • Deployment is prohibited
  • No compliance pathway exists
  • Penalties of up to €35 million or 7% of global annual turnover, whichever is higher

Scenario 4: Credit Scoring AI

The Situation

A retail bank deploys an AI system that:

  • Evaluates loan applications
  • Analyses applicant data (income, employment, credit history)
  • Generates creditworthiness scores
  • Recommends approval, denial, or conditions

Classification Analysis

Step 3 - Annex III High-Risk?

| Annex III Section 5(b) Check | Result |
|---|---|
| "AI intended to be used to evaluate creditworthiness" | Explicit match |
| "Establish the credit score of natural persons" | Explicit match |
| Exception: detecting financial fraud | Does not apply - this is credit scoring |

Classification: HIGH-RISK (Annex III, Section 5(b))

Enhanced Compliance Considerations

| Requirement | Specific Considerations for Credit Scoring |
|---|---|
| Risk management | Discrimination risks across protected characteristics |
| Data governance | Training-data representativeness across demographics |
| Transparency | Clear explanation of the factors influencing decisions |
| Human oversight | Human review for marginal cases; appeals process |
| Accuracy | Validate predictions across different applicant groups |
| GDPR Article 22 | Right not to be subject to solely automated decisions |
| Consumer credit law | Sector-specific obligations may apply |

💡 Expert Note: Credit scoring AI faces heightened scrutiny for discrimination. Ensure robust fairness testing across age, gender, race, and other protected characteristics. Document all bias mitigation measures.
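
Fairness testing can start with simple aggregate metrics. The sketch below computes a demographic-parity gap (the spread in approval rates across groups); it is only one of many possible fairness metrics, and which metric is appropriate is a context-specific legal and statistical question:

```python
def approval_rate_gap(decisions: list[tuple[str, bool]]) -> float:
    """Spread between the highest and lowest group-level approval rates.

    `decisions` pairs a group label with an approve (True) / deny (False)
    outcome; the labels and data here are purely illustrative.
    """
    by_group: dict[str, list[bool]] = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(approved)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "A" approved 1 of 2, group "B" approved 2 of 2 -> gap of 0.5
gap = approval_rate_gap([("A", True), ("A", False), ("B", True), ("B", True)])
```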


Scenario 5: Medical Diagnostic AI

The Situation

A hospital deploys an AI system that:

  • Analyses medical imaging (X-rays, MRIs)
  • Identifies potential abnormalities
  • Suggests diagnoses to radiologists
  • Flags urgent cases for priority review

Classification Analysis

Step 4 - Safety Component in Annex I Product?

| Analysis | Result |
|---|---|
| Medical Device Regulation (EU) 2017/745 | Applies |
| Is this a medical device? | Yes - software intended for diagnosis |
| Risk class under MDR | Class IIa or higher, so third-party (notified body) conformity assessment is required |
| Annex I, Section A | High-risk as a medical-device AI system (Article 6(1)) |

Classification: HIGH-RISK (Annex I, Section A - Medical Device)

Dual Compliance Framework

| Regulation | Key Requirements |
|---|---|
| AI Act | Articles 8-15 requirements; conformity via the MDR process |
| Medical Device Regulation | CE marking, clinical evaluation, post-market surveillance |
| Integration | AI Act assessment integrated into MDR conformity assessment |

Compliance Note

Medical AI requires coordination between AI Act and MDR compliance. The notified body for medical device assessment will evaluate AI Act requirements as part of the MDR process.


Scenario 6: Predictive Policing System

The Situation

A city police department considers deploying an AI system that:

  • Analyses crime data and social factors
  • Predicts areas likely to experience crime
  • Recommends resource allocation
  • Generates "risk scores" for neighbourhoods

Classification Analysis

Step 3 - Annex III High-Risk?

| Annex III Section 6(d) Check | Result |
|---|---|
| "AI intended to be used by law enforcement" | Yes |
| "Making individual risk assessments" | Potentially |
| "Assessing risk of criminal offence" | If individual-level |

Additional Step 2 - Prohibited Check:

| Article 5(1)(d) Check | Analysis |
|---|---|
| "Risk assessments of natural persons" | If predicting individual criminal behaviour |
| "To assess or predict risk of committing offence" | May be prohibited if individual-level |
| "Based solely on profiling or on the assessment of personality traits and characteristics" | Key criterion |

Predictive Policing Classification Decision Tree

| System Behaviour | Classification |
|---|---|
| Predicts area-level crime patterns only | High-Risk (Annex III, Section 6) |
| Predicts individual criminal likelihood | May be PROHIBITED (Article 5(1)(d)) |
| Combines area-level prediction with individual profiling | PROHIBITED (Article 5(1)(d)) |
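
The three branches reduce to two questions. The helper below is a simplification (the boolean inputs stand in for what is, in practice, a detailed legal assessment):

```python
def classify_predictive_policing(predicts_individuals: bool,
                                 based_on_profiling_or_traits: bool) -> str:
    if predicts_individuals and based_on_profiling_or_traits:
        # Individual risk prediction based solely on profiling or
        # personality traits is the core of the 5(1)(d) prohibition
        return "PROHIBITED (Article 5(1)(d))"
    if predicts_individuals:
        # Any individual-level prediction needs close legal scrutiny
        return "May be PROHIBITED (Article 5(1)(d)) - seek legal review"
    # Area-level pattern prediction only
    return "High-Risk (Annex III, Section 6)"
```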

Critical Distinction

Area-based predictive policing may be high-risk but permissible. Individual-based profiling to predict criminal behaviour is likely prohibited under Article 5(1)(d).


Scenario 7: Biometric Access Control

The Situation

A secure facility deploys fingerprint scanners with AI that:

  • Captures fingerprint images
  • Compares against enrolled employee database
  • Grants or denies access
  • Logs all access attempts

Classification Analysis

Biometric Analysis:

| Criterion | Assessment |
|---|---|
| Biometric identification? | Yes - fingerprint |
| One-to-one verification? | Yes - comparison against an enrolled template |
| One-to-many identification? | No - not searching a database |
| Real-time? | Yes - instant verification |
| Publicly accessible space? | No - secure facility |

Classification Framework:

| System Type | Classification |
|---|---|
| Remote biometric identification (real-time, in publicly accessible spaces) | Prohibited (Article 5(1)(h)) unless an exception applies |
| Remote biometric identification (post-facto) | High-Risk (Annex III, Section 1(a)) |
| Biometric verification (one-to-one) | Minimal Risk (no specific Article 50 obligation; GDPR biometric-data requirements apply) |

Classification: MINIMAL RISK

The fingerprint verification system is one-to-one verification, not identification, and is therefore not prohibited or high-risk under the biometric categories. It is also not an emotion recognition or biometric categorisation system, so Article 50(3) does not apply.
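
For illustration, the classification framework above reduces to three questions. The function below is a hypothetical sketch, not a substitute for the full analysis:

```python
def classify_biometric(one_to_many: bool, real_time: bool,
                       public_space: bool) -> str:
    if not one_to_many:
        # One-to-one verification against an enrolled template
        return "Minimal Risk (GDPR biometric-data rules still apply)"
    if real_time and public_space:
        # Real-time remote biometric identification in publicly
        # accessible spaces; the Article 5(1)(h) prohibition targets
        # law-enforcement use, subject to narrow exceptions
        return "Prohibited (Article 5(1)(h)) unless an exception applies"
    # Post-facto remote biometric identification
    return "High-Risk (Annex III, Section 1(a))"

# The scenario's fingerprint scanner is one-to-one -> Minimal Risk
print(classify_biometric(one_to_many=False, real_time=True, public_space=False))
```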

Compliance Obligations

| Requirement | Implementation |
|---|---|
| GDPR Article 9 (special category data) | Explicit consent or another lawful basis for processing biometric data |
| Documentation | Best practice even where not mandatory |

Classification Summary Table

| Scenario | Classification | Primary Reference |
|---|---|---|
| HR Screening Tool | High-Risk | Annex III, Section 4(a) |
| Customer Service Chatbot | Limited Risk | Article 50(1) |
| Emotion Recognition at Work | Prohibited | Article 5(1)(f) |
| Credit Scoring AI | High-Risk | Annex III, Section 5(b) |
| Medical Diagnostic AI | High-Risk | Annex I, Section A (MDR) |
| Predictive Policing | High-Risk or Prohibited | Annex III, Section 6 / Article 5(1)(d) |
| Biometric Access Control | Minimal Risk | N/A (Article 50(3) does not apply) |

Classification Best Practices

  1. Document thoroughly — Record your analysis, reasoning, and conclusion (a record schema is sketched after this list)
  2. Consider context — Same technology may have different classifications based on use
  3. Check prohibitions first — Article 5 before Annex III
  4. Review exceptions — Article 6(3) exceptions may apply; if you invoke Article 6(3), Article 6(4) requires you to document the assessment before placing the system on the market and to register it under Article 49(2)
  5. Seek expert input — Complex cases may benefit from legal review
  6. Monitor changes — Classification may change with system modifications
  7. When uncertain, assume higher risk — Conservative approach protects compliance
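
To support best practice 1, classification decisions can be kept as structured records. The schema below is a hypothetical example of fields worth capturing, populated with Scenario 1:

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationRecord:
    """One documented classification decision (hypothetical schema)."""
    system_name: str
    intended_purpose: str
    classification: str            # e.g. "High-Risk"
    legal_references: list[str]    # articles and annexes relied upon
    reasoning: str                 # why each step was answered as it was
    exceptions_considered: list[str] = field(default_factory=list)
    reviewed_by: str = ""
    review_date: str = ""          # ISO date of the assessment

record = ClassificationRecord(
    system_name="HR Screening Tool",
    intended_purpose="Screen and rank incoming job applications",
    classification="High-Risk",
    legal_references=["Annex III, Section 4(a)", "Articles 9-15, 43, 49"],
    reasoning="Substantive candidate evaluation; no Article 6(3) exception",
)
```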

What You Learned

Key concepts from this chapter

Classification is the foundational compliance activity—get it right first

Follow a systematic approach: Prohibitions → Annex III → Sectoral legislation → Exceptions → Limited risk

Context and intended purpose drive classification, not just technology

The same AI technology can have different classifications based on deployment context

Prohibited practices have no compliance pathway—they simply cannot be deployed
