aicomply · Lesson · 15 min · Chapter 5 of 9

Prohibited AI Practices

The eight categories of AI practices banned under Article 5.

Learning Objectives

By the end of this chapter, you will be able to:

  • Identify and explain all eight categories of prohibited AI practices
  • Understand the specific conditions that trigger each prohibition
  • Recognize the narrow exceptions that exist for certain prohibitions
  • Conduct an internal audit to identify potentially prohibited AI
  • Understand the severe penalties and enforcement timeline

Article 5 establishes the AI Act's "red lines"—eight categories of AI practices so harmful to fundamental rights, democracy, and human dignity that they are completely banned. There is no compliance pathway for prohibited practices; they simply cannot exist in the EU market.

CRITICAL DEADLINE: Prohibited AI practices must cease by February 2, 2025. This is the first enforcement date under the AI Act. Organizations must complete their prohibition audit immediately.

Understanding the Prohibition Structure

Each prohibition in Article 5 has a specific structure:

  1. Conduct: What the AI system does
  2. Conditions: Circumstances that trigger the prohibition
  3. Harm threshold: Level of harm required (if any)
  4. Exceptions: Narrow carve-outs (some prohibitions only)

The Eight Prohibited Categories

Article 5: Prohibited AI Practices

  1. Subliminal manipulation: techniques operating beyond a person's consciousness
  2. Exploitation: targeting vulnerable groups
  3. Social scoring: scoring with detrimental or disproportionate consequences
  4. Predictive policing: individual crime prediction based solely on profiling
  5. Facial recognition databases: untargeted scraping of facial images
  6. Emotion recognition: at workplaces or educational institutions
  7. Biometric categorisation: based on sensitive characteristics
  8. Real-time biometric ID: public-space surveillance for law enforcement (narrow exceptions)

Most of these practices are banned outright; a few carry only narrow exceptions, detailed below.

1. Subliminal and Manipulative Techniques (Article 5(1)(a))

What is prohibited: AI systems that deploy subliminal techniques beyond a person's consciousness, OR purposefully manipulative or deceptive techniques.

Conditions for prohibition:

  • The objective OR effect must be to materially distort behaviour
  • Must impair the ability to make an informed decision
  • Must cause or be reasonably likely to cause significant harm

Example Scenarios:

| Scenario | Prohibited? | Reasoning |
| --- | --- | --- |
| AI using subliminal audio to encourage purchases | Yes | Subliminal technique distorting consumer behaviour |
| Personalised advertising based on preferences | No | Not subliminal; user is aware of the advertising |
| AI chatbot using dark patterns to prevent cancellation | Likely yes | Manipulative technique distorting behaviour |
| AI recommendation engine showing relevant products | No | Not manipulation; user retains informed choice |

Expert Insight

The key distinction is whether the technique operates "beyond consciousness" or is "purposefully manipulative." Persuasion through transparent means is permitted; hidden manipulation is not.


2. Exploitation of Vulnerabilities (Article 5(1)(b))

What is prohibited: AI systems that exploit vulnerabilities of specific persons or groups due to:

  • Age (children, elderly)
  • Disability
  • Specific social or economic situation

Conditions for prohibition:

  • Must materially distort the behaviour of a person or group
  • Must cause or be reasonably likely to cause significant harm

Example Scenarios:

| Scenario | Prohibited? | Reasoning |
| --- | --- | --- |
| AI targeting gambling ads at debt-ridden individuals | Yes | Exploits economic vulnerability |
| AI toy manipulating children into purchases | Yes | Exploits age-related vulnerability |
| Accessibility AI helping disabled users navigate | No | Assists rather than exploits |
| AI providing financial education to low-income users | No | Helps rather than exploits vulnerability |

3. Social Scoring (Article 5(1)(c))

What is prohibited: AI systems for evaluating or classifying natural persons or groups over a certain period of time based on social behaviour or personal/personality characteristics, where the social score leads to:

  • Detrimental treatment in unrelated contexts, OR
  • Treatment that is unjustified or disproportionate to the behaviour

Key nuances:

  • The "over a certain period of time" qualifier means that one-off classifications may be treated differently from ongoing surveillance-style scoring
  • Applies to both public authorities AND private entities
  • The prohibition is about the consequences of scoring, not scoring itself
  • Legitimate credit scoring based on financial history remains permitted

Example Scenarios:

| Scenario | Prohibited? | Reasoning |
| --- | --- | --- |
| System denying housing based on social media activity | Yes | Detrimental treatment in an unrelated context |
| Credit score based on financial repayment history | No | Related context, proportionate use |
| Employee rating affecting unrelated healthcare access | Yes | Detrimental treatment in an unrelated context |
| Loyalty programme with tiered benefits | No | Proportionate to the customer relationship |

4. Predictive Policing Based Solely on Profiling (Article 5(1)(d))

What is prohibited: AI systems making risk assessments to predict whether a person will commit a criminal offence, based solely on:

  • Profiling, OR
  • Personality traits or characteristics

What is NOT prohibited: AI systems supporting human assessment based on objective, verifiable facts directly linked to criminal activity.

Example Scenarios:

| Scenario | Prohibited? | Reasoning |
| --- | --- | --- |
| AI predicting criminality from personality tests | Yes | Based solely on personality traits |
| AI flagging individuals based on neighbourhood | Yes | Profiling without objective facts |
| AI analysing evidence from an ongoing investigation | No | Based on objective, verifiable facts |
| AI identifying patterns in crime data (not individuals) | No | Not an individual risk assessment |

5. Untargeted Facial Recognition Database Scraping (Article 5(1)(e))

What is prohibited: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from:

  • The internet, OR
  • CCTV footage

Key elements:

  • "Untargeted" is crucial—targeted, lawful collection may be permitted
  • Prevents creation of mass surveillance infrastructure
  • Applies regardless of whether the database is used

Compliance Note

Clearview AI's practice of scraping billions of facial images from social media to build a searchable database is precisely the kind of activity this provision prohibits.


6. Emotion Recognition in Workplace and Education (Article 5(1)(f))

What is prohibited: AI systems to infer emotions in:

  • Workplace settings, OR
  • Educational institutions

Exceptions:

  • Medical purposes (detecting pain, fatigue for safety)
  • Safety purposes (driver drowsiness detection)

Example Scenarios:

| Scenario | Prohibited? | Reasoning |
| --- | --- | --- |
| AI monitoring employee mood during meetings | Yes | Workplace emotion inference |
| AI assessing student engagement via webcam | Yes | Educational emotion inference |
| AI detecting driver fatigue in trucks | No | Safety exception applies |
| AI monitoring patient pain levels in hospital | No | Medical exception applies |

7. Biometric Categorisation of Sensitive Characteristics (Article 5(1)(g))

What is prohibited: Biometric categorisation systems that individually categorise persons by inferring:

  • Race
  • Political opinions
  • Trade union membership
  • Religious or philosophical beliefs
  • Sex life or sexual orientation

Exception: Labelling or filtering of lawfully acquired biometric datasets, or categorisation in law enforcement context.


8. Real-Time Remote Biometric Identification in Public Spaces (Article 5(1)(h))

What is prohibited: Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes.

Narrow Exceptions (requiring strict conditions):

  1. Targeted search for victims of abduction, trafficking, or sexual exploitation
  2. Prevention of specific, substantial, imminent threat to life or terrorist attack
  3. Locating/identifying suspects of serious criminal offences (Annex II list)

Conditions for exceptions (Article 5(2)-(3)):

  • Must be strictly necessary and proportionate
  • Deployed only to confirm the identity of the specifically targeted individual
  • A completed fundamental rights impact assessment is required (per Article 27)
  • Must be registered in the EU database (per Article 49), with an urgency exception allowing post-hoc registration
  • Prior authorisation from a judicial authority or independent administrative authority whose decision is binding (or within 24 hours post-hoc in urgent cases)
  • If authorisation is rejected, use must stop immediately and all data, results, and outputs must be immediately discarded and deleted
  • Temporal, geographic, and personal scope limitations
  • No decision producing an adverse legal effect may be taken based solely on the output of the system

Notification and reporting requirements (Article 5(4)-(7)):

  • Each use must be notified to the relevant market surveillance authority and the national data protection authority
  • Member States may decide to fully or partially authorise real-time RBI use and must lay down detailed national rules, notifying the Commission within 30 days (Article 5(5))
  • National authorities must submit annual reports to the Commission on real-time RBI use (Article 5(6))

Annex II serious offences include terrorism, trafficking in human beings, murder, rape, armed robbery, participation in a criminal organisation, and environmental crime, among others, provided the offence is punishable by a custodial sentence with a maximum of at least four years (the statutory maximum, not the minimum, must reach four years).

Compliance Note

Article 5(8) clarifies that these prohibitions are **in addition to**, not a replacement of, other EU law prohibitions. Point (h) is also without prejudice to Article 9 of the GDPR for the processing of biometric data for purposes other than law enforcement.

Expert Insight

This prohibition addresses the most controversial AI application. The exceptions are so narrow that real-time facial recognition in public will remain rare and heavily controlled.


Prohibition Self-Audit Checklist

Use this checklist to audit your AI systems:

| Question | If yes |
| --- | --- |
| Does your AI use subliminal or hidden manipulative techniques? | Review Article 5(1)(a) |
| Does your AI specifically target vulnerable populations? | Review Article 5(1)(b) |
| Does your AI create "scores" affecting unrelated life areas? | Review Article 5(1)(c) |
| Does your AI predict individual criminal risk from profiles? | Review Article 5(1)(d) |
| Does your AI scrape facial images from the internet or CCTV? | Review Article 5(1)(e) |
| Does your AI infer emotions at work or school? | Review Article 5(1)(f) |
| Does your AI categorise people by sensitive characteristics via biometrics? | Review Article 5(1)(g) |
| Does your AI perform real-time facial recognition in public for law enforcement? | Review Article 5(1)(h) |
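The checklist above can be wired into a lightweight screening script that maps every affirmative answer to the provision to review. A sketch assuming yes/no answers; the question keys and helper name are illustrative, not prescribed by the Act:

```python
# Hypothetical self-audit helper: each key is one checklist question,
# mapped to the Article 5 provision to review if answered "yes".
AUDIT_QUESTIONS = {
    "subliminal_or_manipulative": "Article 5(1)(a)",
    "targets_vulnerable_groups": "Article 5(1)(b)",
    "social_scoring_unrelated_contexts": "Article 5(1)(c)",
    "individual_crime_prediction_from_profiling": "Article 5(1)(d)",
    "untargeted_facial_image_scraping": "Article 5(1)(e)",
    "emotion_inference_work_or_education": "Article 5(1)(f)",
    "biometric_categorisation_sensitive_traits": "Article 5(1)(g)",
    "realtime_remote_biometric_id_public": "Article 5(1)(h)",
}

def flag_provisions(answers: dict[str, bool]) -> list[str]:
    """Return the provisions to review for every 'yes' answer."""
    return [article for key, article in AUDIT_QUESTIONS.items() if answers.get(key)]

# A system that scrapes facial images and infers emotions at work:
print(flag_provisions({
    "untargeted_facial_image_scraping": True,
    "emotion_inference_work_or_education": True,
}))  # ['Article 5(1)(e)', 'Article 5(1)(f)']
```

An empty result means no prohibition is flagged by the checklist, not that the system is compliant overall; high-risk and transparency obligations still apply.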

Penalties: The Maximum Tier

Prohibited practices carry the highest penalty tier under Article 99:

| Entity type | Maximum fine |
| --- | --- |
| Large enterprises | €35 million or 7% of global annual turnover, whichever is higher |
| SMEs and startups | The lower of the two amounts |
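Put concretely, "whichever is higher" means the €35 million floor binds until 7% of worldwide turnover exceeds it. A minimal sketch of the large-enterprise tier only; the turnover figures are hypothetical:

```python
def max_fine(global_turnover_eur: float) -> float:
    """Article 99 top tier: EUR 35M or 7% of worldwide annual turnover,
    whichever is higher (large enterprises; SMEs take the lower amount)."""
    return max(35_000_000, 0.07 * global_turnover_eur)

print(max_fine(100_000_000))    # 35000000 — 7% would be only €7M, so the flat cap applies
print(max_fine(1_000_000_000))  # 70000000.0 — 7% of €1B exceeds €35M
```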

⚠️ No Transition Period: Unlike high-risk AI requirements, there is NO grace period for prohibited practices. All prohibited AI must cease by February 2, 2025—regardless of when deployed.

What You Learned

Key concepts from this chapter

  • **Eight categories** of AI practices are completely prohibited under Article 5
  • Prohibitions take effect **February 2, 2025**—the first AI Act enforcement deadline
  • Violations carry **maximum penalties** of €35M or 7% of global annual turnover
  • Some prohibitions have **narrow exceptions** (emotion recognition, real-time biometrics)
  • **No transition period** exists—all prohibited AI must cease by the deadline

Chapter Complete

AI Act Fundamentals · Chapter 5 of 9