
Prohibited AI Practices Standard

Document Type: Standard
Standard ID: STD-AI-015
Standard Title: Prohibited AI Practices Standard
Version: 1.0
Effective Date: 2025-02-02
Next Review Date: 2026-02-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Draft
Classification: Internal Use Only


TABLE OF CONTENTS

  Document History
  1. Objective
  2. Scope and Applicability
  3. Control Standard
  4. Supporting Procedures
  5. Compliance
  6. Roles and Responsibilities
  7. Exceptions
  8. Enforcement
  9. Key Performance Indicators (KPIs)
  10. Training Requirements
  11. Definitions
  12. Link with AI Act and ISO 42001

DOCUMENT HISTORY

| Version | Date | Author | Changes | Approval Date | Approved By |
|---|---|---|---|---|---|
| 0.1 | 2025-01-10 | AI Act Program Manager | Initial draft | - | - |
| 0.2 | 2025-01-20 | AI Act Program Manager | Added Article 5 subsection mapping | - | - |
| 0.3 | 2025-01-28 | AI Act Program Manager | Incorporated legal review and stakeholder feedback | - | - |
| 1.0 | 2025-02-02 | AI Act Program Manager | Final version approved; GRC restructured | 2025-02-01 | Jane Doe, AI Governance Committee Chair |

OBJECTIVE

This standard defines requirements for identifying, preventing, and monitoring prohibited AI practices under EU AI Act Article 5. The prohibited practices provisions took effect on 2 February 2025, making compliance immediately mandatory.

Primary Goals:

  • Identify and screen all AI systems against Article 5 prohibited practices before deployment
  • Prevent the deployment of AI systems that use subliminal, manipulative, or deceptive techniques
  • Ensure compliance with biometric and emotion recognition prohibitions
  • Prevent social scoring and profiling-only predictive policing systems
  • Continuously monitor deployed AI systems for prohibited practice violations

Regulatory Context:

Article 5 of the EU AI Act establishes an absolute prohibition on certain AI practices deemed unacceptable due to their potential to violate fundamental rights. Violations carry the highest penalty tier under the AI Act: administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. These prohibitions apply to all organisations regardless of role (provider, deployer, importer, distributor) and cannot be mitigated or managed -- they must be prevented entirely.


SCOPE AND APPLICABILITY

2.1 Mandatory Applicability

This standard is mandatory for:

  • All AI systems developed, deployed, or distributed by the organisation
  • All AI systems procured from third-party providers
  • All AI system components and subsystems that interact with natural persons
  • All biometric data processing systems using AI
  • All AI-driven scoring, rating, or classification systems applied to natural persons

2.2 Recommended Applicability

This standard is recommended for:

  • Non-AI automated decision-making systems (to prevent drift into prohibited territory)
  • AI research and development activities (to embed compliance from design phase)
  • Third-party AI integrations and APIs consumed by the organisation

2.3 Prohibited Practices Covered

This standard addresses all eight categories of prohibited AI practices under Article 5(1):

| Reference | Prohibited Practice | Summary |
|---|---|---|
| Art. 5(1)(a) | Subliminal techniques | AI deploying subliminal techniques beyond consciousness to materially distort behaviour |
| Art. 5(1)(b) | Exploitation of vulnerabilities | AI exploiting vulnerabilities due to age, disability, or social/economic situation |
| Art. 5(1)(c) | Social scoring | AI evaluating/classifying persons based on social behaviour, leading to detrimental treatment |
| Art. 5(1)(d) | Predictive policing (profiling-only) | AI assessing individual risk of criminal offence based solely on profiling or personality traits |
| Art. 5(1)(e) | Untargeted facial recognition scraping | Creating/expanding facial recognition databases through untargeted scraping |
| Art. 5(1)(f) | Emotion inference in workplace/education | Inferring emotions in workplace and educational institutions (except medical/safety) |
| Art. 5(1)(g) | Biometric categorisation (protected characteristics) | Categorising persons by race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation |
| Art. 5(1)(h) | Real-time remote biometric identification | Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions) |

2.4 Out of Scope

  • AI systems used exclusively outside the EU (unless their output is used within the EU)
  • Non-AI biometric systems (covered by GDPR and national data protection law)
  • AI systems that have been decommissioned and are no longer operational

CONTROL STANDARD

Control PROH-001: Prohibited Practice Identification and Screening

Control ID: PROH-001
Control Name: Prohibited Practice Identification and Screening
Control Type: Preventive
Control Frequency: Before each AI system deployment; quarterly review
Risk Level: High

Control Objective

Screen all AI systems against Article 5 prohibited practices before deployment to ensure no prohibited AI practice is introduced into the organisation's operations.

Control Requirements

CR-001.1: Prohibited Practices Register

Maintain a comprehensive register of all Article 5 prohibited practices, updated as regulatory guidance evolves.

Register Contents:

| Field | Description | Example |
|---|---|---|
| Prohibition ID | Unique identifier | PROH-ART5-1A |
| Article Reference | EU AI Act article and paragraph | Article 5(1)(a) |
| Practice Description | Plain-language description of the prohibited practice | Deploying subliminal techniques beyond consciousness |
| Indicators | Observable indicators that a system may engage in this practice | Hidden persuasion layers, sub-threshold stimuli |
| Screening Questions | Questions to ask during screening | Does the system use any technique designed to influence users below conscious awareness? |
| Last Updated | Date of last review | 2025-02-02 |
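
For illustration, the register could be held as typed records in code; the sketch below mirrors the fields in the table above. It is a minimal example assuming a Python implementation; the class and field names are assumptions for this illustration, not a schema mandated by this standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProhibitionEntry:
    """One row of the prohibited practices register (fields mirror the table above)."""
    prohibition_id: str           # e.g. "PROH-ART5-1A"
    article_reference: str        # e.g. "Article 5(1)(a)"
    practice_description: str     # plain-language description
    indicators: list[str]         # observable indicators
    screening_questions: list[str]
    last_updated: date            # date of last review

# Example entry taken from the table above.
register = [
    ProhibitionEntry(
        prohibition_id="PROH-ART5-1A",
        article_reference="Article 5(1)(a)",
        practice_description="Deploying subliminal techniques beyond consciousness",
        indicators=["Hidden persuasion layers", "Sub-threshold stimuli"],
        screening_questions=[
            "Does the system use any technique designed to influence "
            "users below conscious awareness?"
        ],
        last_updated=date(2025, 2, 2),
    )
]
```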

Mandatory Actions:

  • Maintain a prohibited practices register aligned with Article 5
  • Screen all new AI systems against prohibited practices before deployment
  • Document screening results for each AI system
  • Escalate potential violations to AI Governance Committee immediately
  • Re-screen existing AI systems when Article 5 guidance or interpretations change
  • Maintain screening templates and checklists

CR-001.2: Pre-Deployment Screening Process

Screen every AI system before deployment using a structured screening process.

Screening Steps:

| Step | Activity | Responsible | Output |
|---|---|---|---|
| 1 | Identify AI system purpose and functionality | AI System Owner | System description |
| 2 | Map system against each Article 5 prohibition | AI Act Program Manager | Screening matrix |
| 3 | Assess risk indicators for each prohibition | AI Act Program Manager | Risk assessment |
| 4 | Document screening outcome (pass/fail/escalate) | AI Act Program Manager | Screening record |
| 5 | Obtain sign-off for deployment | AI Governance Committee | Approval record |

Screening Outcomes:

| Outcome | Definition | Action Required |
|---|---|---|
| Pass | No prohibited practice indicators identified | Proceed to deployment |
| Escalate | Potential prohibited practice indicators require further analysis | Refer to Legal and AI Governance Committee |
| Fail | Prohibited practice identified | Halt deployment immediately; do not deploy |
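
The three outcomes form a simple deployment gate. The sketch below illustrates that logic, assuming a Python implementation; the enum and function names are hypothetical, not part of this standard.

```python
from enum import Enum

class ScreeningOutcome(Enum):
    PASS = "pass"          # no prohibited practice indicators identified
    ESCALATE = "escalate"  # potential indicators; needs Legal / Committee review
    FAIL = "fail"          # prohibited practice identified

def deployment_gate(outcome: ScreeningOutcome) -> str:
    """Map a screening outcome to the action required by the table above."""
    if outcome is ScreeningOutcome.PASS:
        return "Proceed to deployment"
    if outcome is ScreeningOutcome.ESCALATE:
        return "Refer to Legal and AI Governance Committee"
    # FAIL: Article 5 prohibitions are absolute; never deploy.
    return "Halt deployment immediately; do not deploy"
```

Note that only a Pass outcome permits deployment; Escalate is a hold state, not an approval.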

Evidence Required:

  • Prohibited practices register
  • Screening records and results
  • Escalation records
  • Screening templates and checklists
  • AI system inventory with screening status

Audit Verification:

  • Verify prohibited practices register is maintained and current
  • Confirm all AI systems screened before deployment
  • Check screening documentation is complete for each system
  • Validate escalation procedures followed where applicable
  • Verify 100% screening coverage

Control PROH-002: Subliminal/Manipulative Technique Prevention

Control ID: PROH-002
Control Name: Subliminal and Manipulative Technique Prevention
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical

Control Objective

Ensure no AI system deploys subliminal techniques beyond a person's consciousness or manipulative or deceptive techniques that materially distort behaviour, in compliance with Article 5(1)(a) and Article 5(1)(b).

Control Requirements

CR-002.1: Subliminal Technique Assessment (Article 5(1)(a))

Assess all AI systems for subliminal techniques that operate below conscious awareness.

Subliminal Technique Indicators:

| Indicator | Description | Detection Method |
|---|---|---|
| Sub-threshold stimuli | Visual, auditory, or other stimuli below the perception threshold | Technical review of output modalities |
| Hidden persuasion layers | Embedded persuasion mechanisms not apparent to the user | Architecture review and output analysis |
| Unconscious behavioural nudging | Techniques designed to influence without awareness | Behavioural analysis of system interactions |
| Covert data-driven personalisation | Personalisation exploiting unconscious biases | Algorithm review and A/B testing analysis |

Mandatory Actions:

  • Assess all user-facing AI systems for subliminal technique risk
  • Review AI system architecture for hidden influence mechanisms
  • Test AI outputs for sub-threshold or imperceptible influence patterns
  • Document assessment findings and design decisions
  • Prohibit deployment of any system with identified subliminal techniques

CR-002.2: Vulnerability Exploitation Assessment (Article 5(1)(b))

Assess all AI systems for manipulative or deceptive techniques that exploit the vulnerabilities of specific groups due to age, disability, or social or economic situation.

Vulnerability Exploitation Indicators:

| Vulnerability Group | Examples | Prohibited Exploitation |
|---|---|---|
| Age-related | Children, elderly persons | Exploiting limited understanding or cognitive decline |
| Disability-related | Persons with cognitive, physical, or sensory disabilities | Exploiting reduced capacity to understand or resist |
| Social/economic situation | Persons in financial distress or social isolation | Exploiting desperation or limited alternatives |

Mandatory Actions:

  • Identify whether AI system interacts with vulnerable groups
  • Assess persuasion mechanisms for exploitative characteristics
  • Test for disproportionate impact on vulnerable users
  • Review AI-generated content for deceptive characteristics
  • Document vulnerability impact assessments

Evidence Required:

  • Manipulation risk assessments
  • Design review records
  • Testing results for influence patterns
  • AI system design documentation
  • Content review records
  • Vulnerability impact assessments

Audit Verification:

  • Verify manipulation risk assessments conducted for all user-facing AI
  • Confirm subliminal technique testing performed
  • Check vulnerability impact assessments documented
  • Validate no systems deployed with identified prohibited techniques

Control PROH-003: Biometric and Emotion Recognition Controls

Control ID: PROH-003
Control Name: Biometric and Emotion Recognition Compliance Controls
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical

Control Objective

Ensure compliance with prohibitions on untargeted facial recognition scraping (Article 5(1)(e)), emotion inference in workplace and education (Article 5(1)(f)), biometric categorisation by protected characteristics (Article 5(1)(g)), and real-time remote biometric identification in public spaces for law enforcement (Article 5(1)(h)).

Control Requirements

CR-003.1: Biometric System Inventory

Maintain a comprehensive inventory of all AI systems that process biometric data.

Inventory Fields:

| Field | Description | Required |
|---|---|---|
| System ID | Unique identifier | Yes |
| System Name | Descriptive name | Yes |
| Biometric Type | Facial, voice, gait, fingerprint, etc. | Yes |
| Processing Purpose | Identification, verification, categorisation, emotion inference | Yes |
| Data Sources | Where biometric data is obtained | Yes |
| Target Population | Who is subject to biometric processing | Yes |
| Article 5 Assessment | Which Art. 5 prohibitions were assessed, and the outcome | Yes |
| Lawful Basis | Legal basis for any permitted biometric processing | Yes |
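
As an illustration, an inventory entry could be represented as a record with a completeness check, since every field in the table above is mandatory. This is a sketch assuming a Python implementation; the class and helper names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BiometricSystemRecord:
    """One inventory entry; fields mirror the table above and are all mandatory."""
    system_id: str             # unique identifier
    system_name: str           # descriptive name
    biometric_type: str        # facial, voice, gait, fingerprint, ...
    processing_purpose: str    # identification, verification, categorisation, ...
    data_sources: str          # where biometric data is obtained
    target_population: str     # who is subject to biometric processing
    article_5_assessment: str  # prohibitions assessed and outcome
    lawful_basis: str          # legal basis for any permitted processing

def is_complete(record: BiometricSystemRecord) -> bool:
    """Flag incomplete entries: every field must carry a non-blank value."""
    return all(value.strip() for value in vars(record).values())
```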

CR-003.2: Untargeted Facial Recognition Scraping Prevention (Article 5(1)(e))

Prevent creation or expansion of facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Mandatory Actions:

  • Prohibit procurement of facial recognition databases built through untargeted scraping
  • Verify data provenance for all facial recognition training data
  • Audit third-party facial recognition providers for data sourcing compliance
  • Contractually require Article 5(1)(e) compliance from all biometric data suppliers

CR-003.3: Emotion Inference Prohibition in Workplace/Education (Article 5(1)(f))

Prohibit AI systems that infer emotions of natural persons in workplace and educational institution settings, except where the system is intended for medical or safety reasons.

Prohibited Uses:

| Context | Prohibited Use | Permitted Exception |
|---|---|---|
| Workplace | Monitoring employee emotional states for performance, productivity, or engagement | Medical or safety purposes (e.g., detecting fatigue in safety-critical roles) |
| Education | Monitoring student emotional states for attention, engagement, or behaviour assessment | Medical purposes (e.g., detecting distress for wellbeing) |
| Recruitment | Inferring candidate emotions during interviews | None |

CR-003.4: Biometric Categorisation by Protected Characteristics (Article 5(1)(g))

Prohibit biometric categorisation systems that individually categorise natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

Mandatory Actions:

  • Review all biometric systems for categorisation by protected characteristics
  • Prohibit biometric categorisation that outputs or infers protected characteristic categories
  • Ensure biometric systems used for lawful purposes do not indirectly produce prohibited categorisations
  • Document permitted biometric use cases and their boundaries

CR-003.5: Real-Time Remote Biometric Identification (Article 5(1)(h))

Prohibit real-time remote biometric identification systems in publicly accessible spaces for law enforcement, subject to narrow exceptions requiring prior judicial or administrative authorisation.

Note: This prohibition primarily applies to law enforcement authorities. Organisations should ensure they do not provide, supply, or facilitate such systems without appropriate legal basis and authorisation.

Evidence Required:

  • Biometric system inventory
  • Use case documentation and lawful basis records
  • Access control records
  • Prohibition enforcement records
  • Data source verification records
  • Third-party compliance audit records

Audit Verification:

  • Verify biometric system inventory is complete and current
  • Confirm each biometric system assessed against Article 5 prohibitions
  • Check data provenance records for facial recognition systems
  • Validate no prohibited emotion inference in workplace/education contexts
  • Verify no biometric categorisation by protected characteristics

Control PROH-004: Social Scoring and Predictive Policing Prevention

Control ID: PROH-004
Control Name: Social Scoring and Predictive Policing Prevention
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical

Control Objective

Ensure no AI system performs social scoring (Article 5(1)(c)) or individual risk assessment based solely on profiling or personality traits for predictive policing purposes (Article 5(1)(d)).

Control Requirements

CR-004.1: Social Scoring Prevention (Article 5(1)(c))

Prevent AI systems from evaluating or classifying natural persons or groups based on social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was generated, or treatment that is unjustified or disproportionate to the social behaviour.

Social Scoring Indicators:

| Indicator | Description | Example |
|---|---|---|
| Cross-context data aggregation | Combining data from unrelated contexts to produce a composite score | Using social media activity to determine creditworthiness |
| Generalised trustworthiness scoring | Producing a general trustworthiness or reliability score for a person | Citizen scoring systems |
| Behavioural classification leading to penalties | Classifying persons by behaviour, resulting in negative treatment | Penalising persons for lawful associations or activities |
| Disproportionate treatment | Treatment that is disproportionate to the original behaviour | Denying public services based on minor social infractions |
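
The first indicator (cross-context aggregation) lends itself to a mechanical pre-check: compare the contexts that feed a score against the context in which the score is applied. A minimal sketch follows, assuming data sources carry context labels; the labels and function name are illustrative, and a positive result is a screening flag to escalate, not a legal conclusion.

```python
def uses_cross_context_data(input_contexts: set[str], decision_context: str) -> bool:
    """Flag a scoring system whose inputs come from contexts other than the
    one in which the score is applied (the first indicator in the table above)."""
    return any(ctx != decision_context for ctx in input_contexts)

# Example from the table above: social media activity feeding a credit decision.
print(uses_cross_context_data({"social_media", "credit_history"}, "credit"))  # True
```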

Mandatory Actions:

  • Review all scoring, rating, and classification AI systems for social scoring characteristics
  • Ensure scoring systems do not aggregate data across unrelated contexts
  • Verify that any AI-driven assessments of persons do not lead to unjustified or disproportionate treatment
  • Prohibit generalised trustworthiness or social credit scoring

CR-004.2: Predictive Policing Prevention (Article 5(1)(d))

Prevent AI systems from making, or contributing to, individual risk assessments of natural persons that predict the risk of committing a criminal offence based solely on profiling or on the assessment of personality traits and characteristics. This prohibition does not apply to AI systems used to support the human assessment of a person's involvement in criminal activity where that assessment is based on objective and verifiable facts directly linked to the criminal activity.

Assessment Criteria:

| Criterion | Compliant | Non-Compliant |
|---|---|---|
| Basis for assessment | Objective, verifiable facts linked to criminal activity | Profiling or personality traits alone |
| Human involvement | AI supports human decision-making | AI makes autonomous determinations |
| Data used | Factual evidence of specific conduct | Demographic data, personality assessments, behavioural predictions |
| Scope | Specific investigation with a factual basis | General population risk screening |
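
The four criteria can be captured as a reviewer checklist where any failed criterion triggers escalation. The sketch below is one way to encode that, assuming a Python implementation; the class, field, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CriminalRiskAssessmentReview:
    """Reviewer's answers for the four criteria in the table above."""
    based_on_objective_facts: bool   # verifiable facts linked to criminal activity
    supports_human_decision: bool    # AI supports, not replaces, human judgement
    uses_factual_conduct_data: bool  # evidence of specific conduct, not profiling
    specific_investigation: bool     # specific investigation, not general screening

def requires_escalation(review: CriminalRiskAssessmentReview) -> bool:
    """Any failed criterion indicates a potential Article 5(1)(d) issue to escalate."""
    return not (
        review.based_on_objective_facts
        and review.supports_human_decision
        and review.uses_factual_conduct_data
        and review.specific_investigation
    )
```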

Mandatory Actions:

  • Review all AI systems that assess risk related to natural persons
  • Verify objective factual basis for any AI-driven risk assessments
  • Prohibit personality-trait-only or profiling-only assessments for criminal risk
  • Ensure human oversight for any risk assessment related to criminal activity
  • Document methodology and factual basis for all AI-driven person assessments

Evidence Required:

  • Scoring system reviews
  • Methodology documentation
  • Factual basis verification records
  • Classification system audits
  • Assessment design documentation

Audit Verification:

  • Verify all scoring/classification systems reviewed for social scoring
  • Confirm no cross-context data aggregation for person scoring
  • Check factual basis documented for any criminal risk assessment AI
  • Validate methodology documentation is complete
  • Verify no profiling-only assessments are in use

Control PROH-005: Ongoing Monitoring and Compliance Review

Control ID: PROH-005
Control Name: Ongoing Prohibited Practices Monitoring and Compliance Review
Control Type: Detective
Control Frequency: Continuous monitoring; annual compliance review
Risk Level: High

Control Objective

Continuously monitor deployed AI systems for prohibited practice violations and conduct periodic compliance reviews to ensure sustained adherence to Article 5 requirements.

Control Requirements

CR-005.1: Continuous Monitoring

Implement monitoring mechanisms to detect indicators of prohibited practices in deployed AI systems.

Monitoring Areas:

| Area | Monitoring Method | Frequency | Responsible |
|---|---|---|---|
| AI system behaviour | Automated output monitoring and analysis | Continuous | AI Operations Team |
| User complaints | Complaint analysis for prohibited practice indicators | Continuous | Customer Support |
| Third-party AI changes | Vendor update review for new prohibited practice risks | Per update | AI Act Program Manager |
| Regulatory guidance | Track new guidance and interpretations of Article 5 | Monthly | Legal / AI Act Program Manager |
| Whistleblower reports | Monitor internal reporting channels | Continuous | Compliance Officer |

CR-005.2: Annual Compliance Review

Conduct a comprehensive annual review of all AI systems against Article 5 prohibitions.

Annual Review Process:

| Step | Activity | Responsible | Timeline |
|---|---|---|---|
| 1 | Update prohibited practices register with latest guidance | AI Act Program Manager | Month 1 |
| 2 | Re-screen all deployed AI systems | AI Act Program Manager | Months 1-2 |
| 3 | Review all biometric systems | AI Act Program Manager | Month 2 |
| 4 | Review all scoring/classification systems | AI Act Program Manager | Month 2 |
| 5 | Assess third-party AI compliance | AI Act Program Manager | Month 3 |
| 6 | Compile findings and report | AI Act Program Manager | Month 3 |
| 7 | Present to AI Governance Committee | AI Act Program Manager | Month 3 |
| 8 | Implement corrective actions | Relevant system owners | Months 3-4 |

CR-005.3: Incident Response for Prohibited Practice Discoveries

If a prohibited practice is discovered in a deployed system, take immediate action.

Incident Response Steps:

| Step | Action | Timeline | Responsible |
|---|---|---|---|
| 1 | Immediately suspend the AI system | Within 1 hour | AI Operations Team |
| 2 | Notify AI Act Program Manager and Legal | Within 2 hours | AI Operations Team |
| 3 | Notify AI Governance Committee | Within 4 hours | AI Act Program Manager |
| 4 | Conduct root cause investigation | Within 5 business days | Investigation Team |
| 5 | Determine regulatory notification obligations | Within 5 business days | Legal |
| 6 | Implement corrective actions | Per investigation findings | System Owner |
| 7 | Verify remediation before any re-deployment | Before re-deployment | AI Act Program Manager |
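
The time-boxed notification steps can be tracked mechanically against the discovery timestamp. The sketch below is a minimal illustration of deadline tracking for the first three steps, assuming a Python implementation; the dictionary keys and function name are hypothetical, while the time windows come from the table above.

```python
from datetime import datetime, timedelta

# Notification deadlines from the table above, measured from discovery time.
NOTIFICATION_DEADLINES = {
    "suspend_system": timedelta(hours=1),
    "notify_program_manager_and_legal": timedelta(hours=2),
    "notify_governance_committee": timedelta(hours=4),
}

def overdue_steps(discovered_at: datetime, completed: dict[str, datetime]) -> list[str]:
    """Return the steps whose deadline has passed without a completion timestamp."""
    now = datetime.now()
    return [
        step
        for step, window in NOTIFICATION_DEADLINES.items()
        if step not in completed and now > discovered_at + window
    ]
```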

Mandatory Actions:

  • Implement monitoring mechanisms for detecting prohibited practice indicators
  • Conduct annual comprehensive compliance reviews of all AI systems against Article 5
  • Report monitoring findings to AI Governance Committee quarterly
  • Investigate and document any suspected prohibited practice violation
  • Maintain incident response procedures for prohibited practice discoveries
  • Track regulatory guidance updates and evolving interpretations of Article 5

Evidence Required:

  • Monitoring logs and dashboards
  • Annual compliance review reports
  • AI Governance Committee minutes and reports
  • Investigation records
  • Incident response records
  • Regulatory tracking logs

Audit Verification:

  • Verify continuous monitoring is operational
  • Confirm annual compliance review completed
  • Check investigation records for any suspected violations
  • Validate AI Governance Committee received quarterly reports
  • Verify regulatory guidance tracking is current

SUPPORTING PROCEDURES

This standard is implemented through the following detailed procedures:

Procedure PROC-AI-PROH-001: Prohibited Practice Screening Procedure

Purpose: Define the step-by-step process for screening AI systems against Article 5 prohibited practices
Owner: AI Act Program Manager
Implements: Controls PROH-001, PROH-002, PROH-003, PROH-004

Procedure Steps:

  1. Receive AI system for screening (new or change request)
  2. Complete prohibited practice screening checklist
  3. Assess against each Article 5(1)(a)-(h) prohibition
  4. Document screening results
  5. Escalate if potential violation identified
  6. Obtain sign-off for deployment (if passed)

Outputs:

  • Completed screening checklists
  • Screening results documentation
  • Escalation records (where applicable)
  • Deployment approval records

Procedure PROC-AI-PROH-002: Prohibited Practice Monitoring and Review Procedure

Purpose: Define the process for ongoing monitoring and the annual compliance review
Owner: AI Act Program Manager
Implements: Control PROH-005

Procedure Steps:

  1. Configure and maintain monitoring mechanisms
  2. Review monitoring outputs weekly
  3. Investigate alerts and anomalies
  4. Conduct annual compliance review per CR-005.2
  5. Compile and present findings to AI Governance Committee
  6. Track and implement corrective actions

Outputs:

  • Monitoring reports
  • Investigation records
  • Annual compliance review report
  • Corrective action tracking

COMPLIANCE

5.1 Compliance Monitoring

Monitoring Approach: Continuous automated monitoring supplemented by monthly manual reviews and quarterly comprehensive assessments, with annual full compliance review.

Compliance Metrics:

| Metric | Target | Measurement Method | Frequency | Owner |
|---|---|---|---|---|
| Prohibited Practice Screening Rate | 100% | % of AI systems screened before deployment | Quarterly | AI Act Program Manager |
| Prohibited Practice Incident Rate | 0 | Count of prohibited practice incidents | Quarterly | AI Act Program Manager |
| Compliance Review Completion | 100% | % of annual reviews completed on time | Annually | AI Act Program Manager |
| Biometric System Inventory Coverage | 100% | % of biometric systems inventoried | Quarterly | AI Act Program Manager |
| Monitoring System Uptime | ≥ 99% | % of time monitoring systems are operational | Monthly | AI Operations Team |

Monitoring Tools:

  • AI System Inventory and Screening Registry
  • Prohibited Practices Monitoring Dashboard
  • Monthly compliance reports
  • Quarterly AI Governance Committee reviews

5.2 Internal Audit Requirements

Audit Frequency: Annually (minimum); ad hoc following any suspected violation

Audit Scope:

  • Prohibited practice screening completeness and quality
  • Biometric system inventory accuracy
  • Social scoring and predictive policing control effectiveness
  • Subliminal/manipulative technique assessment adequacy
  • Monitoring system effectiveness
  • Controls effectiveness (PROH-001 through PROH-005)

Audit Activities:

  • Review 100% of AI system screening records
  • Verify biometric system inventory against actual deployed systems
  • Test monitoring system detection capabilities
  • Review escalation and incident records
  • Interview key personnel on screening procedures

Audit Outputs:

  • Annual Prohibited Practices Compliance Audit Report
  • Findings and recommendations
  • Corrective action plans for deficiencies

5.3 External Audit / Regulatory Inspection

Preparation:

  • Maintain audit-ready prohibited practices documentation at all times
  • Designate AI Act Program Manager and Legal as regulatory liaisons
  • Prepare standard response procedures for authority requests

Provide to Auditors/Regulators:

  • Prohibited practices register
  • AI system screening records
  • Biometric system inventory
  • Monitoring logs and reports
  • Compliance review reports
  • Internal audit reports
  • Evidence of controls execution

Authority Request Response:

  • Acknowledge request within 1 business day
  • Provide requested documentation within 5 business days
  • Coordinate through Legal and AI Act Program Manager
  • Document all interactions with authorities

Regulatory Penalty Context: Non-compliance with Article 5 prohibited practices carries the highest penalty tier under the EU AI Act: administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover of the preceding financial year, whichever is higher. This underscores the critical importance of maintaining complete and auditable compliance documentation.


ROLES AND RESPONSIBILITIES

6.1 RACI Matrix

| Activity | AI Act Program Manager | Legal | AI Operations Team | AI System Owners | AI Governance Committee |
|---|---|---|---|---|---|
| Prohibited Practice Screening | R/A | C | C | R | I |
| Subliminal/Manipulative Assessment | R | C | C | R | I |
| Biometric System Inventory | R/A | C | R | R | I |
| Social Scoring/Predictive Policing Review | R/A | R | C | C | I |
| Ongoing Monitoring | A | I | R | C | I |
| Annual Compliance Review | R/A | C | R | C | A |
| Incident Response | R | R | R | C | A |
| Regulatory Engagement | C | R/A | I | I | A |

RACI Legend:

  • R = Responsible (does the work)
  • A = Accountable (ultimately answerable)
  • C = Consulted (provides input)
  • I = Informed (kept up-to-date)

6.2 Role Descriptions

AI Act Program Manager

  • Primary Responsibility: Owns the prohibited practices compliance framework, conducts screenings, and coordinates compliance reviews
  • Key Activities:
    • Maintains prohibited practices register
    • Conducts and oversees pre-deployment screening
    • Leads annual compliance reviews
    • Reports to AI Governance Committee
    • Coordinates incident response for prohibited practice discoveries
  • Required Competencies: EU AI Act Article 5 expertise, AI risk assessment, compliance management

Legal

  • Primary Responsibility: Provides legal interpretation of Article 5, supports regulatory engagement
  • Key Activities:
    • Advises on Article 5 interpretation and application
    • Reviews escalated screening outcomes
    • Manages regulatory authority engagement
    • Tracks evolving case law and guidance
  • Required Competencies: EU AI Act legal expertise, data protection law, regulatory affairs

AI Operations Team

  • Primary Responsibility: Implements monitoring mechanisms, supports screening, executes incident response
  • Key Activities:
    • Deploys and maintains monitoring systems
    • Supports technical screening assessments
    • Executes system suspension in incident response
    • Maintains biometric system inventory
  • Required Competencies: AI system operations, monitoring tools, incident response

AI System Owners

  • Primary Responsibility: Ensure their AI systems comply with Article 5, participate in screening
  • Key Activities:
    • Submit AI systems for screening
    • Provide system documentation for assessment
    • Implement corrective actions
    • Report suspected prohibited practice indicators
  • Required Competencies: Understanding of their AI system functionality, Article 5 awareness

AI Governance Committee

  • Primary Responsibility: Oversight and accountability for prohibited practices compliance
  • Key Activities:
    • Reviews quarterly compliance reports
    • Approves deployment of systems with elevated screening outcomes
    • Oversees incident response for critical prohibited practice discoveries
    • Approves corrective action plans
  • Required Competencies: AI governance, strategic risk management, EU AI Act oversight

EXCEPTIONS

7.1 Exception Philosophy

Prohibited AI practices under Article 5 are absolute prohibitions established by EU law. The organisation's ability to grant exceptions is extremely limited and applies only to process-related aspects, never to the substantive prohibitions themselves.


7.2 Allowed Exceptions

The following process-related exceptions may be granted with proper justification and approval:

| Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls |
|---|---|---|---|---|
| Extended Screening Timeline | Technical complexity requires additional analysis time | 15 business days | AI Act Program Manager | System not deployed until screening is complete |
| Alternative Screening Method | Standard screening method not suitable for system type | Permanent | AI Act Program Manager + Legal | Document rationale; verify equivalent rigour |

7.3 Prohibited Exceptions

The following exceptions cannot be granted under any circumstances:

  • Deploying a system identified as a prohibited practice -- Article 5 prohibitions are absolute; no business justification can override them
  • Skipping prohibited practice screening -- All AI systems must be screened; no exceptions
  • Waiving biometric system inventory requirements -- All biometric AI systems must be inventoried and assessed
  • Exempting third-party AI from screening -- Third-party AI systems must be screened equally
  • Delaying incident response for a discovered prohibited practice -- Immediate suspension is mandatory

7.4 Exception Request Process

Step 1: Submit Exception Request

  • Complete Exception Request Form (FORM-AI-EXCEPTION-001)
  • Include business justification (process exception only)
  • Propose compensating controls
  • Specify duration requested
  • Attach risk assessment

Step 2: Risk Assessment

  • AI Act Program Manager assesses risk of granting process exception
  • Legal reviews to confirm exception does not compromise Article 5 compliance
  • Documents residual risk

Step 3: Approval

  • Route to appropriate approval authority based on exception type
  • AI Act Program Manager approval: Minor process exceptions
  • AI Act Program Manager + Legal: Significant process exceptions
  • AI Governance Committee: Any exception that could affect compliance posture

Step 4: Documentation and Monitoring

  • Document exception in Exception Register
  • Assign exception owner
  • Set review date
  • Monitor compensating controls
  • Report exceptions quarterly to AI Governance Committee

Step 5: Exception Review and Closure

  • Review exception at specified review date
  • Assess if exception is still needed
  • Close exception when standard process resumes
  • Document lessons learned

ENFORCEMENT

8.1 Non-Compliance Consequences

| Violation | Severity | Consequence | Remediation Required |
|---|---|---|---|
| Deploying a prohibited AI system | Critical | Immediate system suspension; executive escalation; potential regulatory notification | Remove system; root cause analysis; regulatory engagement |
| Failing to screen an AI system before deployment | Critical | System suspension until screening completed; formal investigation | Complete screening immediately; disciplinary review |
| Incomplete biometric system inventory | High | Escalation to AI Governance Committee | Complete inventory within 10 business days |
| Failure to conduct annual compliance review | High | Escalation to AI Governance Committee | Complete review within 15 business days |
| Delayed incident response | High | Formal investigation | Immediate corrective action; process improvement |
| Incomplete screening documentation | Medium | Written warning; corrective action required | Complete documentation within 5 business days |

8.2 Escalation Procedures

Level 1: AI Act Program Manager

  • Minor documentation deficiencies
  • Screening delays < 3 days
  • Action: Written warning, corrective action required

Level 2: AI Act Program Manager + Legal

  • Repeated screening failures
  • Potential prohibited practice indicators identified
  • Action: Formal review, corrective action plan, AI Governance Committee notification

Level 3: AI Governance Committee

  • Confirmed or suspected prohibited practice in deployed system
  • Systemic screening failures
  • Action: Immediate system suspension, investigation, management notification

Level 4: Executive Management + Legal

  • Confirmed prohibited practice violation with regulatory exposure
  • Regulatory inquiry or enforcement action
  • Significant legal or reputational risk
  • Action: Executive crisis management, legal strategy, regulatory engagement, consider voluntary self-reporting

8.3 Immediate Escalation Triggers

Escalate immediately to AI Governance Committee + Legal if:

  • A deployed AI system is identified as potentially engaging in a prohibited practice
  • A regulatory authority contacts the organisation regarding Article 5 compliance
  • A whistleblower report alleges a prohibited practice
  • A third-party AI provider is found to have violated Article 5
  • Media reporting identifies a potential prohibited practice in the organisation's AI systems

8.4 Disciplinary Actions

Individuals responsible for prohibited practice violations may be subject to:

  • Verbal or written warning
  • Mandatory retraining on Article 5 requirements
  • Performance improvement plan
  • Reassignment of responsibilities
  • Suspension (with pay during investigation)
  • Termination (for knowingly deploying a prohibited AI system or deliberately bypassing screening)

Factors Considered:

  • Intent (knowing violation vs. honest mistake)
  • Severity of violation
  • Impact (actual or potential, including fundamental rights impact)
  • Cooperation with remediation and investigation
  • Prior violation history

KEY PERFORMANCE INDICATORS (KPIs)

9.1 Prohibited Practices KPIs

| KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To |
|---|---|---|---|---|---|---|---|
| KPI-PROH-001 | Prohibited Practice Screening Rate | % of AI systems screened for prohibited practices before deployment | 100% | (# screened / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-PROH-002 | Prohibited Practice Incident Rate | Number of prohibited practice incidents detected | 0 | Count of incidents | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-PROH-003 | Compliance Review Completion | % of annual compliance reviews completed on time | 100% | (# completed on time / # total) x 100 | Annually | AI Act Program Manager | AI Governance Committee |

9.2 KPI Dashboards and Reporting

Real-Time Dashboard (AI Act Program Manager access)

  • Current screening status of all AI systems
  • Biometric system inventory status
  • Monitoring alert status
  • Open investigations

Monthly Management Report

  • KPI-PROH-001, KPI-PROH-002
  • Screening activity summary
  • Monitoring findings summary
  • Issues and risks

Quarterly AI Governance Committee Report

  • All KPIs
  • Screening outcome summary
  • Monitoring findings and actions
  • Internal audit findings (if conducted)
  • Exception register review
  • Regulatory guidance updates

Annual Executive Report

  • Full-year KPI performance
  • Annual compliance review findings
  • Prohibited practices compliance maturity assessment
  • Strategic recommendations
  • Regulatory outlook and emerging risks

9.3 KPI Thresholds and Alerts

| KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action |
|---|---|---|---|---|
| Screening Rate | 100% | 95-99% | < 95% | Red: immediate escalation to AI Governance Committee Chair |
| Incident Rate | 0 | 1 (suspected, under investigation) | ≥ 1 (confirmed) | Red: immediate escalation to AI Governance Committee + Legal + Executive Management |
| Compliance Review Completion | 100% | On track but delayed | Overdue by > 30 days | Red: escalation to AI Governance Committee |
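
As a worked example, the screening-rate KPI combines the formula from section 9.1 with the thresholds above. The sketch below illustrates both, assuming a Python implementation; the function names are hypothetical.

```python
def screening_rate(screened: int, total: int) -> float:
    """KPI-PROH-001: (# screened / # total) x 100."""
    return 100.0 * screened / total if total else 0.0

def screening_rate_status(rate: float) -> str:
    """Map the rate to the Green/Yellow/Red thresholds in the table above."""
    if rate >= 100.0:
        return "Green"
    if rate >= 95.0:
        return "Yellow"   # 95-99%: warning
    return "Red"          # < 95%: immediate escalation to the Committee Chair

# Example: 47 of 50 systems screened gives 94%, a Red status.
print(screening_rate_status(screening_rate(47, 50)))
```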

TRAINING REQUIREMENTS

10.1 Training Program Overview

All personnel involved in AI system development, deployment, procurement, or oversight must complete training on Article 5 prohibited practices to ensure they can identify and prevent prohibited AI practices.


10.2 Role-Based Training Requirements

| Role | Training Course | Duration | Content | Frequency | Assessment Required |
|---|---|---|---|---|---|
| AI Act Program Manager | Prohibited Practices Expert Training | 8 hours | All Article 5 prohibitions in depth; screening methodology; incident response; regulatory engagement | Initial + annually | Yes - written exam (≥ 90%) |
| Legal | Prohibited Practices Legal Training | 8 hours | Article 5 legal interpretation; case law; enforcement; regulatory engagement | Initial + annually | Yes - written exam (≥ 90%) |
| AI Operations Team | Prohibited Practices Operational Training | 4 hours | Article 5 overview; monitoring implementation; incident response procedures | Initial + annually | Yes - knowledge check (≥ 80%) |
| AI System Owners | Prohibited Practices Awareness Training | 4 hours | Article 5 overview; screening process; reporting obligations | Initial + annually | Yes - knowledge check (≥ 80%) |
| All Staff | AI Prohibited Practices Awareness | 1 hour | Article 5 overview; how to recognise and report concerns | At onboarding + annually | Yes - knowledge check (≥ 80%) |

10.3 Training Content by Topic

Article 5 Prohibited Practices

  • Complete overview of all eight prohibited practice categories
  • Real-world examples and case studies for each prohibition
  • How to identify indicators of prohibited practices
  • Screening process and methodology

Biometric AI Compliance

  • Biometric data processing under the AI Act
  • Prohibited biometric uses vs. permitted uses
  • Emotion inference boundaries
  • Biometric categorisation rules

Social Scoring and Profiling

  • What constitutes social scoring under Article 5(1)(c)
  • Predictive policing boundaries under Article 5(1)(d)
  • Compliant vs. non-compliant scoring and classification approaches

Incident Response

  • How to report suspected prohibited practices
  • Incident response timeline and responsibilities
  • Regulatory notification obligations

10.4 Training Delivery Methods

Initial Training:

  • Instructor-led classroom or virtual training
  • Includes real-world case studies and scenario exercises
  • Hands-on practice with screening checklists
  • Group discussions of borderline cases

Annual Refresher:

  • E-learning modules for core content review
  • Live update sessions for new regulatory guidance and case law
  • Case study reviews of recent screening activities
  • Knowledge assessment

On-the-Job Training:

  • Mentoring for new screening personnel
  • Supervised screening for first 5 AI systems
  • Job shadowing during compliance reviews

Just-in-Time Training:

  • Quick reference guides for each Article 5 prohibition
  • Screening checklist guides
  • Help desk support from AI Act Program Manager

10.5 Training Effectiveness Measurement

Assessment Methods:

  • Written exams for knowledge retention
  • Scenario-based exercises for practical application
  • On-the-job observations during screening
  • Feedback surveys for training quality

Competency Validation:

  • Screening personnel: Must demonstrate ability to correctly screen 3 AI systems (including 1 borderline case) before independent screening
  • All staff: Must pass knowledge assessments with minimum required scores

Training Metrics:

| Metric | Target | Frequency |
|---|---|---|
| Training completion rate | 100% | Quarterly |
| Assessment pass rate (first attempt) | ≥ 90% | Per training |
| Training effectiveness score (survey) | ≥ 4.0/5.0 | Per training |
| Time to competency (screening personnel) | < 30 days | Per person |

10.6 Training Records

Records Maintained:

  • Training attendance records
  • Assessment scores
  • Competency validations
  • Refresher training completion
  • Individual training transcripts

Retention: 10 years (to align with EU AI Act documentation retention)

Access: AI Act Program Manager, HR, Internal Audit, Competent Authorities (upon request)


DEFINITIONS

| Term | Definition | Source |
|---|---|---|
| Prohibited AI Practice | An AI practice banned under Article 5 of the EU AI Act due to its unacceptable risk to fundamental rights | EU AI Act Article 5 |
| Subliminal Technique | A technique that deploys components below the threshold of conscious awareness to materially distort behaviour | EU AI Act Article 5(1)(a) |
| Social Scoring | Evaluating or classifying natural persons based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental treatment | EU AI Act Article 5(1)(c) |
| Biometric Categorisation | Using biometric data to categorise natural persons according to specific categories such as race, political opinions, or religious beliefs | EU AI Act Article 5(1)(g) |
| Real-Time Remote Biometric Identification | Using AI to identify natural persons at a distance in real time in publicly accessible spaces, typically through facial recognition | EU AI Act Article 5(1)(h) |
| Emotion Inference | Using AI to infer the emotional state of a natural person based on biometric data or behavioural indicators | EU AI Act Article 5(1)(f) |
| Predictive Policing | Using AI to assess the risk that a specific natural person will commit a criminal offence, based on profiling or personality traits | EU AI Act Article 5(1)(d) |
| Screening | The process of assessing an AI system against Article 5 prohibited practices before deployment | This Standard |

LINK WITH AI ACT AND ISO 42001

12.1 EU AI Act Regulatory Mapping

This standard implements the following EU AI Act requirements:

| EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls) |
|---|---|---|---|
| Prohibited practices - subliminal techniques | Article 5(1)(a) | Prohibition on AI deploying subliminal techniques beyond consciousness | PROH-001, PROH-002 |
| Prohibited practices - exploitation of vulnerabilities | Article 5(1)(b) | Prohibition on AI exploiting vulnerabilities due to age, disability, or situation | PROH-001, PROH-002 |
| Prohibited practices - social scoring | Article 5(1)(c) | Prohibition on social scoring leading to detrimental treatment | PROH-001, PROH-004 |
| Prohibited practices - predictive policing | Article 5(1)(d) | Prohibition on profiling-only predictive policing | PROH-001, PROH-004 |
| Prohibited practices - facial recognition scraping | Article 5(1)(e) | Prohibition on untargeted facial recognition database building | PROH-001, PROH-003 |
| Prohibited practices - emotion inference | Article 5(1)(f) | Prohibition on emotion inference in workplace/education | PROH-001, PROH-003 |
| Prohibited practices - biometric categorisation | Article 5(1)(g) | Prohibition on biometric categorisation by protected characteristics | PROH-001, PROH-003 |
| Prohibited practices - real-time biometric ID | Article 5(1)(h) | Prohibition on real-time remote biometric identification for law enforcement | PROH-001, PROH-003 |
| Ongoing compliance | Article 5 (general) | Ongoing obligation to ensure no prohibited practice is deployed | PROH-005 |

12.2 ISO/IEC 42001:2023 Alignment

This standard aligns with ISO/IEC 42001:2023 as follows:

| ISO 42001 Clause | Requirement | Implementation in This Standard |
|---|---|---|
| Clause 6.1: Actions to address risks | Identify and address risks, including compliance risks | PROH-001 (screening), PROH-005 (monitoring) |
| Clause 8.1: Operational planning and control | Plan and control processes to meet requirements | PROH-001 through PROH-004 (preventive controls) |
| Clause 9.1: Monitoring, measurement, analysis | Monitor and measure AI management system performance | PROH-005 (ongoing monitoring) |
| Clause 10.2: Nonconformity and corrective action | Address nonconformities and take corrective action | PROH-005 (incident response) |

12.3 Relationship to Other Standards

This prohibited practices standard integrates with other AI Act standards:

| Related Standard | Integration Point | Rationale |
|---|---|---|
| STD-AI-001: Classification | Classification must include prohibited practice screening | Systems must be screened for prohibited practices as part of classification |
| STD-AI-002: Risk Management | Prohibited practices represent an unacceptable risk level | The risk management framework must identify and prevent prohibited practices |
| STD-AI-003: Data Governance | Data used in biometric and scoring systems must be governed | Biometric data and scoring data require specific governance controls |
| STD-AI-006: Transparency | Prohibited practice screening results inform transparency obligations | Screening documentation supports transparency requirements |
| STD-AI-007: Human Oversight | Human oversight required for borderline cases | Human review is essential for systems near prohibited practice boundaries |
| STD-AI-013: Incident Management | Prohibited practice discoveries are critical incidents | Incident management procedures must cover prohibited practice discoveries |
| STD-AI-014: Literacy and Training | Staff must be trained on prohibited practices | The training curriculum must include Article 5 prohibited practices |

12.4 References and Related Documents

EU AI Act (Regulation (EU) 2024/1689):

  • Article 5: Prohibited AI Practices
  • Article 5(1)(a): Subliminal techniques
  • Article 5(1)(b): Exploitation of vulnerabilities
  • Article 5(1)(c): Social scoring
  • Article 5(1)(d): Predictive policing
  • Article 5(1)(e): Untargeted facial recognition scraping
  • Article 5(1)(f): Emotion inference in workplace/education
  • Article 5(1)(g): Biometric categorisation by protected characteristics
  • Article 5(1)(h): Real-time remote biometric identification
  • Article 99(3): Penalties for prohibited practices (EUR 35 million or 7% of global turnover)
  • Recitals 28-45: Explanatory context for prohibited practices

ISO/IEC Standards:

  • ISO/IEC 42001:2023: Information technology -- Artificial intelligence -- Management system

Internal Documents:

  • POL-AI-001: Artificial Intelligence Policy (parent policy)
  • STD-AI-001: AI System Classification Standard
  • STD-AI-002: AI Risk Management Standard
  • STD-AI-003: AI Data Governance Standard
  • STD-AI-006: AI Transparency Standard
  • STD-AI-007: AI Human Oversight Standard
  • STD-AI-013: AI Incident Management Standard
  • STD-AI-014: AI Literacy and Training Standard
  • PROC-AI-PROH-001, -002: Prohibited practices procedures

APPROVAL AND AUTHORIZATION

| Role | Name | Title | Signature | Date |
|---|---|---|---|---|
| Prepared By | Sarah Johnson | AI Act Program Manager | _________________ | |
| Reviewed By | Legal Counsel | Legal Director | _________________ | |
| Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _________________ | |
| Approved By | Jane Doe | AI Governance Committee Chair | _________________ | |

Effective Date: 2025-02-02
Next Review Date: 2026-02-02
Review Frequency: Annually or upon regulatory change


END OF STANDARD STD-AI-015


This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.
