Prohibited AI Practices Standard
Document Type: Standard
Standard ID: STD-AI-015
Standard Title: Prohibited AI Practices Standard
Version: 1.0
Effective Date: 2025-02-02
Next Review Date: 2026-02-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Draft
Classification: Internal Use Only
TABLE OF CONTENTS
- Document History
- Objective
- Scope and Applicability
- Control Standard
- Supporting Procedures
- Compliance
- Roles and Responsibilities
- Exceptions
- Enforcement
- Key Performance Indicators (KPIs)
- Training Requirements
- Definitions
- Link with AI Act and ISO/IEC 42001
DOCUMENT HISTORY
| Version | Date | Author | Changes | Approval Date | Approved By |
|---|---|---|---|---|---|
| 0.1 | 2025-01-10 | AI Act Program Manager | Initial draft | - | - |
| 0.2 | 2025-01-20 | AI Act Program Manager | Added Article 5 subsection mapping | - | - |
| 0.3 | 2025-01-28 | AI Act Program Manager | Incorporated legal review and stakeholder feedback | - | - |
| 1.0 | 2025-02-02 | AI Act Program Manager | Final version approved - GRC restructured | 2025-02-01 | Jane Doe, AI Governance Committee Chair |
OBJECTIVE
This standard defines requirements for identifying, preventing, and monitoring prohibited AI practices under EU AI Act Article 5. The prohibited practices provisions took effect on 2 February 2025, making compliance immediately mandatory.
Primary Goals:
- Identify and screen all AI systems against Article 5 prohibited practices before deployment
- Prevent the deployment of AI systems that use subliminal, manipulative, or deceptive techniques
- Ensure compliance with biometric and emotion recognition prohibitions
- Prevent social scoring and profiling-only predictive policing systems
- Continuously monitor deployed AI systems for prohibited practice violations
Regulatory Context:
Article 5 of the EU AI Act establishes an absolute prohibition on certain AI practices deemed unacceptable due to their potential to violate fundamental rights. Violations carry the highest penalty tier under the AI Act: administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. These prohibitions apply to all organisations regardless of role (provider, deployer, importer, distributor) and cannot be mitigated or managed -- they must be prevented entirely.
SCOPE AND APPLICABILITY
2.1 Mandatory Applicability
This standard is mandatory for:
- All AI systems developed, deployed, or distributed by the organisation
- All AI systems procured from third-party providers
- All AI system components and subsystems that interact with natural persons
- All biometric data processing systems using AI
- All AI-driven scoring, rating, or classification systems applied to natural persons
2.2 Recommended Applicability
This standard is recommended for:
- Non-AI automated decision-making systems (to prevent drift into prohibited territory)
- AI research and development activities (to embed compliance from design phase)
- Third-party AI integrations and APIs consumed by the organisation
2.3 Prohibited Practices Covered
This standard addresses all eight categories of prohibited AI practices under Article 5(1):
| Reference | Prohibited Practice | Summary |
|---|---|---|
| Art. 5(1)(a) | Subliminal, manipulative, or deceptive techniques | AI deploying subliminal techniques beyond consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour |
| Art. 5(1)(b) | Exploitation of vulnerabilities | AI exploiting vulnerabilities due to age, disability, or social/economic situation |
| Art. 5(1)(c) | Social scoring | AI evaluating/classifying persons based on social behaviour leading to detrimental treatment |
| Art. 5(1)(d) | Predictive policing (profiling-only) | AI assessing individual risk of criminal offence solely based on profiling or personality traits |
| Art. 5(1)(e) | Untargeted facial recognition scraping | Creating/expanding facial recognition databases through untargeted scraping |
| Art. 5(1)(f) | Emotion inference in workplace/education | Inferring emotions in workplace and educational institutions (except medical/safety) |
| Art. 5(1)(g) | Biometric categorisation (protected characteristics) | Categorising persons by race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation |
| Art. 5(1)(h) | Real-time remote biometric identification | Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions) |
2.4 Out of Scope
- AI systems used exclusively outside the EU (unless their output is used within the EU)
- Non-AI biometric systems (covered by GDPR and national data protection law)
- AI systems that have been decommissioned and are no longer operational
CONTROL STANDARD
Control PROH-001: Prohibited Practice Identification and Screening
Control ID: PROH-001
Control Name: Prohibited Practice Identification and Screening
Control Type: Preventive
Control Frequency: Before each AI system deployment; quarterly review
Risk Level: High
Control Objective
Screen all AI systems against Article 5 prohibited practices before deployment to ensure no prohibited AI practice is introduced into the organisation's operations.
Control Requirements
CR-001.1: Prohibited Practices Register
Maintain a comprehensive register of all Article 5 prohibited practices, updated as regulatory guidance evolves.
Register Contents:
| Field | Description | Example |
|---|---|---|
| Prohibition ID | Unique identifier | PROH-ART5-1A |
| Article Reference | EU AI Act article and paragraph | Article 5(1)(a) |
| Practice Description | Plain-language description of the prohibited practice | Deploying subliminal techniques beyond consciousness |
| Indicators | Observable indicators that a system may engage in this practice | Hidden persuasion layers, sub-threshold stimuli |
| Screening Questions | Questions to ask during screening | Does the system use any technique designed to influence users below conscious awareness? |
| Last Updated | Date of last review | 2025-02-02 |
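The register rows above can be modelled as simple structured records. The following is a minimal Python sketch, not a mandated schema: the dataclass name and snake_case field names are illustrative choices, while the field set and example values come directly from the table.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProhibitionEntry:
    """One row of the prohibited practices register (fields per the table above)."""
    prohibition_id: str        # e.g. "PROH-ART5-1A"
    article_reference: str     # e.g. "Article 5(1)(a)"
    practice_description: str  # plain-language description
    indicators: list           # observable indicators of the practice
    screening_questions: list  # questions asked during screening
    last_updated: date         # date of last review

# Illustrative entry for the Article 5(1)(a) prohibition
entry = ProhibitionEntry(
    prohibition_id="PROH-ART5-1A",
    article_reference="Article 5(1)(a)",
    practice_description="Deploying subliminal techniques beyond consciousness",
    indicators=["Hidden persuasion layers", "Sub-threshold stimuli"],
    screening_questions=[
        "Does the system use any technique designed to influence users "
        "below conscious awareness?"
    ],
    last_updated=date(2025, 2, 2),
)
```

Keeping the register in a structured form like this makes the quarterly currency checks under PROH-001 (is every entry's `last_updated` recent?) straightforward to automate.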
Mandatory Actions:
- Maintain a prohibited practices register aligned with Article 5
- Screen all new AI systems against prohibited practices before deployment
- Document screening results for each AI system
- Escalate potential violations to AI Governance Committee immediately
- Re-screen existing AI systems when Article 5 guidance or interpretations change
- Maintain screening templates and checklists
CR-001.2: Pre-Deployment Screening Process
Screen every AI system before deployment using a structured screening process.
Screening Steps:
| Step | Activity | Responsible | Output |
|---|---|---|---|
| 1 | Identify AI system purpose and functionality | AI System Owner | System description |
| 2 | Map system against each Article 5 prohibition | AI Act Program Manager | Screening matrix |
| 3 | Assess risk indicators for each prohibition | AI Act Program Manager | Risk assessment |
| 4 | Document screening outcome (pass/fail/escalate) | AI Act Program Manager | Screening record |
| 5 | Obtain sign-off for deployment | AI Governance Committee | Approval record |
Screening Outcomes:
| Outcome | Definition | Action Required |
|---|---|---|
| Pass | No prohibited practice indicators identified | Proceed to deployment |
| Escalate | Potential prohibited practice indicators require further analysis | Refer to Legal and AI Governance Committee |
| Fail | Prohibited practice identified | Halt deployment immediately; do not deploy |
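The three screening outcomes form a strict precedence: any confirmed prohibition fails the screening outright, any unresolved indicator escalates, and only a clean result passes. A minimal sketch of that decision rule (the function and parameter names are illustrative, not part of the standard):

```python
from enum import Enum

class ScreeningOutcome(Enum):
    PASS = "pass"          # no indicators: proceed to deployment
    ESCALATE = "escalate"  # potential indicators: refer to Legal and the Committee
    FAIL = "fail"          # prohibited practice identified: halt deployment

def screening_outcome(confirmed_prohibitions: int,
                      potential_indicators: int) -> ScreeningOutcome:
    """Map screening findings to the outcomes table: FAIL dominates ESCALATE,
    which dominates PASS."""
    if confirmed_prohibitions > 0:
        return ScreeningOutcome.FAIL
    if potential_indicators > 0:
        return ScreeningOutcome.ESCALATE
    return ScreeningOutcome.PASS
```

The precedence ordering matters: a system with both a confirmed prohibition and open indicators must fail, never merely escalate.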
Evidence Required:
- Prohibited practices register
- Screening records and results
- Escalation records
- Screening templates and checklists
- AI system inventory with screening status
Audit Verification:
- Verify prohibited practices register is maintained and current
- Confirm all AI systems screened before deployment
- Check screening documentation is complete for each system
- Validate escalation procedures followed where applicable
- Verify 100% screening coverage
Control PROH-002: Subliminal/Manipulative Technique Prevention
Control ID: PROH-002
Control Name: Subliminal and Manipulative Technique Prevention
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical
Control Objective
Ensure no AI system deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques that materially distort behaviour (Article 5(1)(a)), or techniques that exploit the vulnerabilities of specific groups (Article 5(1)(b)).
Control Requirements
CR-002.1: Subliminal Technique Assessment (Article 5(1)(a))
Assess all AI systems for subliminal techniques that operate below conscious awareness.
Subliminal Technique Indicators:
| Indicator | Description | Detection Method |
|---|---|---|
| Sub-threshold stimuli | Visual, auditory, or other stimuli below perception threshold | Technical review of output modalities |
| Hidden persuasion layers | Embedded persuasion mechanisms not apparent to user | Architecture review and output analysis |
| Unconscious behavioural nudging | Techniques designed to influence without awareness | Behavioural analysis of system interactions |
| Covert data-driven personalisation | Personalisation exploiting unconscious biases | Algorithm review and A/B testing analysis |
Mandatory Actions:
- Assess all user-facing AI systems for subliminal technique risk
- Review AI system architecture for hidden influence mechanisms
- Test AI outputs for sub-threshold or imperceptible influence patterns
- Document assessment findings and design decisions
- Prohibit deployment of any system with identified subliminal techniques
CR-002.2: Vulnerability Exploitation Assessment (Article 5(1)(b))
Assess all AI systems for manipulative or deceptive techniques that exploit the vulnerabilities of specific groups due to age, disability, or a specific social or economic situation.
Vulnerability Exploitation Indicators:
| Vulnerability Group | Examples | Prohibited Exploitation |
|---|---|---|
| Age-related | Children, elderly persons | Exploiting limited understanding or cognitive decline |
| Disability-related | Persons with cognitive, physical, or sensory disabilities | Exploiting reduced capacity to understand or resist |
| Social/economic situation | Persons in financial distress, social isolation | Exploiting desperation or limited alternatives |
Mandatory Actions:
- Identify whether AI system interacts with vulnerable groups
- Assess persuasion mechanisms for exploitative characteristics
- Test for disproportionate impact on vulnerable users
- Review AI-generated content for deceptive characteristics
- Document vulnerability impact assessments
Evidence Required:
- Manipulation risk assessments
- Design review records
- Testing results for influence patterns
- AI system design documentation
- Content review records
- Vulnerability impact assessments
Audit Verification:
- Verify manipulation risk assessments conducted for all user-facing AI
- Confirm subliminal technique testing performed
- Check vulnerability impact assessments documented
- Validate no systems deployed with identified prohibited techniques
Control PROH-003: Biometric and Emotion Recognition Controls
Control ID: PROH-003
Control Name: Biometric and Emotion Recognition Compliance Controls
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical
Control Objective
Ensure compliance with prohibitions on untargeted facial recognition scraping (Article 5(1)(e)), emotion inference in workplace and education (Article 5(1)(f)), biometric categorisation by protected characteristics (Article 5(1)(g)), and real-time remote biometric identification in public spaces for law enforcement (Article 5(1)(h)).
Control Requirements
CR-003.1: Biometric System Inventory
Maintain a comprehensive inventory of all AI systems that process biometric data.
Inventory Fields:
| Field | Description | Required |
|---|---|---|
| System ID | Unique identifier | Yes |
| System Name | Descriptive name | Yes |
| Biometric Type | Facial, voice, gait, fingerprint, etc. | Yes |
| Processing Purpose | Identification, verification, categorisation, emotion inference | Yes |
| Data Sources | Where biometric data is obtained | Yes |
| Target Population | Who is subject to biometric processing | Yes |
| Article 5 Assessment | Which Art. 5 prohibitions assessed, outcome | Yes |
| Lawful Basis | Legal basis for any permitted biometric processing | Yes |
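Because every field in the inventory table is marked required, completeness can be checked mechanically. A minimal sketch, assuming a plain-dict record representation (the snake_case keys and example system ID are illustrative, not a mandated schema):

```python
# Fields marked "Required: Yes" in the inventory table above
REQUIRED_FIELDS = [
    "system_id", "system_name", "biometric_type", "processing_purpose",
    "data_sources", "target_population", "article5_assessment", "lawful_basis",
]

def missing_fields(record: dict) -> list:
    """Return the required inventory fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# An incomplete record: no Article 5 assessment or lawful basis recorded yet
incomplete = {
    "system_id": "BIO-004",
    "system_name": "Voice verification",
    "biometric_type": "voice",
}
```

Running this check against every inventory record supports the audit verification step "biometric system inventory is complete and current".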
CR-003.2: Untargeted Facial Recognition Scraping Prevention (Article 5(1)(e))
Prevent creation or expansion of facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Mandatory Actions:
- Prohibit procurement of facial recognition databases built through untargeted scraping
- Verify data provenance for all facial recognition training data
- Audit third-party facial recognition providers for data sourcing compliance
- Contractually require Article 5(1)(e) compliance from all biometric data suppliers
CR-003.3: Emotion Inference Prohibition in Workplace/Education (Article 5(1)(f))
Prohibit AI systems that infer emotions of natural persons in workplace and educational institution settings, except where the system is intended for medical or safety reasons.
Prohibited Uses:
| Context | Prohibited Use | Permitted Exception |
|---|---|---|
| Workplace | Monitoring employee emotional states for performance, productivity, or engagement | Medical purposes (e.g., detecting fatigue in safety-critical roles) |
| Education | Monitoring student emotional states for attention, engagement, or behaviour assessment | Medical purposes (e.g., detecting distress for wellbeing) |
| Recruitment | Inferring candidate emotions during interviews | None |
CR-003.4: Biometric Categorisation by Protected Characteristics (Article 5(1)(g))
Prohibit biometric categorisation systems that individually categorise natural persons based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
Mandatory Actions:
- Review all biometric systems for categorisation by protected characteristics
- Prohibit biometric categorisation that outputs or infers protected characteristic categories
- Ensure biometric systems used for lawful purposes do not indirectly produce prohibited categorisations
- Document permitted biometric use cases and their boundaries
CR-003.5: Real-Time Remote Biometric Identification (Article 5(1)(h))
Prohibit real-time remote biometric identification systems in publicly accessible spaces for law enforcement, subject to narrow exceptions requiring prior judicial or administrative authorisation.
Note: This prohibition primarily applies to law enforcement authorities. Organisations should ensure they do not provide, supply, or facilitate such systems without appropriate legal basis and authorisation.
Evidence Required:
- Biometric system inventory
- Use case documentation and lawful basis records
- Access control records
- Prohibition enforcement records
- Data source verification records
- Third-party compliance audit records
Audit Verification:
- Verify biometric system inventory is complete and current
- Confirm each biometric system assessed against Article 5 prohibitions
- Check data provenance records for facial recognition systems
- Validate no prohibited emotion inference in workplace/education contexts
- Verify no biometric categorisation by protected characteristics
Control PROH-004: Social Scoring and Predictive Policing Prevention
Control ID: PROH-004
Control Name: Social Scoring and Predictive Policing Prevention
Control Type: Preventive
Control Frequency: Before each AI system deployment; annual review
Risk Level: Critical
Control Objective
Ensure no AI system performs social scoring (Article 5(1)(c)) or individual risk assessment based solely on profiling or personality traits for predictive policing purposes (Article 5(1)(d)).
Control Requirements
CR-004.1: Social Scoring Prevention (Article 5(1)(c))
Prevent AI systems from evaluating or classifying natural persons or groups based on social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was generated, or treatment that is unjustified or disproportionate to the social behaviour.
Social Scoring Indicators:
| Indicator | Description | Example |
|---|---|---|
| Cross-context data aggregation | Combining data from unrelated contexts to produce a composite score | Using social media activity to determine creditworthiness |
| Generalised trustworthiness scoring | Producing a general trustworthiness or reliability score for a person | Citizen scoring systems |
| Behavioural classification leading to penalties | Classifying persons by behaviour resulting in negative treatment | Penalising persons for lawful associations or activities |
| Disproportionate treatment | Treatment that is disproportionate to the original behaviour | Denying public services based on minor social infractions |
Mandatory Actions:
- Review all scoring, rating, and classification AI systems for social scoring characteristics
- Ensure scoring systems do not aggregate data across unrelated contexts
- Verify that any AI-driven assessments of persons do not lead to unjustified or disproportionate treatment
- Prohibit generalised trustworthiness or social credit scoring
CR-004.2: Predictive Policing Prevention (Article 5(1)(d))
Prevent AI systems from making or contributing to individual risk assessments of natural persons for predicting the risk of criminal offence, based solely on profiling or on the assessment of personality traits and characteristics. This prohibition does not apply to AI systems used to support the human assessment of involvement in criminal activity based on objective and verifiable facts directly linked to criminal activity.
Assessment Criteria:
| Criterion | Compliant | Non-Compliant |
|---|---|---|
| Basis for assessment | Objective, verifiable facts linked to criminal activity | Profiling or personality traits alone |
| Human involvement | AI supports human decision-making | AI makes autonomous determinations |
| Data used | Factual evidence of specific conduct | Demographic data, personality assessments, behavioural predictions |
| Scope | Specific investigation with factual basis | General population risk screening |
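All four criteria in the table must land in the compliant column for a criminal-risk assessment system to be permissible under Article 5(1)(d); any single non-compliant criterion makes the system prohibited. A minimal sketch of that conjunction (function and parameter names are illustrative):

```python
def predictive_assessment_permitted(
    objective_verifiable_facts: bool,  # basis: facts linked to criminal activity
    supports_human_decision: bool,     # AI supports, does not replace, humans
    factual_conduct_evidence: bool,    # data: evidence of specific conduct
    specific_investigation: bool,      # scope: not general population screening
) -> bool:
    """Per the assessment criteria table: every criterion must be compliant."""
    return all([
        objective_verifiable_facts,
        supports_human_decision,
        factual_conduct_evidence,
        specific_investigation,
    ])
```

For example, a system grounded in verifiable facts and human oversight but applied as general population risk screening still fails the check.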
Mandatory Actions:
- Review all AI systems that assess risk related to natural persons
- Verify objective factual basis for any AI-driven risk assessments
- Prohibit personality-trait-only or profiling-only assessments for criminal risk
- Ensure human oversight for any risk assessment related to criminal activity
- Document methodology and factual basis for all AI-driven person assessments
Evidence Required:
- Scoring system reviews
- Methodology documentation
- Factual basis verification records
- Classification system audits
- Assessment design documentation
Audit Verification:
- Verify all scoring/classification systems reviewed for social scoring
- Confirm no cross-context data aggregation for person scoring
- Check factual basis documented for any criminal risk assessment AI
- Validate methodology documentation is complete
- Verify no profiling-only assessments are in use
Control PROH-005: Ongoing Monitoring and Compliance Review
Control ID: PROH-005
Control Name: Ongoing Prohibited Practices Monitoring and Compliance Review
Control Type: Detective
Control Frequency: Continuous monitoring; annual compliance review
Risk Level: High
Control Objective
Continuously monitor deployed AI systems for prohibited practice violations and conduct periodic compliance reviews to ensure sustained adherence to Article 5 requirements.
Control Requirements
CR-005.1: Continuous Monitoring
Implement monitoring mechanisms to detect indicators of prohibited practices in deployed AI systems.
Monitoring Areas:
| Area | Monitoring Method | Frequency | Responsible |
|---|---|---|---|
| AI system behaviour | Automated output monitoring and analysis | Continuous | AI Operations Team |
| User complaints | Complaint analysis for prohibited practice indicators | Continuous | Customer Support |
| Third-party AI changes | Vendor update review for new prohibited practice risks | Per update | AI Act Program Manager |
| Regulatory guidance | Track new guidance and interpretations of Article 5 | Monthly | Legal / AI Act Program Manager |
| Whistleblower reports | Monitor internal reporting channels | Continuous | Compliance Officer |
CR-005.2: Annual Compliance Review
Conduct a comprehensive annual review of all AI systems against Article 5 prohibitions.
Annual Review Process:
| Step | Activity | Responsible | Timeline |
|---|---|---|---|
| 1 | Update prohibited practices register with latest guidance | AI Act Program Manager | Month 1 |
| 2 | Re-screen all deployed AI systems | AI Act Program Manager | Months 1-2 |
| 3 | Review all biometric systems | AI Act Program Manager | Month 2 |
| 4 | Review all scoring/classification systems | AI Act Program Manager | Month 2 |
| 5 | Assess third-party AI compliance | AI Act Program Manager | Month 3 |
| 6 | Compile findings and report | AI Act Program Manager | Month 3 |
| 7 | Present to AI Governance Committee | AI Act Program Manager | Month 3 |
| 8 | Implement corrective actions | Relevant system owners | Months 3-4 |
CR-005.3: Incident Response for Prohibited Practice Discoveries
If a prohibited practice is discovered in a deployed system, take immediate action.
Incident Response Steps:
| Step | Action | Timeline | Responsible |
|---|---|---|---|
| 1 | Immediately suspend the AI system | Within 1 hour | AI Operations Team |
| 2 | Notify AI Act Program Manager and Legal | Within 2 hours | AI Operations Team |
| 3 | Notify AI Governance Committee | Within 4 hours | AI Act Program Manager |
| 4 | Conduct root cause investigation | Within 5 business days | Investigation Team |
| 5 | Determine regulatory notification obligations | Within 5 business days | Legal |
| 6 | Implement corrective actions | Per investigation findings | System Owner |
| 7 | Verify remediation before any re-deployment | Before re-deployment | AI Act Program Manager |
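The fixed-clock deadlines in steps 1-3 can be computed directly from the discovery timestamp. A minimal sketch (the dictionary keys are illustrative; the business-day deadlines in steps 4-5 need a business calendar and are deliberately omitted):

```python
from datetime import datetime, timedelta

# Fixed-clock deadlines from steps 1-3 of the incident response table
STEP_DEADLINES = {
    "suspend_system": timedelta(hours=1),
    "notify_program_manager_and_legal": timedelta(hours=2),
    "notify_governance_committee": timedelta(hours=4),
}

def response_deadlines(discovered_at: datetime) -> dict:
    """Absolute deadline for each step, measured from the discovery time."""
    return {step: discovered_at + delta
            for step, delta in STEP_DEADLINES.items()}

# Example: a prohibited practice discovered at 09:00
deadlines = response_deadlines(datetime(2025, 3, 3, 9, 0))
```

Publishing the computed deadlines to the on-call team at discovery time removes any ambiguity about when each escalation clock expires.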
Mandatory Actions:
- Implement monitoring mechanisms for detecting prohibited practice indicators
- Conduct annual comprehensive compliance reviews of all AI systems against Article 5
- Report monitoring findings to AI Governance Committee quarterly
- Investigate and document any suspected prohibited practice violation
- Maintain incident response procedures for prohibited practice discoveries
- Track regulatory guidance updates and evolving interpretations of Article 5
Evidence Required:
- Monitoring logs and dashboards
- Annual compliance review reports
- AI Governance Committee minutes and reports
- Investigation records
- Incident response records
- Regulatory tracking logs
Audit Verification:
- Verify continuous monitoring is operational
- Confirm annual compliance review completed
- Check investigation records for any suspected violations
- Validate AI Governance Committee received quarterly reports
- Verify regulatory guidance tracking is current
SUPPORTING PROCEDURES
This standard is implemented through the following detailed procedures:
Procedure PROC-AI-PROH-001: Prohibited Practice Screening Procedure
Purpose: Define the step-by-step process for screening AI systems against Article 5 prohibited practices
Owner: AI Act Program Manager
Implements: Controls PROH-001, PROH-002, PROH-003, PROH-004
Procedure Steps:
- Receive AI system for screening (new or change request)
- Complete prohibited practice screening checklist
- Assess against each Article 5(1)(a)-(h) prohibition
- Document screening results
- Escalate if potential violation identified
- Obtain sign-off for deployment (if passed)
Outputs:
- Completed screening checklists
- Screening results documentation
- Escalation records (where applicable)
- Deployment approval records
Procedure PROC-AI-PROH-002: Prohibited Practice Monitoring and Review Procedure
Purpose: Define the process for ongoing monitoring and annual compliance review
Owner: AI Act Program Manager
Implements: Control PROH-005
Procedure Steps:
- Configure and maintain monitoring mechanisms
- Review monitoring outputs weekly
- Investigate alerts and anomalies
- Conduct annual compliance review per CR-005.2
- Compile and present findings to AI Governance Committee
- Track and implement corrective actions
Outputs:
- Monitoring reports
- Investigation records
- Annual compliance review report
- Corrective action tracking
COMPLIANCE
5.1 Compliance Monitoring
Monitoring Approach: Continuous automated monitoring supplemented by monthly manual reviews and quarterly comprehensive assessments, with annual full compliance review.
Compliance Metrics:
| Metric | Target | Measurement Method | Frequency | Owner |
|---|---|---|---|---|
| Prohibited Practice Screening Rate | 100% | % of AI systems screened before deployment | Quarterly | AI Act Program Manager |
| Prohibited Practice Incident Rate | 0 | Count of prohibited practice incidents | Quarterly | AI Act Program Manager |
| Compliance Review Completion | 100% | % of annual reviews completed on time | Annually | AI Act Program Manager |
| Biometric System Inventory Coverage | 100% | % of biometric systems inventoried | Quarterly | AI Act Program Manager |
| Monitoring System Uptime | ≥99% | % of time monitoring systems operational | Monthly | AI Operations Team |
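The first metric in the table, the screening rate, is a simple ratio over the AI system inventory. A minimal sketch, assuming inventory records carry `deployed` and `screened_before_deployment` flags (both key names are illustrative):

```python
def screening_rate(systems: list) -> float:
    """Prohibited Practice Screening Rate: % of deployed AI systems that were
    screened before deployment (target: 100%)."""
    deployed = [s for s in systems if s.get("deployed")]
    if not deployed:
        return 100.0  # vacuously compliant when nothing is deployed
    screened = sum(1 for s in deployed if s.get("screened_before_deployment"))
    return 100.0 * screened / len(deployed)

# Illustrative inventory: one screened deployment, one unscreened, one not yet deployed
inventory = [
    {"id": "AI-001", "deployed": True, "screened_before_deployment": True},
    {"id": "AI-002", "deployed": True, "screened_before_deployment": False},
    {"id": "AI-003", "deployed": False, "screened_before_deployment": False},
]
```

Note that systems not yet deployed are excluded from the denominator; they are caught instead by the pre-deployment gate in PROH-001.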
Monitoring Tools:
- AI System Inventory and Screening Registry
- Prohibited Practices Monitoring Dashboard
- Monthly compliance reports
- Quarterly AI Governance Committee reviews
5.2 Internal Audit Requirements
Audit Frequency: Annually (minimum); ad hoc following any suspected violation
Audit Scope:
- Prohibited practice screening completeness and quality
- Biometric system inventory accuracy
- Social scoring and predictive policing control effectiveness
- Subliminal/manipulative technique assessment adequacy
- Monitoring system effectiveness
- Controls effectiveness (PROH-001 through PROH-005)
Audit Activities:
- Review 100% of AI system screening records
- Verify biometric system inventory against actual deployed systems
- Test monitoring system detection capabilities
- Review escalation and incident records
- Interview key personnel on screening procedures
Audit Outputs:
- Annual Prohibited Practices Compliance Audit Report
- Findings and recommendations
- Corrective action plans for deficiencies
5.3 External Audit / Regulatory Inspection
Preparation:
- Maintain audit-ready prohibited practices documentation at all times
- Designate AI Act Program Manager and Legal as regulatory liaisons
- Prepare standard response procedures for authority requests
Provide to Auditors/Regulators:
- Prohibited practices register
- AI system screening records
- Biometric system inventory
- Monitoring logs and reports
- Compliance review reports
- Internal audit reports
- Evidence of controls execution
Authority Request Response:
- Acknowledge request within 1 business day
- Provide requested documentation within 5 business days
- Coordinate through Legal and AI Act Program Manager
- Document all interactions with authorities
Regulatory Penalty Context: Non-compliance with Article 5 prohibited practices carries the highest penalty tier under the EU AI Act: administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover of the preceding financial year, whichever is higher. This underscores the critical importance of maintaining complete and auditable compliance documentation.
ROLES AND RESPONSIBILITIES
6.1 RACI Matrix
| Activity | AI Act Program Manager | Legal | AI Operations Team | AI System Owners | AI Governance Committee |
|---|---|---|---|---|---|
| Prohibited Practice Screening | R/A | C | C | R | I |
| Subliminal/Manipulative Assessment | R/A | C | C | R | I |
| Biometric System Inventory | R/A | C | R | R | I |
| Social Scoring/Predictive Policing Review | R/A | R | C | C | I |
| Ongoing Monitoring | A | I | R | C | I |
| Annual Compliance Review | R | C | R | C | A |
| Incident Response | R | R | R | C | A |
| Regulatory Engagement | C | R/A | I | I | I |
RACI Legend:
- R = Responsible (does the work)
- A = Accountable (ultimately answerable)
- C = Consulted (provides input)
- I = Informed (kept up-to-date)
6.2 Role Descriptions
AI Act Program Manager
- Primary Responsibility: Owns the prohibited practices compliance framework, conducts screenings, and coordinates compliance reviews
- Key Activities:
- Maintains prohibited practices register
- Conducts and oversees pre-deployment screening
- Leads annual compliance reviews
- Reports to AI Governance Committee
- Coordinates incident response for prohibited practice discoveries
- Required Competencies: EU AI Act Article 5 expertise, AI risk assessment, compliance management
Legal
- Primary Responsibility: Provides legal interpretation of Article 5, supports regulatory engagement
- Key Activities:
- Advises on Article 5 interpretation and application
- Reviews escalated screening outcomes
- Manages regulatory authority engagement
- Tracks evolving case law and guidance
- Required Competencies: EU AI Act legal expertise, data protection law, regulatory affairs
AI Operations Team
- Primary Responsibility: Implements monitoring mechanisms, supports screening, executes incident response
- Key Activities:
- Deploys and maintains monitoring systems
- Supports technical screening assessments
- Executes system suspension in incident response
- Maintains biometric system inventory
- Required Competencies: AI system operations, monitoring tools, incident response
AI System Owners
- Primary Responsibility: Ensure their AI systems comply with Article 5, participate in screening
- Key Activities:
- Submit AI systems for screening
- Provide system documentation for assessment
- Implement corrective actions
- Report suspected prohibited practice indicators
- Required Competencies: Understanding of their AI system functionality, Article 5 awareness
AI Governance Committee
- Primary Responsibility: Oversight and accountability for prohibited practices compliance
- Key Activities:
- Reviews quarterly compliance reports
- Approves deployment of systems with elevated screening outcomes
- Oversees incident response for critical prohibited practice discoveries
- Approves corrective action plans
- Required Competencies: AI governance, strategic risk management, EU AI Act oversight
EXCEPTIONS
7.1 Exception Philosophy
Prohibited AI practices under Article 5 are absolute prohibitions established by EU law. The organisation's ability to grant exceptions is extremely limited and applies only to process-related aspects, never to the substantive prohibitions themselves.
7.2 Allowed Exceptions
The following process-related exceptions may be granted with proper justification and approval:
| Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls |
|---|---|---|---|---|
| Extended Screening Timeline | Technical complexity requires additional analysis time | 15 business days | AI Act Program Manager | System not deployed until screening complete |
| Alternative Screening Method | Standard screening method not suitable for system type | Permanent | AI Act Program Manager + Legal | Document rationale; Verify equivalent rigour |
7.3 Prohibited Exceptions
The following exceptions cannot be granted under any circumstances:
- Deploying a system identified as a prohibited practice -- Article 5 prohibitions are absolute; no business justification can override them
- Skipping prohibited practice screening -- All AI systems must be screened; no exceptions
- Waiving biometric system inventory requirements -- All biometric AI systems must be inventoried and assessed
- Exempting third-party AI from screening -- Third-party AI systems must be screened equally
- Delaying incident response for a discovered prohibited practice -- Immediate suspension is mandatory
7.4 Exception Request Process
Step 1: Submit Exception Request
- Complete Exception Request Form (FORM-AI-EXCEPTION-001)
- Include business justification (process exception only)
- Propose compensating controls
- Specify duration requested
- Attach risk assessment
Step 2: Risk Assessment
- AI Act Program Manager assesses risk of granting process exception
- Legal reviews to confirm exception does not compromise Article 5 compliance
- Documents residual risk
Step 3: Approval
- Route to appropriate approval authority based on exception type
- AI Act Program Manager approval: Minor process exceptions
- AI Act Program Manager + Legal: Significant process exceptions
- AI Governance Committee: Any exception that could affect compliance posture
Step 4: Documentation and Monitoring
- Document exception in Exception Register
- Assign exception owner
- Set review date
- Monitor compensating controls
- Report exceptions quarterly to AI Governance Committee
Step 5: Exception Review and Closure
- Review exception at specified review date
- Assess if exception is still needed
- Close exception when standard process resumes
- Document lessons learned
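Steps 4 and 5 above amount to a simple register-and-review loop. A minimal sketch of what an Exception Register entry and its review check could look like (the field names and `ExceptionRecord` type are illustrative assumptions, not a prescribed schema):

```python
# Hypothetical sketch of an Exception Register entry (Step 4) and its
# review-date check (Step 5). Field names are illustrative only.
from dataclasses import dataclass
from datetime import date


@dataclass
class ExceptionRecord:
    exception_id: str
    exception_type: str        # e.g. "Extended Screening Timeline"
    owner: str                 # assigned exception owner (Step 4)
    compensating_controls: str
    review_date: date          # set at approval (Step 4)
    closed: bool = False       # closed when the standard process resumes (Step 5)


def due_for_review(rec: ExceptionRecord, today: date) -> bool:
    """An open exception must be reviewed once its review date arrives."""
    return not rec.closed and today >= rec.review_date
```

In practice such records would live in the organisation's GRC tooling; the point of the sketch is that every exception carries an owner, a review date, and an explicit closed state that drives the quarterly reporting to the AI Governance Committee.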
ENFORCEMENT
8.1 Non-Compliance Consequences
| Violation | Severity | Consequence | Remediation Required |
|---|---|---|---|
| Deploying a prohibited AI system | Critical | Immediate system suspension; Executive escalation; Potential regulatory notification | Remove system; Root cause analysis; Regulatory engagement |
| Failing to screen AI system before deployment | Critical | System suspension until screening completed; Formal investigation | Complete screening immediately; Disciplinary review |
| Incomplete biometric system inventory | High | Escalation to AI Governance Committee | Complete inventory within 10 business days |
| Failure to conduct annual compliance review | High | Escalation to AI Governance Committee | Complete review within 15 business days |
| Delayed incident response | High | Formal investigation | Immediate corrective action; Process improvement |
| Incomplete screening documentation | Medium | Written warning; Corrective action required | Complete documentation within 5 business days |
8.2 Escalation Procedures
Level 1: AI Act Program Manager
- Minor documentation deficiencies
- Screening delays < 3 days
- Action: Written warning, corrective action required
Level 2: AI Act Program Manager + Legal
- Repeated screening failures
- Potential prohibited practice indicators identified
- Action: Formal review, corrective action plan, AI Governance Committee notification
Level 3: AI Governance Committee
- Confirmed or suspected prohibited practice in deployed system
- Systemic screening failures
- Action: Immediate system suspension, investigation, management notification
Level 4: Executive Management + Legal
- Confirmed prohibited practice violation with regulatory exposure
- Regulatory inquiry or enforcement action
- Significant legal or reputational risk
- Action: Executive crisis management, legal strategy, regulatory engagement, consider voluntary self-reporting
8.3 Immediate Escalation Triggers
Escalate immediately to AI Governance Committee + Legal if:
- A deployed AI system is identified as potentially engaging in a prohibited practice
- A regulatory authority contacts the organisation regarding Article 5 compliance
- A whistleblower report alleges a prohibited practice
- A third-party AI provider is found to have violated Article 5
- Media reporting identifies a potential prohibited practice in the organisation's AI systems
8.4 Disciplinary Actions
Individuals responsible for prohibited practice violations may be subject to:
- Verbal or written warning
- Mandatory retraining on Article 5 requirements
- Performance improvement plan
- Reassignment of responsibilities
- Suspension (with pay during investigation)
- Termination (for knowingly deploying a prohibited AI system or deliberately bypassing screening)
Factors Considered:
- Intent (knowing violation vs. honest mistake)
- Severity of violation
- Impact (actual or potential, including fundamental rights impact)
- Cooperation with remediation and investigation
- Prior violation history
KEY PERFORMANCE INDICATORS (KPIs)
9.1 Prohibited Practices KPIs
| KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To |
|---|---|---|---|---|---|---|---|
| KPI-PROH-001 | Prohibited Practice Screening Rate | % of AI systems screened for prohibited practices before deployment | 100% | (# screened / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-PROH-002 | Prohibited Practice Incident Rate | Number of prohibited practice incidents detected | 0 | Count of incidents | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-PROH-003 | Compliance Review Completion | % of annual compliance reviews completed on time | 100% | (# completed on time / # total) x 100 | Annually | AI Act Program Manager | AI Governance Committee |
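The percentage KPIs above follow the same formula. A minimal sketch of the two calculations, using hypothetical counts (the function names are illustrative, not part of the standard):

```python
# Illustrative computation of the percentage KPIs defined above.
# Function names and the zero-denominator convention are assumptions.

def screening_rate(screened: int, total: int) -> float:
    """KPI-PROH-001: (# screened / # total) x 100."""
    if total == 0:
        return 100.0  # no AI systems in scope: vacuously compliant
    return screened / total * 100


def review_completion(on_time: int, total: int) -> float:
    """KPI-PROH-003: (# completed on time / # total) x 100."""
    if total == 0:
        return 100.0
    return on_time / total * 100
```

For example, 19 of 20 systems screened gives a screening rate of 95%, which falls in the warning band of the thresholds in section 9.3. KPI-PROH-002 is a plain incident count with a target of zero, so it needs no formula.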
9.2 KPI Dashboards and Reporting
Real-Time Dashboard (AI Act Program Manager access)
- Current screening status of all AI systems
- Biometric system inventory status
- Monitoring alert status
- Open investigations
Monthly Management Report
- KPI-PROH-001, KPI-PROH-002
- Screening activity summary
- Monitoring findings summary
- Issues and risks
Quarterly AI Governance Committee Report
- All KPIs
- Screening outcome summary
- Monitoring findings and actions
- Internal audit findings (if conducted)
- Exception register review
- Regulatory guidance updates
Annual Executive Report
- Full-year KPI performance
- Annual compliance review findings
- Prohibited practices compliance maturity assessment
- Strategic recommendations
- Regulatory outlook and emerging risks
9.3 KPI Thresholds and Alerts
| KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action |
|---|---|---|---|---|
| Screening Rate | 100% | 95-99% | < 95% | Red: Immediate escalation to AI Governance Committee Chair |
| Incident Rate | 0 | 1 (suspected, under investigation) | ≥1 (confirmed) | Red: Immediate escalation to AI Governance Committee + Legal + Executive Management |
| Compliance Review Completion | 100% | On track but delayed | Overdue by > 30 days | Red: Escalation to AI Governance Committee |
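The first two threshold rows above map directly to simple banding logic. A sketch of how the Green/Yellow/Red status could be derived (the function names and the treatment of the 95% boundary as Yellow are assumptions consistent with the table):

```python
# Hypothetical RAG banding for the KPI thresholds in section 9.3.

def screening_rate_status(rate_pct: float) -> str:
    """Band KPI-PROH-001: 100% Green; 95-99% Yellow; below 95% Red."""
    if rate_pct >= 100.0:
        return "Green"
    if rate_pct >= 95.0:
        return "Yellow"
    return "Red"  # Red: immediate escalation to AI Governance Committee Chair


def incident_rate_status(confirmed: int, suspected: int) -> str:
    """Band KPI-PROH-002: 0 Green; 1 suspected Yellow; any confirmed Red."""
    if confirmed >= 1:
        return "Red"  # Red: escalate to Committee + Legal + Executive Management
    if suspected >= 1:
        return "Yellow"
    return "Green"
```

Note that any confirmed incident is Red regardless of how many suspected cases remain under investigation, matching the absolute zero-tolerance target for KPI-PROH-002.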
TRAINING REQUIREMENTS
10.1 Training Program Overview
All personnel involved in AI system development, deployment, procurement, or oversight must complete training on Article 5 so that they can identify and prevent prohibited AI practices.
10.2 Role-Based Training Requirements
| Role | Training Course | Duration | Content | Frequency | Assessment Required |
|---|---|---|---|---|---|
| AI Act Program Manager | Prohibited Practices Expert Training | 8 hours | All Article 5 prohibitions in depth; Screening methodology; Incident response; Regulatory engagement | Initial + annually | Yes - Written exam (>=90%) |
| Legal | Prohibited Practices Legal Training | 8 hours | Article 5 legal interpretation; Case law; Enforcement; Regulatory engagement | Initial + annually | Yes - Written exam (>=90%) |
| AI Operations Team | Prohibited Practices Operational Training | 4 hours | Article 5 overview; Monitoring implementation; Incident response procedures | Initial + annually | Yes - Knowledge check (>=80%) |
| AI System Owners | Prohibited Practices Awareness Training | 4 hours | Article 5 overview; Screening process; Reporting obligations | Initial + annually | Yes - Knowledge check (>=80%) |
| All Staff | AI Prohibited Practices Awareness | 1 hour | Article 5 overview; How to recognise and report concerns | At onboarding + annually | Yes - Knowledge check (>=80%) |
10.3 Training Content by Topic
Article 5 Prohibited Practices
- Complete overview of all eight prohibited practice categories
- Real-world examples and case studies for each prohibition
- How to identify indicators of prohibited practices
- Screening process and methodology
Biometric AI Compliance
- Biometric data processing under the AI Act
- Prohibited biometric uses vs. permitted uses
- Emotion inference boundaries
- Biometric categorisation rules
Social Scoring and Profiling
- What constitutes social scoring under Article 5(1)(c)
- Predictive policing boundaries under Article 5(1)(d)
- Compliant vs. non-compliant scoring and classification approaches
Incident Response
- How to report suspected prohibited practices
- Incident response timeline and responsibilities
- Regulatory notification obligations
10.4 Training Delivery Methods
Initial Training:
- Instructor-led classroom or virtual training
- Includes real-world case studies and scenario exercises
- Hands-on practice with screening checklists
- Group discussions of borderline cases
Annual Refresher:
- E-learning modules for core content review
- Live update sessions for new regulatory guidance and case law
- Case study reviews of recent screening activities
- Knowledge assessment
On-the-Job Training:
- Mentoring for new screening personnel
- Supervised screening for first 5 AI systems
- Job shadowing during compliance reviews
Just-in-Time Training:
- Quick reference guides for each Article 5 prohibition
- Screening checklist guides
- Help desk support from AI Act Program Manager
10.5 Training Effectiveness Measurement
Assessment Methods:
- Written exams for knowledge retention
- Scenario-based exercises for practical application
- On-the-job observations during screening
- Feedback surveys for training quality
Competency Validation:
- Screening personnel: Must demonstrate ability to correctly screen 3 AI systems (including 1 borderline case) before independent screening
- All staff: Must pass knowledge assessments with minimum required scores
Training Metrics:
| Metric | Target | Frequency |
|---|---|---|
| Training completion rate | 100% | Quarterly |
| Assessment pass rate (first attempt) | >= 90% | Per training |
| Training effectiveness score (survey) | >= 4.0/5.0 | Per training |
| Time to competency (screening personnel) | < 30 days | Per person |
10.6 Training Records
Records Maintained:
- Training attendance records
- Assessment scores
- Competency validations
- Refresher training completion
- Individual training transcripts
Retention: 10 years (to align with EU AI Act documentation retention)
Access: AI Act Program Manager, HR, Internal Audit, Competent Authorities (upon request)
DEFINITIONS
| Term | Definition | Source |
|---|---|---|
| Prohibited AI Practice | An AI practice that is banned under Article 5 of the EU AI Act due to its unacceptable risk to fundamental rights | EU AI Act Article 5 |
| Subliminal Technique | A technique that deploys components below the threshold of conscious awareness to materially distort behaviour | EU AI Act Article 5(1)(a) |
| Social Scoring | Evaluating or classifying natural persons based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental treatment | EU AI Act Article 5(1)(c) |
| Biometric Categorisation | Using biometric data to categorise natural persons according to specific categories such as race, political opinions, or religious beliefs | EU AI Act Article 5(1)(g) |
| Real-Time Remote Biometric Identification | Using AI to identify natural persons at a distance in real time in publicly accessible spaces, typically through facial recognition | EU AI Act Article 5(1)(h) |
| Emotion Inference | Using AI to infer the emotional state of a natural person based on biometric data or behavioural indicators | EU AI Act Article 5(1)(f) |
| Predictive Policing | Using AI to assess or predict the risk that a natural person will commit a criminal offence, based solely on profiling or on assessing personality traits and characteristics | EU AI Act Article 5(1)(d) |
| Screening | The process of assessing an AI system against Article 5 prohibited practices before deployment | This Standard |
LINK WITH AI ACT AND ISO42001
12.1 EU AI Act Regulatory Mapping
This standard implements the following EU AI Act requirements:
| EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls) |
|---|---|---|---|
| Prohibited practices - subliminal techniques | Article 5(1)(a) | Prohibition on AI deploying subliminal techniques beyond consciousness | PROH-001, PROH-002 |
| Prohibited practices - exploitation of vulnerabilities | Article 5(1)(b) | Prohibition on AI exploiting vulnerabilities due to age, disability, or situation | PROH-001, PROH-002 |
| Prohibited practices - social scoring | Article 5(1)(c) | Prohibition on social scoring leading to detrimental treatment | PROH-001, PROH-004 |
| Prohibited practices - predictive policing | Article 5(1)(d) | Prohibition on profiling-only predictive policing | PROH-001, PROH-004 |
| Prohibited practices - facial recognition scraping | Article 5(1)(e) | Prohibition on untargeted facial recognition database building | PROH-001, PROH-003 |
| Prohibited practices - emotion inference | Article 5(1)(f) | Prohibition on emotion inference in workplace/education | PROH-001, PROH-003 |
| Prohibited practices - biometric categorisation | Article 5(1)(g) | Prohibition on biometric categorisation by protected characteristics | PROH-001, PROH-003 |
| Prohibited practices - real-time biometric ID | Article 5(1)(h) | Prohibition on real-time remote biometric identification for law enforcement | PROH-001, PROH-003 |
| Ongoing compliance | Article 5 (general) | Ongoing obligation to ensure no prohibited practice is deployed | PROH-005 |
12.2 ISO/IEC 42001:2023 Alignment
This standard aligns with ISO/IEC 42001:2023 as follows:
| ISO 42001 Clause | Requirement | Implementation in This Standard |
|---|---|---|
| Clause 6.1: Actions to address risks | Identify and address risks including compliance risks | PROH-001 (screening), PROH-005 (monitoring) |
| Clause 8.1: Operational planning and control | Plan and control processes to meet requirements | PROH-001 through PROH-004 (preventive controls) |
| Clause 9.1: Monitoring, measurement, analysis | Monitor and measure AI management system performance | PROH-005 (ongoing monitoring) |
| Clause 10.1: Nonconformity and corrective action | Address nonconformities and take corrective action | PROH-005 (incident response) |
12.3 Relationship to Other Standards
This prohibited practices standard integrates with other AI Act standards:
| Related Standard | Integration Point | Rationale |
|---|---|---|
| STD-AI-001: Classification | Classification must include prohibited practice screening | Systems must be screened for prohibited practices as part of classification |
| STD-AI-002: Risk Management | Prohibited practices represent unacceptable risk level | Risk management framework must identify and prevent prohibited practices |
| STD-AI-003: Data Governance | Data used in biometric and scoring systems must be governed | Biometric data and scoring data require specific governance controls |
| STD-AI-006: Transparency | Prohibited practice screening results inform transparency obligations | Screening documentation supports transparency requirements |
| STD-AI-007: Human Oversight | Human oversight required for borderline cases | Human review essential for systems near prohibited practice boundaries |
| STD-AI-013: Incident Management | Prohibited practice discoveries are critical incidents | Incident management procedures must cover prohibited practice discoveries |
| STD-AI-014: Literacy and Training | Staff must be trained on prohibited practices | Training curriculum must include Article 5 prohibited practices |
12.4 References and Related Documents
EU AI Act (Regulation (EU) 2024/1689):
- Article 5: Prohibited AI Practices
- Article 5(1)(a): Subliminal techniques
- Article 5(1)(b): Exploitation of vulnerabilities
- Article 5(1)(c): Social scoring
- Article 5(1)(d): Predictive policing
- Article 5(1)(e): Untargeted facial recognition scraping
- Article 5(1)(f): Emotion inference in workplace/education
- Article 5(1)(g): Biometric categorisation by protected characteristics
- Article 5(1)(h): Real-time remote biometric identification
- Article 99(3): Penalties for prohibited practices (EUR 35 million or 7% of global annual turnover, whichever is higher)
- Recitals 28-45: Explanatory context for prohibited practices
ISO/IEC Standards:
- ISO/IEC 42001:2023: Information technology -- Artificial intelligence -- Management system
Internal Documents:
- POL-AI-001: Artificial Intelligence Policy (parent policy)
- STD-AI-001: AI System Classification Standard
- STD-AI-002: AI Risk Management Standard
- STD-AI-003: AI Data Governance Standard
- STD-AI-006: AI Transparency Standard
- STD-AI-007: AI Human Oversight Standard
- STD-AI-013: AI Incident Management Standard
- STD-AI-014: AI Literacy and Training Standard
- PROC-AI-PROH-001, -002: Prohibited practices procedures
APPROVAL AND AUTHORIZATION
| Role | Name | Title | Signature | Date |
|---|---|---|---|---|
| Prepared By | Sarah Johnson | AI Act Program Manager | _________________ | ________ |
| Reviewed By | Legal Counsel | Legal Director | _________________ | ________ |
| Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _________________ | ________ |
| Approved By | Jane Doe | AI Governance Committee Chair | _________________ | ________ |
Effective Date: 2025-02-02 Next Review Date: 2026-02-02 Review Frequency: Annually or upon regulatory change
END OF STANDARD STD-AI-015
This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.