AI Deployer Obligations Standard
Document Type: Standard
Standard ID: STD-AI-016
Standard Title: AI Deployer Obligations Standard
Version: 1.0
Effective Date: 2026-08-02
Next Review Date: 2027-08-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Approved
Classification: Internal Use Only
TABLE OF CONTENTS
- Document History
- Objective
- Scope and Applicability
- Control Standard
- Supporting Procedures
- Compliance
- Roles and Responsibilities
- Exceptions
- Enforcement
- Key Performance Indicators (KPIs)
- Training Requirements
- Definitions
- Link with AI Act and ISO42001
DOCUMENT HISTORY
| Version | Date | Author | Changes | Approval Date | Approved By |
|---|---|---|---|---|---|
| 0.1 | 2026-07-01 | AI Act Program Manager | Initial draft | - | - |
| 0.2 | 2026-07-15 | AI Act Program Manager | Added FRIA and right to explanation controls | - | - |
| 0.3 | 2026-07-25 | AI Act Program Manager | Incorporated legal review feedback | - | - |
| 1.0 | 2026-08-02 | AI Act Program Manager | Final version approved | 2026-08-01 | Jane Doe, AI Governance Committee Chair |
OBJECTIVE
This standard defines requirements for deployers of high-risk AI systems under EU AI Act Articles 26, 27, and 86. It ensures that organisations deploying high-risk AI systems fulfil their obligations regarding use in accordance with instructions, human oversight, operational monitoring, log retention, worker information, fundamental rights impact assessment, and the right to explanation.
Primary Goals:
- Ensure high-risk AI systems are used in accordance with provider instructions
- Assign competent human oversight for all high-risk AI systems
- Establish monitoring, risk reporting, and incident management processes
- Retain automatically generated logs for the required period
- Inform workers and affected persons about AI system use
- Conduct fundamental rights impact assessments before deployment
- Enable affected persons to obtain explanations of AI-based decisions
SCOPE AND APPLICABILITY
2.1 Mandatory Applicability
This standard is mandatory for:
- All high-risk AI systems deployed by the organisation
- All personnel responsible for deploying or operating high-risk AI systems
- All AI systems used in the workplace that affect workers
- All AI systems whose output influences decisions affecting natural persons
2.2 Recommended Applicability
This standard is recommended for:
- Limited-risk AI systems used in operational decisions
- AI systems not classified as high-risk but used in sensitive contexts
- Third-party AI systems integrated into business processes
2.3 Deployer Obligations Covered
- Use in accordance with instructions (Article 26(1))
- Human oversight assignment (Article 26(2))
- Input data relevance and representativeness (Article 26(4))
- Operational monitoring and risk reporting (Article 26(5))
- Log retention (Article 26(6))
- Worker and affected person information (Article 26(7), Article 26(11))
- Fundamental rights impact assessment (Article 27)
- Right to explanation (Article 86)
2.4 Out of Scope
- Provider obligations (covered by STD-AI-015)
- AI system development and training (covered by provider standards)
- General-purpose AI model obligations (covered by STD-AI-015)
- Non-AI automated systems
CONTROL STANDARD
Control DEP-001: Use in Accordance with Instructions
Control ID: DEP-001
Control Name: Use in Accordance with Provider Instructions
Control Type: Preventive
Control Frequency: Per deployment, ongoing
Risk Level: High
Control Objective
Ensure all high-risk AI systems are used in accordance with the provider's instructions for use, including technical and organisational measures, and that input data is relevant and sufficiently representative as required by Article 26(1) and Article 26(4).
Control Requirements
CR-001.1: Instruction Review and Implementation
Obtain, review, and implement provider instructions for use before deploying any high-risk AI system.
Instruction Compliance Checklist:
| Requirement | Description | Article Reference | Verification |
|---|---|---|---|
| Obtain Instructions | Receive instructions for use from provider | Article 26(1) | Instructions on file |
| Review Instructions | Assess technical and organisational requirements | Article 26(1) | Review record signed |
| Implement Technical Measures | Configure system per technical instructions | Article 26(1) | Configuration verified |
| Implement Organisational Measures | Establish processes per organisational instructions | Article 26(1) | Processes documented |
| Assess Input Data | Ensure input data relevance and representativeness | Article 26(4) | Data assessment completed |
| Document Compliance | Record all compliance activities | Article 26(1) | Compliance record maintained |
Mandatory Actions:
- Obtain and review provider instructions for use before deployment
- Implement technical and organisational measures per provider instructions
- Ensure input data relevance and representativeness per Article 26(4)
- Document compliance with instructions for use
- Review compliance upon system updates or instruction changes
- Maintain records of all instructions for use received
CR-001.2: Input Data Management
Ensure input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
Input Data Requirements:
| Requirement | Description | Verification Method | Frequency |
|---|---|---|---|
| Relevance | Input data must be relevant to the AI system's intended purpose | Data relevance assessment | Per deployment |
| Representativeness | Input data must be sufficiently representative | Statistical analysis | Per deployment, periodic review |
| Quality | Input data must meet quality standards defined by provider | Data quality checks | Ongoing |
| Currency | Input data must be current and up to date | Data freshness verification | Ongoing |
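The relevance and representativeness checks above can be partially automated. The sketch below is illustrative only, not a provider-mandated method: it compares category proportions in a batch of input data against a reference distribution taken from the deployment data assessment and flags drift beyond a tolerance. The category names, reference shares, and 5% tolerance are assumptions for illustration.

```python
from collections import Counter

def check_representativeness(batch, reference, tolerance=0.05):
    """Flag categories whose share in `batch` deviates from the expected
    `reference` proportion by more than `tolerance` (absolute difference).

    batch:     list of category labels observed in the input data
    reference: dict mapping category -> expected proportion (sums to ~1.0)
    """
    counts = Counter(batch)
    total = len(batch)
    findings = []
    for category, expected in reference.items():
        observed = counts.get(category, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            findings.append((category, expected, round(observed, 3)))
    return findings  # an empty list means no drift beyond tolerance

# Illustrative reference distribution (assumed, not from a real provider):
reference = {"segment_a": 0.50, "segment_b": 0.30, "segment_c": 0.20}
batch = ["segment_a"] * 70 + ["segment_b"] * 20 + ["segment_c"] * 10
print(check_representativeness(batch, reference))
# [('segment_a', 0.5, 0.7), ('segment_b', 0.3, 0.2), ('segment_c', 0.2, 0.1)]
```

Findings from such a check feed the data relevance assessment recorded under CR-001.1; they do not replace it.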
Evidence Required:
- Instructions for use from provider
- Deployment checklists
- Input data relevance assessments
- Compliance verification records
- System configuration documentation
- Update review records
Audit Verification:
- Verify instructions for use obtained before deployment
- Confirm technical and organisational measures implemented
- Check input data assessments completed
- Validate compliance documentation maintained
Control DEP-002: Human Oversight Assignment
Control ID: DEP-002
Control Name: Human Oversight Assignment and Competency
Control Type: Preventive
Control Frequency: Per deployment, ongoing review
Risk Level: Critical
Control Objective
Assign human oversight of high-risk AI systems to natural persons who have the necessary competence, training, authority, and support to fulfil that role effectively, as required by Article 26(2).
Control Requirements
CR-002.1: Oversight Personnel Identification and Assignment
Identify and assign qualified natural persons to provide human oversight for each high-risk AI system.
Oversight Assignment Requirements:
| Requirement | Description | Article Reference | Verification |
|---|---|---|---|
| Identify Roles | Define oversight roles for each AI system | Article 26(2) | Role descriptions documented |
| Assess Competency | Verify personnel have necessary competence | Article 26(2) | Competency assessments completed |
| Provide Training | Train oversight personnel on AI system and limitations | Article 26(2) | Training certificates on file |
| Grant Authority | Provide authority to override or intervene | Article 26(2) | Authority delegation documented |
| Ensure Support | Provide adequate resources and tools | Article 26(2) | Resource allocation confirmed |
| Document Assignments | Record all assignments and assessments | Article 26(2) | Assignment records maintained |
Competency Requirements for Oversight Personnel:
| Competency Area | Description | Assessment Method | Minimum Standard |
|---|---|---|---|
| AI System Knowledge | Understanding of the specific AI system's operation | Written assessment | ≥90% |
| Limitations Awareness | Knowledge of system limitations and failure modes | Scenario-based assessment | Pass/fail |
| Override Procedures | Ability to intervene and override system decisions | Practical exercise | Demonstrated competency |
| Risk Recognition | Ability to identify risks and anomalies | Case study analysis | ≥90% |
| Regulatory Awareness | Understanding of EU AI Act deployer obligations | Knowledge check | ≥80% |
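Where assessment results are recorded electronically, the minimum standards in the table above can be checked programmatically. A minimal sketch, assuming a 0-100 score scale and the field names shown (both are illustrative, not part of this standard):

```python
# Minimum standards from the competency table above.
# Score-based areas use a 0-100 scale; the other areas are pass/fail.
SCORE_MINIMUMS = {
    "ai_system_knowledge": 90,   # written assessment, >= 90%
    "risk_recognition": 90,      # case study analysis, >= 90%
    "regulatory_awareness": 80,  # knowledge check, >= 80%
}
PASS_FAIL_AREAS = ("limitations_awareness", "override_procedures")

def oversight_candidate_qualifies(scores: dict, pass_fail: dict) -> bool:
    """Return True only if every competency area meets its minimum standard."""
    score_ok = all(scores.get(area, 0) >= minimum
                   for area, minimum in SCORE_MINIMUMS.items())
    practical_ok = all(pass_fail.get(area, False) for area in PASS_FAIL_AREAS)
    return score_ok and practical_ok

# Example: candidate fails only the regulatory knowledge check (75 < 80).
print(oversight_candidate_qualifies(
    {"ai_system_knowledge": 95, "risk_recognition": 92, "regulatory_awareness": 75},
    {"limitations_awareness": True, "override_procedures": True},
))  # -> False
```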
Mandatory Actions:
- Identify human oversight roles for each high-risk AI system
- Verify competency of designated oversight personnel
- Provide adequate training on AI system operation and limitations
- Grant override and intervention authority to oversight personnel
- Ensure adequate resources and support for oversight functions
- Document all oversight assignments and competency assessments
Evidence Required:
- Human oversight assignment records
- Competency assessment reports
- Training certificates for oversight personnel
- Authority delegation documents
- Resource allocation records
- Oversight role descriptions
Audit Verification:
- Verify oversight personnel assigned for all high-risk AI systems
- Confirm competency assessments completed and passed
- Check training certificates valid and current
- Validate authority delegation documents signed
- Verify adequate resources allocated
Control DEP-003: Operational Monitoring and Risk Reporting
Control ID: DEP-003
Control Name: Operational Monitoring and Risk Reporting
Control Type: Detective
Control Frequency: Continuous monitoring, per incident
Risk Level: High
Control Objective
Monitor the operation of high-risk AI systems on the basis of the instructions for use and report risks, serious incidents, and malfunctions to providers and relevant authorities as required by Article 26(5).
Control Requirements
CR-003.1: Monitoring Framework
Implement monitoring processes aligned with provider instructions for use.
Monitoring Requirements:
| Monitoring Area | Description | Method | Frequency | Threshold |
|---|---|---|---|---|
| System Performance | Monitor AI system performance metrics | Automated monitoring | Continuous | Per provider instructions |
| Output Quality | Assess quality and accuracy of AI outputs | Sampling and review | Daily/weekly | Quality thresholds defined |
| Anomaly Detection | Detect unusual patterns or outputs | Automated alerts | Continuous | Alert thresholds defined |
| Risk Indicators | Monitor risk indicators per instructions | Dashboard monitoring | Continuous | Risk thresholds defined |
| User Feedback | Collect and analyse user feedback | Feedback mechanisms | Ongoing | Trend analysis |
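Where monitoring is automated, threshold checks can be expressed directly in code. The sketch below is a minimal illustration: the metric names and threshold values are placeholders and must be replaced with the thresholds defined in the provider's instructions for use, as the table above requires.

```python
from dataclasses import dataclass

@dataclass
class MetricThreshold:
    """A monitored metric and its minimum acceptable value.

    Threshold values must come from the provider's instructions for use
    (monitoring under Article 26(5) is performed on the basis of those
    instructions); the numbers below are placeholders.
    """
    name: str
    minimum_acceptable: float

# Placeholder thresholds -- replace with values from the instructions for use.
THRESHOLDS = [
    MetricThreshold("output_accuracy", 0.95),
    MetricThreshold("availability", 0.999),
]

def evaluate_metrics(observed: dict) -> list[str]:
    """Return an alert message for every metric below its threshold."""
    alerts = []
    for t in THRESHOLDS:
        value = observed.get(t.name)
        if value is not None and value < t.minimum_acceptable:
            alerts.append(f"ALERT: {t.name}={value} below threshold "
                          f"{t.minimum_acceptable}; investigate per Level 1")
    return alerts

print(evaluate_metrics({"output_accuracy": 0.91, "availability": 0.9995}))
```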
CR-003.2: Incident Reporting and Escalation
Report serious incidents, malfunctions, and risks to providers and authorities.
Incident Reporting Requirements:
| Incident Type | Report To | Timeframe | Article Reference |
|---|---|---|---|
| Serious Incident | Provider + Market Surveillance Authority | Immediately upon identification | Article 26(5) |
| Risk to Health/Safety | Provider + Suspend use | Immediately | Article 26(5) |
| Risk to Fundamental Rights | Provider + Suspend use | Immediately | Article 26(5) |
| Malfunction | Provider | Within 24 hours | Article 26(5) |
| Performance Degradation | Provider | Within 72 hours | Article 26(5) |
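The reporting rules above can be encoded so that recipients, deadlines, and suspension requirements are resolved mechanically when an incident is logged. A minimal sketch of that mapping (the incident type keys are assumptions); it supplements, not replaces, the escalation procedure below:

```python
from datetime import datetime, timedelta

# Reporting rules from the incident reporting table above.
# A deadline of None means "immediately upon identification".
REPORTING_RULES = {
    "serious_incident":        {"report_to": ["provider", "market_surveillance_authority"],
                                "deadline": None, "suspend_use": False},
    "risk_health_safety":      {"report_to": ["provider"],
                                "deadline": None, "suspend_use": True},
    "risk_fundamental_rights": {"report_to": ["provider"],
                                "deadline": None, "suspend_use": True},
    "malfunction":             {"report_to": ["provider"],
                                "deadline": timedelta(hours=24), "suspend_use": False},
    "performance_degradation": {"report_to": ["provider"],
                                "deadline": timedelta(hours=72), "suspend_use": False},
}

def reporting_obligation(incident_type: str, identified_at: datetime) -> dict:
    """Resolve who must be notified, by when, and whether use must be suspended."""
    rule = REPORTING_RULES[incident_type]
    due = identified_at if rule["deadline"] is None else identified_at + rule["deadline"]
    return {"report_to": rule["report_to"], "report_by": due,
            "suspend_use": rule["suspend_use"]}

print(reporting_obligation("malfunction", datetime(2026, 9, 1, 10, 0)))
# -> report to provider by 2026-09-02 10:00, no suspension required
```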
Escalation Procedure:
| Level | Trigger | Action | Decision Authority |
|---|---|---|---|
| Level 1 | Performance anomaly detected | Investigate and document | AI System Operator |
| Level 2 | Confirmed malfunction or risk | Notify provider, assess severity | AI Act Program Manager |
| Level 3 | Serious incident or risk to persons | Suspend use, notify authorities | AI Governance Committee |
| Level 4 | Imminent risk to health/safety | Immediate cessation, emergency notification | Executive Management |
Mandatory Actions:
- Implement monitoring processes aligned with provider instructions
- Define risk thresholds and alert mechanisms
- Establish escalation procedures for identified risks
- Suspend or cease use of AI system when risk to health, safety, or fundamental rights is identified
- Report serious incidents to provider and market surveillance authority
- Maintain comprehensive monitoring and incident records
Evidence Required:
- Monitoring logs and dashboards
- Risk assessment reports
- Incident notification records
- Suspension and cessation records
- Escalation procedure documentation
- Authority communication records
Audit Verification:
- Verify monitoring processes implemented per instructions
- Confirm risk thresholds defined and active
- Check incident reports submitted within required timeframes
- Validate suspension/cessation decisions documented
- Verify authority notifications completed
Control DEP-004: Log Retention
Control ID: DEP-004
Control Name: Automatic Log Retention and Management
Control Type: Preventive
Control Frequency: Continuous, periodic review
Risk Level: Medium
Control Objective
Retain the logs automatically generated by high-risk AI systems, to the extent such logs are under the deployer's control, for a period appropriate to the intended purpose of the system and of at least six months, as required by Article 26(6), unless a different period is provided for in applicable Union or national law, in particular Union law on the protection of personal data.
Control Requirements
CR-004.1: Log Storage and Retention
Configure and maintain log storage for all high-risk AI systems.
Log Retention Requirements:
| Requirement | Description | Minimum Standard | Verification |
|---|---|---|---|
| Retention Period | Minimum retention of automatically generated logs | 6 months (or longer per applicable law) | Retention policy documented |
| Log Completeness | All automatically generated logs must be retained | 100% of logs captured | Log capture verification |
| Log Integrity | Logs must be protected against tampering | Integrity controls implemented | Integrity checks passed |
| Log Accessibility | Logs must be accessible to authorities on request | Access procedures defined | Access test completed |
| Storage Security | Logs must be stored securely | Encryption and access controls | Security audit passed |
Log Types to Retain:
| Log Type | Description | Retention Period | Storage |
|---|---|---|---|
| System Logs | Automatically generated operational logs | Minimum 6 months | Secure storage |
| Decision Logs | AI system decision outputs and parameters | Minimum 6 months | Secure storage |
| Input Logs | Input data processed by the AI system | Minimum 6 months | Secure storage |
| Error Logs | System errors, warnings, and anomalies | Minimum 6 months | Secure storage |
| Access Logs | User access and interaction logs | Minimum 6 months | Secure storage |
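Purge jobs should enforce the minimum retention period as a hard guard. The sketch below uses 183 days as one possible reading of "at least six months"; this constant is an assumption, and the applicable period (including any longer statutory period) must be confirmed with Legal.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed interpretation of "at least 6 months"; confirm with Legal.
MINIMUM_RETENTION = timedelta(days=183)

def is_deletable(log_created_at: datetime, now: Optional[datetime] = None) -> bool:
    """A log record may be purged only after the minimum retention period.

    Deleting earlier would breach Article 26(6) and is a prohibited
    exception under section 7.3 of this standard.
    """
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MINIMUM_RETENTION

created = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(is_deletable(created, now=datetime(2026, 8, 1, tzinfo=timezone.utc)))   # False
print(is_deletable(created, now=datetime(2026, 9, 15, tzinfo=timezone.utc)))  # True
```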
Mandatory Actions:
- Configure log storage for all high-risk AI systems
- Implement retention policies of at least 6 months
- Ensure log integrity and protection against tampering
- Provide logs to market surveillance authorities on request
- Document log retention policies and storage configurations
- Conduct periodic verification of log availability and integrity
Evidence Required:
- Log storage configuration documentation
- Retention policy documents
- Log integrity verification records
- Authority access and request logs
- Storage capacity monitoring records
- Periodic review reports
Audit Verification:
- Verify log storage configured for all high-risk AI systems
- Confirm retention period meets minimum 6-month requirement
- Check log integrity controls in place
- Validate authority access procedures tested
- Verify periodic reviews conducted
Control DEP-005: Worker and Affected Person Information
Control ID: DEP-005
Control Name: Worker and Affected Person Information and Notification
Control Type: Preventive
Control Frequency: Per deployment, ongoing
Risk Level: High
Control Objective
Inform workers' representatives and affected workers before putting a high-risk AI system into use in the workplace (Article 26(7)), and inform natural persons subject to AI-assisted decisions that they are subject to the use of the high-risk AI system (Article 26(11)).
Control Requirements
CR-005.1: Worker Information (Article 26(7))
Inform workers' representatives and affected workers before deploying high-risk AI systems in the workplace.
Worker Notification Requirements:
| Requirement | Description | Timing | Recipient |
|---|---|---|---|
| Identify Affected Workers | Determine which workers are affected by AI deployment | Before deployment | Internal assessment |
| Notify Representatives | Inform workers' representatives per applicable law | Before deployment | Workers' representatives |
| Provide System Information | Explain what the AI system does, how it works, and its impact | Before deployment | All affected workers |
| Explain Rights | Inform workers of their rights regarding AI system use | Before deployment | All affected workers |
| Document Notifications | Record all notifications and acknowledgments | At notification | Compliance records |
| Update Notifications | Provide updated information when system use changes | Upon material change | All affected workers |
CR-005.2: Affected Person Information (Article 26(11))
Inform natural persons subject to AI-assisted decisions.
Affected Person Notification Requirements:
| Requirement | Description | Timing | Method |
|---|---|---|---|
| Identify Affected Persons | Determine natural persons subject to AI decisions | Before use | Process mapping |
| Provide Notice | Inform persons they are subject to AI system use | Before or at point of decision | Clear, accessible notice |
| Explain AI Role | Describe the role of the AI system in the decision | At point of decision | Written or electronic communication |
| Inform of Rights | Advise of right to explanation (Article 86) | At point of decision | Written or electronic communication |
Mandatory Actions:
- Identify all affected workers and workers' representatives
- Notify workers' representatives before workplace AI deployment
- Provide clear and accessible information about AI system use and its implications
- Inform natural persons subject to AI-based decisions per Article 26(11)
- Document all notifications and acknowledgments
- Update notifications when AI system use changes materially
Evidence Required:
- Worker notification records
- Workers' representative communication records
- Acknowledgment and receipt records
- Information materials provided
- Natural person notification records
- Update notification records
Audit Verification:
- Verify worker notifications completed before deployment
- Confirm workers' representatives informed per applicable law
- Check affected person notifications provided at point of decision
- Validate acknowledgment records maintained
- Verify update notifications issued upon material changes
Control DEP-006: Fundamental Rights Impact Assessment
Control ID: DEP-006
Control Name: Fundamental Rights Impact Assessment (FRIA)
Control Type: Preventive
Control Frequency: Before first deployment, upon material changes
Risk Level: Critical
Control Objective
Conduct a fundamental rights impact assessment (FRIA) before putting a high-risk AI system into use, as required by Article 27. The obligation applies to deployers that are bodies governed by public law or private entities providing public services, and to deployers of the high-risk AI systems listed in Annex III, points 5(b) and (c) (creditworthiness assessment and life/health insurance risk assessment and pricing).
Control Requirements
CR-006.1: FRIA Scope and Triggers
Determine when a FRIA is required and define its scope.
FRIA Mandatory Triggers:
| Trigger | Description | Article Reference |
|---|---|---|
| Public Body / Public Services | High-risk AI deployment by a body governed by public law or a private entity providing public services | Article 27(1) |
| Creditworthiness Assessment | High-risk AI systems used to evaluate creditworthiness or establish credit scores (Annex III, point 5(b)) | Article 27(1) |
| Life and Health Insurance | High-risk AI systems used for risk assessment and pricing in life and health insurance (Annex III, point 5(c)) | Article 27(1) |
| New AI System | First use of a high-risk AI system by the deployer | Article 27(2) |
| Material Change | Any FRIA element has changed or is no longer up to date | Article 27(2) |
Note: High-risk AI systems intended to be used as safety components in the management and operation of critical infrastructure (Annex III, point 2) are excepted from the FRIA obligation by Article 27(1).
CR-006.2: FRIA Content Requirements
Complete all required elements of the FRIA as specified in Article 27.
FRIA Required Content:
| Element | Description | Article Reference | Detail Required |
|---|---|---|---|
| Process Description | Describe deployer processes where AI will be used | Article 27(1)(a) | Detailed process mapping |
| Period and Frequency | Define deployment period and frequency of use | Article 27(1)(b) | Start date, duration, frequency |
| Affected Persons | Identify categories of persons and groups likely to be affected | Article 27(1)(c) | Comprehensive stakeholder mapping |
| Specific Risks | Assess specific risks of harm to identified persons and groups | Article 27(1)(d) | Risk analysis per affected group |
| Human Oversight | Document human oversight measures per the instructions for use | Article 27(1)(e) | Oversight implementation details |
| Risk Mitigation | Define measures to be taken if risks materialise, including internal governance and complaint mechanisms | Article 27(1)(f) | Mitigation plan with responsibilities |
| Authority Notification | Notify market surveillance authority of FRIA results | Article 27(3) | Notification record |
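For tracking purposes, a FRIA register entry can mirror the required elements one-to-one. The dataclass below is a simplified sketch: free-text fields stand in for the structured documents a real register would reference, and the field names are illustrative.

```python
from dataclasses import dataclass, fields

@dataclass
class FriaRecord:
    """One field per required FRIA element, Article 27(1)(a)-(f),
    plus the Article 27(3) notification record."""
    process_description: str         # 27(1)(a)
    period_and_frequency: str        # 27(1)(b)
    affected_persons: str            # 27(1)(c)
    specific_risks: str              # 27(1)(d)
    human_oversight_measures: str    # 27(1)(e)
    risk_mitigation_measures: str    # 27(1)(f)
    authority_notification_ref: str  # 27(3) notification record ID

def missing_elements(record: FriaRecord) -> list[str]:
    """Names of required elements that are still empty."""
    return [f.name for f in fields(record) if not getattr(record, f.name).strip()]

draft = FriaRecord(
    process_description="Credit decision support in retail lending",
    period_and_frequency="Continuous use from 2026-09-01",
    affected_persons="Loan applicants; guarantors",
    specific_risks="", human_oversight_measures="",
    risk_mitigation_measures="", authority_notification_ref="",
)
print(missing_elements(draft))
# -> ['specific_risks', 'human_oversight_measures',
#     'risk_mitigation_measures', 'authority_notification_ref']
```

A FRIA should not pass step 6 (Review and Approval) while any element remains empty.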
FRIA Process Steps:
| Step | Activity | Responsibility | Output |
|---|---|---|---|
| 1. Scoping | Define FRIA scope and methodology | AI Act Program Manager | FRIA scope document |
| 2. Stakeholder Mapping | Identify affected persons and groups | FRIA Team | Stakeholder register |
| 3. Risk Assessment | Assess fundamental rights risks | FRIA Team | Risk assessment report |
| 4. Impact Analysis | Analyse potential impacts on fundamental rights | FRIA Team + Legal | Impact analysis |
| 5. Mitigation Planning | Define measures to mitigate identified risks | FRIA Team | Mitigation plan |
| 6. Review and Approval | Review FRIA for completeness and accuracy | AI Governance Committee | Approved FRIA |
| 7. Authority Notification | Notify market surveillance authority | AI Act Program Manager | Notification record |
| 8. Monitoring | Monitor effectiveness of mitigation measures | AI Act Program Manager | Monitoring reports |
Mandatory Actions:
- Describe deployer processes in which the AI system will be used
- Define the period of time and frequency of intended use
- Identify categories of natural persons and groups likely to be affected
- Assess specific risks of harm to identified persons and groups
- Document human oversight measures and their implementation
- Define risk mitigation and governance measures
- Notify the market surveillance authority of FRIA results
- Review and update FRIA upon material changes
Evidence Required:
- Completed FRIA reports
- Market surveillance authority notification records
- Risk mitigation plans
- Stakeholder consultation records
- Human oversight implementation documentation
- FRIA update and review records
Audit Verification:
- Verify FRIA completed before deployment for all required systems
- Confirm all Article 27(1) elements addressed
- Check market surveillance authority notified
- Validate risk mitigation measures implemented
- Verify FRIA updated upon material changes
Control DEP-007: Right to Explanation
Control ID: DEP-007
Control Name: Right to Explanation for AI-Based Decisions
Control Type: Preventive
Control Frequency: Per request, ongoing
Risk Level: High
Control Objective
Enable any affected person subject to a decision taken by the deployer on the basis of the output from a high-risk AI system, where that decision produces legal effects or similarly significantly affects the person, to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken, as required by Article 86.
Control Requirements
CR-007.1: Explanation Mechanisms
Implement mechanisms to provide clear and meaningful explanations of AI-based decisions.
Explanation Requirements:
| Requirement | Description | Article Reference | Standard |
|---|---|---|---|
| Role of AI System | Explain the role of the AI system in the decision | Article 86(1) | Clear and understandable |
| Main Decision Elements | Describe the main elements of the decision taken | Article 86(1) | Comprehensive |
| Meaningful Explanation | Explanation must be sufficiently detailed and understandable | Article 86(1) | Plain language |
| Timely Response | Provide explanation within reasonable timeframe | Article 86 | Within 30 days of request |
| Accessible Format | Explanation must be accessible to the affected person | Article 86 | Appropriate format |
Explanation Content Template:
| Element | Description | Example |
|---|---|---|
| Decision Summary | What decision was made | "Your application was [approved/rejected]" |
| AI System Role | How the AI system contributed | "The AI system assessed [factors] and provided a [score/recommendation]" |
| Key Factors | Main elements that influenced the decision | "The primary factors considered were [list factors]" |
| Human Involvement | Role of human oversight in the decision | "A human reviewer [reviewed/confirmed/overrode] the AI recommendation" |
| Rights Information | Information about further recourse | "You have the right to [appeal/complain] by [method]" |
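The content template above lends itself to a simple fill-in check that refuses to produce an explanation with missing elements. A minimal sketch, with assumed placeholder names:

```python
EXPLANATION_TEMPLATE = """\
Decision summary: Your application was {outcome}.
AI system role: The AI system assessed {factors_assessed} and provided a {output_type}.
Key factors: The primary factors considered were {key_factors}.
Human involvement: A human reviewer {human_action} the AI recommendation.
Your rights: You have the right to {recourse} by {recourse_method}."""

def render_explanation(**elements: str) -> str:
    """Fill the five-element template from the table above. A KeyError on
    a missing element is deliberate: an incomplete explanation must not
    be sent to the affected person."""
    return EXPLANATION_TEMPLATE.format(**elements)

print(render_explanation(
    outcome="rejected",
    factors_assessed="income stability and repayment history",
    output_type="risk score",
    key_factors="debt-to-income ratio; recent missed payments",
    human_action="reviewed and confirmed",
    recourse="request a review and lodge a complaint",
    recourse_method="writing to the contact point named in this letter",
))
```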
CR-007.2: Explanation Process
Establish and maintain processes for handling explanation requests.
Explanation Process Steps:
| Step | Activity | Timeframe | Responsibility |
|---|---|---|---|
| 1. Receive Request | Log explanation request | Day 0 | Customer Service / Contact Point |
| 2. Acknowledge | Confirm receipt of request | Within 5 business days | Customer Service |
| 3. Gather Information | Collect decision details and AI system outputs | Days 1-10 | AI System Operator |
| 4. Prepare Explanation | Draft clear and meaningful explanation | Days 10-20 | AI Act Program Manager |
| 5. Review | Review explanation for accuracy and clarity | Days 20-25 | Legal |
| 6. Deliver | Provide explanation to affected person | Within 30 days | Customer Service |
| 7. Record | Document explanation provided | At delivery | Compliance |
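Deadline tracking for the process above mixes business days (acknowledgement) and calendar days (delivery). A minimal sketch that skips weekends but ignores public holidays (a simplification):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days, skipping weekends only."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday
            days -= 1
    return current

def explanation_deadlines(received: date) -> dict:
    """Key dates for one explanation request, per the process table above."""
    return {
        "acknowledge_by": add_business_days(received, 5),  # step 2
        "deliver_by": received + timedelta(days=30),       # step 6
    }

print(explanation_deadlines(date(2026, 9, 4)))  # a Friday
# -> acknowledge_by 2026-09-11, deliver_by 2026-10-04
```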
Mandatory Actions:
- Implement mechanisms to provide clear and meaningful explanations of AI-based decisions
- Train staff on providing explanations to affected persons
- Document all explanation requests and explanations provided
- Maintain explanation records for audit and compliance purposes
- Ensure explanations are accessible and understandable to affected persons
- Establish processes for handling explanation requests within reasonable timeframes
Evidence Required:
- Explanation request logs
- Explanation records and responses
- Staff training records on explanation provision
- Explanation process documentation
- Response time tracking records
- Affected person feedback records
Audit Verification:
- Verify explanation mechanisms implemented
- Confirm staff trained on providing explanations
- Check explanation requests handled within required timeframes
- Validate explanation records complete and accurate
- Verify explanations clear, meaningful, and accessible
SUPPORTING PROCEDURES
This standard is implemented through the following detailed procedures:
Procedure PROC-AI-DEP-001: Deployer Compliance Procedure
Purpose: Define the step-by-step process for deployer compliance with instructions for use
Owner: AI Act Program Manager
Implements: Controls DEP-001, DEP-004
Procedure Steps:
- Obtain instructions for use from provider - Control DEP-001
- Review and document compliance requirements
- Implement technical and organisational measures
- Assess input data relevance and representativeness
- Configure log retention - Control DEP-004
- Document compliance and maintain records
Outputs:
- Deployment compliance checklists
- Input data assessments
- Log retention configurations
- Compliance records
Procedure PROC-AI-DEP-002: Human Oversight Procedure
Purpose: Define the process for assigning and maintaining human oversight
Owner: AI Act Program Manager
Implements: Control DEP-002
Procedure Steps:
- Identify oversight roles per AI system
- Assess candidate competency
- Provide required training
- Formally assign oversight responsibilities
- Grant override and intervention authority
- Monitor oversight effectiveness
Outputs:
- Oversight assignment records
- Competency assessments
- Training certificates
- Authority delegation documents
Procedure PROC-AI-DEP-003: Monitoring and Incident Reporting Procedure
Purpose: Define the process for operational monitoring and incident reporting
Owner: AI Act Program Manager
Implements: Control DEP-003
Procedure Steps:
- Implement monitoring per provider instructions
- Define risk thresholds and alerts
- Detect and investigate anomalies
- Escalate per escalation procedure
- Report serious incidents to provider and authorities
- Suspend or cease use when required
Outputs:
- Monitoring dashboards
- Incident reports
- Authority notifications
- Suspension/cessation records
Procedure PROC-AI-DEP-004: Worker and Affected Person Notification Procedure
Purpose: Define the process for notifying workers and affected persons
Owner: HR Director
Implements: Control DEP-005
Procedure Steps:
- Identify affected workers and representatives
- Prepare notification materials
- Deliver notifications before deployment
- Collect acknowledgments
- Inform natural persons at point of decision
- Update notifications upon material changes
Outputs:
- Notification records
- Acknowledgment records
- Communication materials
Procedure PROC-AI-DEP-005: Fundamental Rights Impact Assessment Procedure
Purpose: Define the process for conducting FRIAs
Owner: AI Act Program Manager
Implements: Control DEP-006
Procedure Steps:
- Determine FRIA requirement and scope
- Map stakeholders and affected persons
- Assess fundamental rights risks
- Analyse potential impacts
- Define mitigation measures
- Submit for review and approval
- Notify market surveillance authority
- Monitor and update
Outputs:
- FRIA reports
- Authority notification records
- Mitigation plans
- Monitoring reports
Procedure PROC-AI-DEP-006: Right to Explanation Procedure
Purpose: Define the process for handling explanation requests
Owner: AI Act Program Manager
Implements: Control DEP-007
Procedure Steps:
- Receive and log explanation request
- Acknowledge request within 5 business days
- Gather decision details and AI system outputs
- Prepare clear and meaningful explanation
- Review explanation for accuracy
- Deliver explanation within 30 days
- Record and archive
Outputs:
- Explanation request logs
- Explanation records
- Response time tracking
COMPLIANCE
5.1 Compliance Monitoring
Monitoring Approach: Continuous automated monitoring supplemented by monthly manual reviews and quarterly comprehensive audits.
Compliance Metrics:
| Metric | Target | Measurement Method | Frequency | Owner |
|---|---|---|---|---|
| Instructions Compliance Rate | 100% | % of AI systems used per instructions | Quarterly | AI Act Program Manager |
| Human Oversight Coverage | 100% | % of systems with assigned oversight | Quarterly | AI Act Program Manager |
| FRIA Completion Rate | 100% | % of required FRIAs completed | Quarterly | AI Act Program Manager |
| Worker Notification Rate | 100% | % of deployments with worker notification | Quarterly | HR Director |
| Incident Reporting Timeliness | 100% | % of incidents reported on time | Per incident | AI Act Program Manager |
| Log Retention Compliance | 100% | % of systems with compliant log retention | Quarterly | IT Director |
| Explanation Response Time | ≤30 days | Average response time for explanation requests | Quarterly | AI Act Program Manager |
Monitoring Tools:
- AI System Deployment Register
- Compliance Dashboard
- Monitoring and Alerting Systems
- Monthly compliance reports
- Quarterly AI Governance Committee reviews
5.2 Internal Audit Requirements
Audit Frequency: Annually (minimum)
Audit Scope:
- Deployer compliance with instructions for use
- Human oversight assignments and competency
- Monitoring and incident reporting effectiveness
- Log retention compliance
- Worker and affected person notifications
- FRIA completeness and quality
- Right to explanation process effectiveness
- Controls effectiveness (DEP-001 through DEP-007)
Audit Activities:
- Review 100% of high-risk AI system deployment records
- Sample 20% of human oversight assignments for competency verification
- Test monitoring and alerting systems
- Verify log retention and integrity
- Review worker notification records
- Assess FRIA quality and completeness
- Test explanation request process
Audit Outputs:
- Annual AI Deployer Obligations Audit Report
- Findings and recommendations
- Corrective action plans for deficiencies
5.3 External Audit / Regulatory Inspection
Preparation:
- Maintain audit-ready deployer documentation at all times
- Designate AI Act Program Manager and Legal as regulatory liaisons
- Prepare standard response procedures for authority requests
Provide to Auditors/Regulators:
- AI system deployment records
- Instructions for use and compliance documentation
- Human oversight assignment records
- Monitoring logs and incident reports
- Log retention evidence
- Worker notification records
- FRIA reports
- Explanation request and response records
- Internal audit reports
- Evidence of controls execution
Authority Request Response:
- Acknowledge request within 1 business day
- Provide requested documentation within 5 business days
- Coordinate through Legal and AI Act Program Manager
- Document all interactions with authorities
ROLES AND RESPONSIBILITIES
6.1 RACI Matrix
| Activity | AI Act Program Manager | HR Director | IT Director | Legal | AI Governance Committee |
|---|---|---|---|---|---|
| Use per Instructions | R/A | I | R | C | I |
| Human Oversight Assignment | R/A | C | I | C | I |
| Operational Monitoring | R/A | I | R | C | I |
| Log Retention | R | I | R/A | C | I |
| Worker Information | C | R/A | I | R | I |
| FRIA | R | C | C | R | A |
| Right to Explanation | R/A | C | C | R | I |
RACI Legend:
- R = Responsible (does the work)
- A = Accountable (ultimately answerable)
- C = Consulted (provides input)
- I = Informed (kept up-to-date)
6.2 Role Descriptions
AI Act Program Manager
- Primary Responsibility: Owns deployer obligations framework, ensures compliance with Articles 26, 27, and 86
- Key Activities:
- Manages deployer compliance program
- Oversees human oversight assignments
- Coordinates FRIA process
- Manages incident reporting
- Reports to AI Governance Committee
- Required Competencies: EU AI Act expertise, risk management, compliance management
HR Director
- Primary Responsibility: Manages worker information and notification obligations
- Key Activities:
- Identifies affected workers
- Coordinates notifications to workers' representatives
- Ensures compliance with employment law requirements
- Manages worker communication
- Required Competencies: Employment law, worker relations, communication management
IT Director
- Primary Responsibility: Manages technical implementation of deployer obligations
- Key Activities:
- Configures log retention systems
- Implements monitoring tools
- Ensures technical compliance with provider instructions
- Manages system configurations
- Required Competencies: IT management, system administration, data management
Legal
- Primary Responsibility: Provides legal guidance on deployer obligations and FRIA
- Key Activities:
- Reviews FRIA reports
- Advises on notification requirements
- Reviews explanation responses
- Manages regulatory authority interactions
- Required Competencies: EU AI Act, data protection law, fundamental rights
AI Governance Committee
- Primary Responsibility: Oversight and approval of deployer obligations program
- Key Activities:
- Approves FRIAs
- Reviews compliance reports
- Escalation authority for serious incidents
- Strategic oversight of deployer program
- Required Competencies: AI governance, strategic management, risk oversight
EXCEPTIONS
7.1 Exception Philosophy
Deployer obligations under the EU AI Act are mandatory legal requirements. Exceptions are granted extremely restrictively and only where compensating controls adequately mitigate risks while maintaining legal compliance.
7.2 Allowed Exceptions
The following exceptions may be granted with proper justification and approval:
| Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls |
|---|---|---|---|---|
| Extended Implementation Timeline | Technical complexity prevents immediate implementation | 30 days | AI Act Program Manager | Interim manual controls; Accelerated plan |
| Alternative Monitoring Method | Alternative method equally effective | Permanent (subject to annual review) | AI Governance Committee | Document rationale; Effectiveness verification |
| Extended Log Retention | Technical migration in progress | 60 days | IT Director + AI Act Program Manager | Interim backup; Migration plan |
7.3 Prohibited Exceptions
The following exceptions cannot be granted under any circumstances:
- Skipping human oversight - Mandatory per Article 26(2), no exceptions
- Using AI system contrary to instructions - Mandatory per Article 26(1), no exceptions
- Skipping FRIA when required - Mandatory per Article 27, no exceptions
- Refusing explanation requests - Mandatory per Article 86, no exceptions
- Failing to report serious incidents - Mandatory per Article 26(5), no exceptions
- Deleting logs before minimum retention period - Mandatory per Article 26(6), no exceptions
7.4 Exception Request Process
Step 1: Submit Exception Request
- Complete Exception Request Form (FORM-AI-EXCEPTION-001)
- Include business justification
- Propose compensating controls
- Specify duration requested
- Attach risk assessment
Step 2: Risk Assessment
- AI Act Program Manager assesses risk of granting exception
- Legal reviews compliance implications
- Evaluates adequacy of compensating controls
- Documents residual risk
Step 3: Approval
- Route to appropriate approval authority based on exception type
- AI Act Program Manager approval: Minor operational exceptions
- AI Governance Committee approval: Significant exceptions
- AI Governance Committee + Legal: Exceptions with regulatory risk
Step 4: Documentation and Monitoring
- Document exception in Exception Register
- Assign exception owner
- Set review date
- Monitor compensating controls
- Report exceptions quarterly to AI Governance Committee
Step 5: Exception Review and Closure
- Review exception at specified review date
- Assess if exception still needed
- Close exception when normal compliance achieved
- Document lessons learned
ENFORCEMENT
8.1 Non-Compliance Consequences
| Violation | Severity | Consequence | Remediation Required |
|---|---|---|---|
| Using AI system contrary to instructions | Critical | Immediate suspension of AI system use | Comply with instructions before resuming |
| No human oversight assigned | Critical | Immediate suspension of AI system use | Assign oversight within 5 business days |
| Failure to conduct required FRIA | Critical | Immediate suspension of AI system deployment | Complete FRIA before deployment |
| Failure to report serious incident | Critical | Immediate escalation to Legal and AI Governance Committee | Report immediately; corrective action plan |
| Log retention non-compliance | High | Written warning; corrective action | Implement compliant retention within 10 business days |
| Worker notification not completed | High | Deployment suspended until notification completed | Complete notifications within 5 business days |
| Explanation request not fulfilled | High | Escalation to AI Act Program Manager | Provide explanation within 5 business days |
8.2 Escalation Procedures
Level 1: AI Act Program Manager
- Minor procedural violations
- Delays in implementation < 5 days
- Action: Written warning, corrective action required
Level 2: AI Act Program Manager + Legal
- Repeated violations
- Potential regulatory non-compliance
- Action: Formal review, corrective action plan, management notification
Level 3: AI Governance Committee
- Critical compliance failures
- Serious incident reporting failures
- FRIA non-completion
- Action: Immediate suspension, investigation, disciplinary action
Level 4: Executive Management + Legal
- Potential regulatory enforcement action
- Significant legal liability
- Reputational risk
- Action: Executive crisis management, legal strategy, regulatory engagement
8.3 Immediate Escalation Triggers
Escalate immediately to AI Governance Committee + Legal if:
- High-risk AI system used without human oversight
- Serious incident not reported to authorities
- FRIA requirement identified but not conducted before deployment
- Regulatory inquiry or inspection related to deployer obligations
- Evidence of fundamental rights harm from AI system use
8.4 Regulatory Penalties
Non-compliance with deployer obligations under Article 26 may result in administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
8.5 Disciplinary Actions
Individuals responsible for deployer obligation violations may be subject to:
- Verbal or written warning
- Mandatory retraining
- Performance improvement plan
- Reassignment of responsibilities
- Suspension (with pay during investigation)
- Termination (for egregious violations, e.g., knowingly deploying AI without required FRIA or oversight)
Factors Considered:
- Intent (knowing violation vs. honest mistake)
- Severity of violation
- Impact (actual or potential harm to affected persons)
- Cooperation with remediation
- Prior violation history
KEY PERFORMANCE INDICATORS (KPIs)
9.1 AI Deployer Obligations KPIs
| KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To |
|---|---|---|---|---|---|---|---|
| KPI-DEP-001 | Instructions Compliance Rate | % of AI systems used in accordance with provider instructions | 100% | (# compliant / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-DEP-002 | Human Oversight Coverage | % of high-risk AI systems with assigned human oversight personnel | 100% | (# with oversight / # total high-risk) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-DEP-003 | FRIA Completion Rate | % of required FRIAs completed before deployment | 100% | (# FRIAs completed / # required) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee |
| KPI-DEP-004 | Worker Notification Rate | % of workplace AI deployments with worker notification completed | 100% | (# notified / # deployments) x 100 | Quarterly | HR Director | AI Governance Committee |
| KPI-DEP-005 | Incident Reporting Timeliness | % of serious incidents reported within required timeframes | 100% | (# on time / # total incidents) x 100 | Per incident | AI Act Program Manager | AI Governance Committee |
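All five KPIs share the same (count / total) x 100 formula from the table above. A small helper, with the convention (an assumption, not stated in the table) that an empty denominator reports 100%:

```python
def kpi_percentage(numerator: int, denominator: int) -> float:
    """Generic (count / total) x 100 formula used by KPI-DEP-001..005.
    Returns 100.0 when nothing is in scope (no systems, no incidents)."""
    if denominator == 0:
        return 100.0
    return round(numerator / denominator * 100, 1)

# Illustrative quarter: 12 high-risk systems, 11 with oversight assigned.
print(kpi_percentage(11, 12))  # KPI-DEP-002 Human Oversight Coverage -> 91.7
```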
9.2 KPI Dashboards and Reporting
Real-Time Dashboard (AI Act Program Manager access)
- Current deployer compliance status per AI system
- Human oversight assignment status
- FRIA completion tracker
- Incident reporting status
- Log retention compliance
Monthly Management Report
- KPI-DEP-001, 002, 004, 005
- Trend analysis (vs. previous month)
- Issues and risks
- Planned actions
Quarterly AI Governance Committee Report
- All KPIs
- Deployer compliance assessment
- FRIA review
- Internal audit findings (if conducted)
- Exception register review
Annual Executive Report
- Full-year KPI performance
- Deployer obligations maturity assessment
- Strategic recommendations
- Regulatory outlook
9.3 KPI Thresholds and Alerts
| KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action |
|---|---|---|---|---|
| Instructions Compliance Rate | 100% | 95-99% | < 95% | Red: Immediate escalation to AI Governance Committee Chair |
| Human Oversight Coverage | 100% | 95-99% | < 95% | Red: Immediate suspension of uncovered systems |
| FRIA Completion Rate | 100% | 90-99% | < 90% | Red: Deployment halt until FRIAs completed |
| Worker Notification Rate | 100% | 95-99% | < 95% | Yellow: Escalate to HR Director; Red: Suspend deployment |
| Incident Reporting Timeliness | 100% | 90-99% | < 90% | Red: Escalate to AI Governance Committee + Legal |
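The RAG bands above can be encoded as (green minimum, yellow minimum) pairs so that dashboard status and alerting are derived consistently. An illustrative sketch; the KPI keys are assumptions:

```python
# (green_min, yellow_min) per KPI, from the thresholds table above.
# A value below yellow_min is Red.
RAG_BANDS = {
    "instructions_compliance": (100.0, 95.0),
    "human_oversight_coverage": (100.0, 95.0),
    "fria_completion": (100.0, 90.0),
    "worker_notification": (100.0, 95.0),
    "incident_reporting_timeliness": (100.0, 90.0),
}

def rag_status(kpi: str, value: float) -> str:
    """Map a KPI value to its Green/Yellow/Red band."""
    green_min, yellow_min = RAG_BANDS[kpi]
    if value >= green_min:
        return "Green"
    if value >= yellow_min:
        return "Yellow"
    return "Red"  # triggers the alert action in the table above

print(rag_status("human_oversight_coverage", 91.7))  # -> 'Red'
```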
TRAINING REQUIREMENTS
10.1 Training Program Overview
All personnel involved in deploying or operating high-risk AI systems must complete role-specific training to ensure competency in deployer obligations under the EU AI Act.
10.2 Role-Based Training Requirements
| Role | Training Course | Duration | Content | Frequency | Assessment Required |
|---|---|---|---|---|---|
| AI Act Program Manager | Deployer Obligations Expert Training | 16 hours | Articles 26, 27, 86; FRIA methodology; Incident reporting | Initial + annually | Yes - Written exam (≥90%) |
| AI System Operators | Deployer Compliance Training | 8 hours | Instructions compliance; Monitoring; Log management; Incident reporting | Initial + annually | Yes - Written exam (≥80%) + Practical exercise |
| Human Oversight Personnel | Human Oversight Training | 12 hours | System-specific operation; Override procedures; Risk recognition; Decision review | Initial + per system + annually | Yes - Practical exercise + Scenario assessment |
| HR Director / HR Staff | Worker Notification Training | 4 hours | Worker information requirements; Notification procedures; Employment law | Initial + annually | Yes - Knowledge check (≥80%) |
| Legal | FRIA and Explanation Training | 8 hours | FRIA methodology; Right to explanation; Regulatory engagement | Initial + annually | Yes - Written exam (≥90%) |
| All Deployer Staff | Deployer Awareness Training | 2 hours | Deployer obligations overview; Incident escalation; Key contacts | At onboarding + annually | Yes - Knowledge check (≥80%) |
10.3 Training Content by Topic
Deployer Obligations Overview
- EU AI Act Article 26 requirements
- Deployer role and responsibilities
- Key compliance requirements
- Penalty framework
FRIA Methodology
- When FRIA is required (Article 27)
- FRIA process and content requirements
- Stakeholder identification and engagement
- Risk assessment for fundamental rights
Right to Explanation
- Article 86 requirements
- Explanation content and format
- Process for handling requests
- Quality standards for explanations
Incident Reporting
- Serious incident definition
- Reporting timeframes and procedures
- Authority notification requirements
- Documentation requirements
10.4 Training Delivery Methods
Initial Training:
- Instructor-led classroom or virtual training
- Includes interactive exercises and case studies
- Hands-on practice with monitoring tools and FRIA templates
- Group discussions of deployment scenarios
Annual Refresher:
- E-learning modules for core content review
- Live update sessions for regulatory changes
- Case study reviews of recent deployments and incidents
- Knowledge assessment
On-the-Job Training:
- Mentoring for new deployer staff
- Supervised deployment activities for first 3 deployments
- Shadowing during FRIA process
Just-in-Time Training:
- Quick reference guides and job aids
- Video tutorials on specific procedures
- Help desk support from experienced staff
10.5 Training Effectiveness Measurement
Assessment Methods:
- Written exams for knowledge retention
- Practical exercises for skill application
- Scenario-based assessments for decision-making
- On-the-job observations for competency validation
- Feedback surveys for training quality
Competency Validation:
- Human Oversight Personnel: Must demonstrate system-specific competency before independent oversight
- FRIA Leads: Must complete supervised FRIA before leading independently
- All deployer staff: Must pass knowledge assessments with minimum required scores
Training Metrics:
| Metric | Target | Frequency |
|---|---|---|
| Training completion rate | 100% | Quarterly |
| Assessment pass rate (first attempt) | ≥ 90% | Per training |
| Training effectiveness score (survey) | ≥ 4.0/5.0 | Per training |
| Time to competency (Oversight Personnel) | < 30 days | Per person |
10.6 Training Records
Records Maintained:
- Training attendance records
- Assessment scores
- Competency validations
- Refresher training completion
- Individual training transcripts
Retention: 10 years (to align with EU AI Act documentation retention)
Access: AI Act Program Manager, HR, Managers, Internal Audit, Competent Authorities (upon request)
DEFINITIONS
| Term | Definition | Source |
|---|---|---|
| Deployer | Any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity | EU AI Act Article 3(4) |
| High-Risk AI System | An AI system classified as high-risk under Article 6: a product, or safety component of a product, covered by Annex I Union harmonisation legislation, or a system in an area listed in Annex III | EU AI Act Article 6 |
| Instructions for Use | The information provided by the provider to inform the deployer of, in particular, the AI system's intended purpose and proper use | EU AI Act Article 3(15) |
| Human Oversight | Measures aimed at preventing or minimising the risks to health, safety, or fundamental rights that may emerge when a high-risk AI system is used | EU AI Act Article 14 |
| Fundamental Rights Impact Assessment (FRIA) | An assessment of the impact of the use of a high-risk AI system on the fundamental rights of persons likely to be affected | EU AI Act Article 27 |
| Serious Incident | An incident or malfunctioning of an AI system that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of Union-law obligations intended to protect fundamental rights, or serious harm to property or the environment | EU AI Act Article 3(49) |
| Market Surveillance Authority | The national authority carrying out market surveillance activities and measures pursuant to Regulation (EU) 2019/1020 | EU AI Act Article 3(26) |
| Right to Explanation | The right of affected persons to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure | EU AI Act Article 86 |
LINK WITH AI ACT AND ISO42001
12.1 EU AI Act Regulatory Mapping
This standard implements the following EU AI Act requirements:
| EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls) |
|---|---|---|---|
| Use per Instructions | Article 26(1) | Deployers shall use high-risk AI systems in accordance with instructions for use | DEP-001 |
| Human Oversight | Article 26(2) | Assign human oversight to competent natural persons | DEP-002 |
| Input Data | Article 26(4) | Ensure input data relevance and representativeness | DEP-001 |
| Monitoring and Reporting | Article 26(5) | Monitor operation and report risks/incidents | DEP-003 |
| Log Retention | Article 26(6) | Retain automatically generated logs for at least 6 months | DEP-004 |
| Worker Information | Article 26(7) | Inform workers' representatives and affected workers | DEP-005 |
| Affected Person Information | Article 26(11) | Inform natural persons subject to AI decisions | DEP-005 |
| FRIA | Article 27 | Conduct fundamental rights impact assessment | DEP-006 |
| Right to Explanation | Article 86 | Enable affected persons to obtain explanations | DEP-007 |
12.2 ISO/IEC 42001:2023 Alignment
This standard aligns with ISO/IEC 42001:2023 as follows:
| ISO 42001 Clause | Requirement | Implementation in This Standard |
|---|---|---|
| Clause 6.1: Actions to Address Risks | Risk assessment and mitigation | DEP-003, DEP-006 |
| Clause 7.2: Competence | Ensure personnel have appropriate competence | DEP-002 |
| Clause 7.4: Communication | Communication with interested parties | DEP-005, DEP-007 |
| Clause 8.1: Operational Planning | Plan and control operational processes | DEP-001, DEP-004 |
| Clause 9.1: Monitoring and Measurement | Monitor and measure performance | DEP-003 |
| Clause 10.2: Nonconformity and Corrective Action | Address nonconformities | DEP-003 |
12.3 Relationship to Other Standards
This deployer obligations standard integrates with other AI Act standards:
| Related Standard | Integration Point | Rationale |
|---|---|---|
| STD-AI-001: Classification | Risk classification determines deployer obligations | Deployer obligations apply to high-risk AI systems |
| STD-AI-002: Risk Management | Risk management feeds into FRIA and monitoring | Risk assessment methodology supports FRIA |
| STD-AI-007: Human Oversight | Human oversight requirements for deployers | Deployer assigns oversight per Article 26(2) |
| STD-AI-005: Logging | Log retention obligations for deployers | Deployer retains logs per Article 26(6) |
| STD-AI-006: Transparency | Transparency obligations inform notification requirements | Deployer provides information to affected persons |
| STD-AI-013: Incident Management | Incident reporting by deployers | Deployer reports serious incidents per Article 26(5) |
| STD-AI-014: Literacy and Training | Training for deployer personnel | Oversight personnel require competency per Article 26(2) |
| STD-AI-015: Supply Chain | Provider-deployer relationship management | Deployer receives instructions from provider |
12.4 References and Related Documents
EU AI Act (Regulation (EU) 2024/1689):
- Article 26: Obligations of deployers of high-risk AI systems
- Article 27: Fundamental rights impact assessment for high-risk AI systems
- Article 86: Right to explanation of individual decision-making
ISO/IEC Standards:
- ISO/IEC 42001:2023: Information technology - Artificial intelligence - Management system
Internal Documents:
- POL-AI-001: Artificial Intelligence Policy (parent policy)
- STD-AI-001: AI System Classification Standard
- STD-AI-002: AI Risk Management Standard
- STD-AI-005: AI Logging and Record-Keeping Standard
- STD-AI-006: AI Transparency Standard
- STD-AI-007: AI Human Oversight Standard
- STD-AI-013: AI Incident Management Standard
- STD-AI-014: AI Literacy and Training Standard
- STD-AI-015: AI Supply Chain Obligations Standard
- PROC-AI-DEP-001 through -006: Deployer obligations procedures
APPROVAL AND AUTHORISATION
| Role | Name | Title | Signature | Date |
|---|---|---|---|---|
| Prepared By | AI Act Program Manager | AI Act Program Manager | _________________ | ________ |
| Reviewed By | Sarah Johnson | Legal Counsel | _________________ | ________ |
| Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _________________ | ________ |
| Approved By | Jane Doe | AI Governance Committee Chair | _________________ | ________ |
Effective Date: 2026-08-02
Next Review Date: 2027-08-02
Review Frequency: Annually or upon regulatory change
END OF STANDARD STD-AI-016
This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.