AI Deployer Obligations Standard

Document Type: Standard
Standard ID: STD-AI-016
Standard Title: AI Deployer Obligations Standard
Version: 1.0
Effective Date: 2026-08-02
Next Review Date: 2027-08-02
Review Frequency: Annually or upon regulatory change
Parent Policy: POL-AI-001 - Artificial Intelligence Policy
Owner: AI Act Program Manager
Approved By: AI Governance Committee Chair
Status: Draft
Classification: Internal Use Only


TABLE OF CONTENTS

  Document History
  1. Objective
  2. Scope and Applicability
  3. Control Standard
  4. Supporting Procedures
  5. Compliance
  6. Roles and Responsibilities
  7. Exceptions
  8. Enforcement
  9. Key Performance Indicators (KPIs)
  10. Training Requirements
  11. Definitions
  12. Link with AI Act and ISO 42001

DOCUMENT HISTORY

Version | Date | Author | Changes | Approval Date | Approved By
0.1 | 2026-07-01 | AI Act Program Manager | Initial draft | - | -
0.2 | 2026-07-15 | AI Act Program Manager | Added FRIA and right to explanation controls | - | -
0.3 | 2026-07-25 | AI Act Program Manager | Incorporated legal review feedback | - | -
1.0 | 2026-08-02 | AI Act Program Manager | Final version approved | 2026-08-01 | Jane Doe, AI Governance Committee Chair

OBJECTIVE

This standard defines requirements for deployers of high-risk AI systems under EU AI Act Articles 26, 27, and 86. It ensures that organisations deploying high-risk AI systems fulfil their obligations regarding use in accordance with instructions, human oversight, operational monitoring, log retention, worker information, fundamental rights impact assessment, and the right to explanation.

Primary Goals:

  • Ensure high-risk AI systems are used in accordance with provider instructions
  • Assign competent human oversight for all high-risk AI systems
  • Establish monitoring, risk reporting, and incident management processes
  • Retain automatically generated logs for the required period
  • Inform workers and affected persons about AI system use
  • Conduct fundamental rights impact assessments before deployment
  • Enable affected persons to obtain explanations of AI-based decisions

SCOPE AND APPLICABILITY

2.1 Mandatory Applicability

This standard is mandatory for:

  • All high-risk AI systems deployed by the organisation
  • All personnel responsible for deploying or operating high-risk AI systems
  • All AI systems used in the workplace that affect workers
  • All AI systems whose output influences decisions affecting natural persons

2.2 Recommended Applicability

This standard is recommended for:

  • Limited-risk AI systems used in operational decisions
  • AI systems not classified as high-risk but used in sensitive contexts
  • Third-party AI systems integrated into business processes

2.3 Deployer Obligations Covered

  • Use in accordance with instructions (Article 26(1))
  • Human oversight assignment (Article 26(2))
  • Input data relevance and representativeness (Article 26(4))
  • Operational monitoring and risk reporting (Article 26(5))
  • Log retention (Article 26(6))
  • Worker and affected person information (Article 26(7), Article 26(11))
  • Fundamental rights impact assessment (Article 27)
  • Right to explanation (Article 86)

2.4 Out of Scope

  • Provider obligations (covered by STD-AI-015)
  • AI system development and training (covered by provider standards)
  • General-purpose AI model obligations (covered by STD-AI-015)
  • Non-AI automated systems

CONTROL STANDARD

Control DEP-001: Use in Accordance with Instructions

Control ID: DEP-001
Control Name: Use in Accordance with Provider Instructions
Control Type: Preventive
Control Frequency: Per deployment, ongoing
Risk Level: High

Control Objective

Ensure all high-risk AI systems are used in accordance with the provider's instructions for use, including technical and organisational measures, and that input data is relevant and sufficiently representative as required by Article 26(1) and Article 26(4).

Control Requirements

CR-001.1: Instruction Review and Implementation

Obtain, review, and implement provider instructions for use before deploying any high-risk AI system.

Instruction Compliance Checklist:

Requirement | Description | Article Reference | Verification
Obtain Instructions | Receive instructions for use from provider | Article 26(1) | Instructions on file
Review Instructions | Assess technical and organisational requirements | Article 26(1) | Review record signed
Implement Technical Measures | Configure system per technical instructions | Article 26(1) | Configuration verified
Implement Organisational Measures | Establish processes per organisational instructions | Article 26(1) | Processes documented
Assess Input Data | Ensure input data relevance and representativeness | Article 26(4) | Data assessment completed
Document Compliance | Record all compliance activities | Article 26(1) | Compliance record maintained

Mandatory Actions:

  • Obtain and review provider instructions for use before deployment
  • Implement technical and organisational measures per provider instructions
  • Ensure input data relevance and representativeness per Article 26(4)
  • Document compliance with instructions for use
  • Review compliance upon system updates or instruction changes
  • Maintain records of all instructions for use received

CR-001.2: Input Data Management

Ensure input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

Input Data Requirements:

Requirement | Description | Verification Method | Frequency
Relevance | Input data must be relevant to the AI system's intended purpose | Data relevance assessment | Per deployment
Representativeness | Input data must be sufficiently representative | Statistical analysis | Per deployment, periodic review
Quality | Input data must meet quality standards defined by provider | Data quality checks | Ongoing
Currency | Input data must be current and up to date | Data freshness verification | Ongoing
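
The "Statistical analysis" verification above can be partly automated. The following non-normative Python sketch compares the live input population against a reference distribution (for example, the population described in the provider's instructions for use) using the Population Stability Index; the variable names and the 0.25 escalation threshold are illustrative conventions from model-monitoring practice, not requirements of the AI Act.

```python
import math
from collections import Counter

def psi(reference: list[str], observed: list[str]) -> float:
    """Population Stability Index between a reference population and the
    deployer's live input data, over one shared categorical feature."""
    ref_counts, obs_counts = Counter(reference), Counter(observed)
    score = 0.0
    for category in set(reference) | set(observed):
        # A small floor avoids log(0) for categories absent on one side.
        p_ref = max(ref_counts[category] / len(reference), 1e-6)
        p_obs = max(obs_counts[category] / len(observed), 1e-6)
        score += (p_obs - p_ref) * math.log(p_obs / p_ref)
    return score

# Illustrative data and thresholds: <0.1 stable, 0.1-0.25 review, >0.25 escalate.
reference_population = ["A"] * 700 + ["B"] * 300
current_inputs = ["A"] * 400 + ["B"] * 600
if psi(reference_population, current_inputs) > 0.25:
    print("Input data may no longer be representative -- trigger CR-001.2 review")
```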

Evidence Required:

  • Instructions for use from provider
  • Deployment checklists
  • Input data relevance assessments
  • Compliance verification records
  • System configuration documentation
  • Update review records

Audit Verification:

  • Verify instructions for use obtained before deployment
  • Confirm technical and organisational measures implemented
  • Check input data assessments completed
  • Validate compliance documentation maintained

Control DEP-002: Human Oversight Assignment

Control ID: DEP-002
Control Name: Human Oversight Assignment and Competency
Control Type: Preventive
Control Frequency: Per deployment, ongoing review
Risk Level: Critical

Control Objective

Assign human oversight of high-risk AI systems to natural persons who have the necessary competence, training, authority, and support to fulfil that role effectively, as required by Article 26(2).

Control Requirements

CR-002.1: Oversight Personnel Identification and Assignment

Identify and assign qualified natural persons to provide human oversight for each high-risk AI system.

Oversight Assignment Requirements:

Requirement | Description | Article Reference | Verification
Identify Roles | Define oversight roles for each AI system | Article 26(2) | Role descriptions documented
Assess Competency | Verify personnel have necessary competence | Article 26(2) | Competency assessments completed
Provide Training | Train oversight personnel on AI system and limitations | Article 26(2) | Training certificates on file
Grant Authority | Provide authority to override or intervene | Article 26(2) | Authority delegation documented
Ensure Support | Provide adequate resources and tools | Article 26(2) | Resource allocation confirmed
Document Assignments | Record all assignments and assessments | Article 26(2) | Assignment records maintained

Competency Requirements for Oversight Personnel:

Competency Area | Description | Assessment Method | Minimum Standard
AI System Knowledge | Understanding of the specific AI system's operation | Written assessment | ≥90%
Limitations Awareness | Knowledge of system limitations and failure modes | Scenario-based assessment | Pass/fail
Override Procedures | Ability to intervene and override system decisions | Practical exercise | Demonstrated competency
Risk Recognition | Ability to identify risks and anomalies | Case study analysis | ≥90%
Regulatory Awareness | Understanding of EU AI Act deployer obligations | Knowledge check | ≥80%
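
As an informative illustration of how the minimum standards above can be enforced before assignment, the sketch below gates an oversight assignment on the scored and pass/fail assessments; the field names and the 0-to-1 scoring scale are hypothetical.

```python
# Minimum scores mirror the competency table above (fractions of 1.0).
MINIMUM_SCORES = {
    "ai_system_knowledge": 0.90,
    "risk_recognition": 0.90,
    "regulatory_awareness": 0.80,
}

def oversight_eligible(scores: dict[str, float],
                       limitations_passed: bool,
                       override_demonstrated: bool) -> bool:
    """A candidate may be assigned oversight only if every scored area meets
    its minimum and both pass/fail assessments have been passed."""
    scored_ok = all(scores.get(area, 0.0) >= floor
                    for area, floor in MINIMUM_SCORES.items())
    return scored_ok and limitations_passed and override_demonstrated

print(oversight_eligible(
    {"ai_system_knowledge": 0.93, "risk_recognition": 0.91,
     "regulatory_awareness": 0.85},
    limitations_passed=True,
    override_demonstrated=True))  # True -> assignment may proceed
```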

Mandatory Actions:

  • Identify human oversight roles for each high-risk AI system
  • Verify competency of designated oversight personnel
  • Provide adequate training on AI system operation and limitations
  • Grant override and intervention authority to oversight personnel
  • Ensure adequate resources and support for oversight functions
  • Document all oversight assignments and competency assessments

Evidence Required:

  • Human oversight assignment records
  • Competency assessment reports
  • Training certificates for oversight personnel
  • Authority delegation documents
  • Resource allocation records
  • Oversight role descriptions

Audit Verification:

  • Verify oversight personnel assigned for all high-risk AI systems
  • Confirm competency assessments completed and passed
  • Check training certificates valid and current
  • Validate authority delegation documents signed
  • Verify adequate resources allocated

Control DEP-003: Operational Monitoring and Risk Reporting

Control ID: DEP-003
Control Name: Operational Monitoring and Risk Reporting
Control Type: Detective
Control Frequency: Continuous monitoring, per incident
Risk Level: High

Control Objective

Monitor the operation of high-risk AI systems on the basis of the instructions for use and report risks, serious incidents, and malfunctions to providers and relevant authorities as required by Article 26(5).

Control Requirements

CR-003.1: Monitoring Framework

Implement monitoring processes aligned with provider instructions for use.

Monitoring Requirements:

Monitoring Area | Description | Method | Frequency | Threshold
System Performance | Monitor AI system performance metrics | Automated monitoring | Continuous | Per provider instructions
Output Quality | Assess quality and accuracy of AI outputs | Sampling and review | Daily/weekly | Quality thresholds defined
Anomaly Detection | Detect unusual patterns or outputs | Automated alerts | Continuous | Alert thresholds defined
Risk Indicators | Monitor risk indicators per instructions | Dashboard monitoring | Continuous | Risk thresholds defined
User Feedback | Collect and analyse user feedback | Feedback mechanisms | Ongoing | Trend analysis
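
The sketch below shows, informatively, how threshold rules from the table above might be encoded for automated alerting. The metric names and threshold values are hypothetical: the binding values come from the provider's instructions for use.

```python
from dataclasses import dataclass

@dataclass
class MetricRule:
    name: str
    minimum: float    # lowest acceptable value, per provider instructions
    escalation: str   # entry point into the escalation table in CR-003.2

# Hypothetical rules; the real thresholds are taken from the provider's
# instructions for use, not from this standard.
RULES = [
    MetricRule("output_accuracy", minimum=0.92, escalation="Level 1"),
    MetricRule("input_completeness", minimum=0.99, escalation="Level 1"),
]

def evaluate(observations: dict[str, float]) -> list[str]:
    """Compare observed metrics with the rules and return alerts to escalate."""
    alerts = []
    for rule in RULES:
        value = observations.get(rule.name)
        if value is not None and value < rule.minimum:
            alerts.append(f"{rule.name}={value:.3f} below {rule.minimum} "
                          f"-> escalate at {rule.escalation}")
    return alerts

print(evaluate({"output_accuracy": 0.88, "input_completeness": 0.995}))
```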

CR-003.2: Incident Reporting and Escalation

Report serious incidents, malfunctions, and risks to providers and authorities.

Incident Reporting Requirements:

Incident Type | Report To | Timeframe | Article Reference
Serious Incident | Provider + Market Surveillance Authority | Immediately upon identification | Article 26(5)
Risk to Health/Safety | Provider + Suspend use | Immediately | Article 26(5)
Risk to Fundamental Rights | Provider + Suspend use | Immediately | Article 26(5)
Malfunction | Provider | Within 24 hours | Article 26(5)
Performance Degradation | Provider | Within 72 hours | Article 26(5)

Escalation Procedure:

Level | Trigger | Action | Decision Authority
Level 1 | Performance anomaly detected | Investigate and document | AI System Operator
Level 2 | Confirmed malfunction or risk | Notify provider, assess severity | AI Act Program Manager
Level 3 | Serious incident or risk to persons | Suspend use, notify authorities | AI Governance Committee
Level 4 | Imminent risk to health/safety | Immediate cessation, emergency notification | Executive Management
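
Informatively, the reporting timeframes above can be turned into concrete deadlines at the moment an incident is identified. The sketch below models "immediately" as a zero-delay deadline and uses hypothetical recipient identifiers.

```python
from datetime import datetime, timedelta

# Hypothetical encoding of the incident reporting table above.
REPORTING_RULES = {
    "serious_incident": (["provider", "market_surveillance_authority"],
                         timedelta(0)),                 # immediately
    "malfunction": (["provider"], timedelta(hours=24)),
    "performance_degradation": (["provider"], timedelta(hours=72)),
}

def reporting_deadline(incident_type: str, identified_at: datetime):
    """Return who must be notified and by when, per Article 26(5)."""
    recipients, window = REPORTING_RULES[incident_type]
    return recipients, identified_at + window

recipients, due = reporting_deadline("malfunction", datetime(2026, 9, 1, 9, 0))
print(f"Notify {recipients} by {due:%Y-%m-%d %H:%M}")
```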

Mandatory Actions:

  • Implement monitoring processes aligned with provider instructions
  • Define risk thresholds and alert mechanisms
  • Establish escalation procedures for identified risks
  • Suspend or cease use of AI system when risk to health, safety, or fundamental rights is identified
  • Report serious incidents to provider and market surveillance authority
  • Maintain comprehensive monitoring and incident records

Evidence Required:

  • Monitoring logs and dashboards
  • Risk assessment reports
  • Incident notification records
  • Suspension and cessation records
  • Escalation procedure documentation
  • Authority communication records

Audit Verification:

  • Verify monitoring processes implemented per instructions
  • Confirm risk thresholds defined and active
  • Check incident reports submitted within required timeframes
  • Validate suspension/cessation decisions documented
  • Verify authority notifications completed

Control DEP-004: Log Retention

Control ID: DEP-004
Control Name: Automatic Log Retention and Management
Control Type: Preventive
Control Frequency: Continuous, periodic review
Risk Level: Medium

Control Objective

Retain logs automatically generated by high-risk AI systems, to the extent such logs are under the deployer's control, for a period appropriate to the intended purpose of the system and of at least six months, as required by Article 26(6), unless otherwise provided in applicable Union or national law.

Control Requirements

CR-004.1: Log Storage and Retention

Configure and maintain log storage for all high-risk AI systems.

Log Retention Requirements:

Requirement | Description | Minimum Standard | Verification
Retention Period | Minimum retention of automatically generated logs | 6 months (or longer per applicable law) | Retention policy documented
Log Completeness | All automatically generated logs must be retained | 100% of logs captured | Log capture verification
Log Integrity | Logs must be protected against tampering | Integrity controls implemented | Integrity checks passed
Log Accessibility | Logs must be accessible to authorities on request | Access procedures defined | Access test completed
Storage Security | Logs must be stored securely | Encryption and access controls | Security audit passed

Log Types to Retain:

Log Type | Description | Retention Period | Storage
System Logs | Automatically generated operational logs | Minimum 6 months | Secure storage
Decision Logs | AI system decision outputs and parameters | Minimum 6 months | Secure storage
Input Logs | Input data processed by the AI system | Minimum 6 months | Secure storage
Error Logs | System errors, warnings, and anomalies | Minimum 6 months | Secure storage
Access Logs | User access and interaction logs | Minimum 6 months | Secure storage
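
The retention floor and the integrity control above lend themselves to simple automation. The non-normative sketch below computes the earliest permissible purge date, using 183 days as a conservative stand-in for six months, and records a SHA-256 digest at write time for later integrity checks; applicable Union or national law may require a longer period.

```python
import hashlib
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # conservative six months per Article 26(6)

def earliest_deletion(log_created: date) -> date:
    """Logs may be purged only after the retention floor has elapsed."""
    return log_created + MIN_RETENTION

def integrity_digest(log_bytes: bytes) -> str:
    """Digest recorded at write time; recomputing and comparing it later
    supports the 'Integrity checks passed' verification above."""
    return hashlib.sha256(log_bytes).hexdigest()

print(earliest_deletion(date(2026, 8, 2)))       # 2027-02-01
print(integrity_digest(b'{"decision_id": 42}')[:16])
```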

Mandatory Actions:

  • Configure log storage for all high-risk AI systems
  • Implement retention policies of at least 6 months
  • Ensure log integrity and protection against tampering
  • Provide logs to market surveillance authorities on request
  • Document log retention policies and storage configurations
  • Conduct periodic verification of log availability and integrity

Evidence Required:

  • Log storage configuration documentation
  • Retention policy documents
  • Log integrity verification records
  • Authority access and request logs
  • Storage capacity monitoring records
  • Periodic review reports

Audit Verification:

  • Verify log storage configured for all high-risk AI systems
  • Confirm retention period meets minimum 6-month requirement
  • Check log integrity controls in place
  • Validate authority access procedures tested
  • Verify periodic reviews conducted

Control DEP-005: Worker and Affected Person Information

Control ID: DEP-005
Control Name: Worker and Affected Person Information and Notification
Control Type: Preventive
Control Frequency: Per deployment, ongoing
Risk Level: High

Control Objective

Inform workers' representatives and affected workers before putting a high-risk AI system into use in the workplace (Article 26(7)), and inform natural persons subject to AI-assisted decisions that they are subject to the use of the high-risk AI system (Article 26(11)).

Control Requirements

CR-005.1: Worker Information (Article 26(7))

Inform workers' representatives and affected workers before deploying high-risk AI systems in the workplace.

Worker Notification Requirements:

Requirement | Description | Timing | Recipient
Identify Affected Workers | Determine which workers are affected by AI deployment | Before deployment | Internal assessment
Notify Representatives | Inform workers' representatives per applicable law | Before deployment | Workers' representatives
Provide System Information | Explain what the AI system does, how it works, and its impact | Before deployment | All affected workers
Explain Rights | Inform workers of their rights regarding AI system use | Before deployment | All affected workers
Document Notifications | Record all notifications and acknowledgments | At notification | Compliance records
Update Notifications | Provide updated information when system use changes | Upon material change | All affected workers

CR-005.2: Affected Person Information (Article 26(11))

Inform natural persons subject to AI-assisted decisions.

Affected Person Notification Requirements:

Requirement | Description | Timing | Method
Identify Affected Persons | Determine natural persons subject to AI decisions | Before use | Process mapping
Provide Notice | Inform persons they are subject to AI system use | Before or at point of decision | Clear, accessible notice
Explain AI Role | Describe the role of the AI system in the decision | At point of decision | Written or electronic communication
Inform of Rights | Advise of right to explanation (Article 86) | At point of decision | Written or electronic communication

Mandatory Actions:

  • Identify all affected workers and workers' representatives
  • Notify workers' representatives before workplace AI deployment
  • Provide clear and accessible information about AI system use and its implications
  • Inform natural persons subject to AI-based decisions per Article 26(11)
  • Document all notifications and acknowledgments
  • Update notifications when AI system use changes materially

Evidence Required:

  • Worker notification records
  • Workers' representative communication records
  • Acknowledgment and receipt records
  • Information materials provided
  • Natural person notification records
  • Update notification records

Audit Verification:

  • Verify worker notifications completed before deployment
  • Confirm workers' representatives informed per applicable law
  • Check affected person notifications provided at point of decision
  • Validate acknowledgment records maintained
  • Verify update notifications issued upon material changes

Control DEP-006: Fundamental Rights Impact Assessment

Control ID: DEP-006
Control Name: Fundamental Rights Impact Assessment (FRIA)
Control Type: Preventive
Control Frequency: Before first deployment, upon material changes
Risk Level: Critical

Control Objective

Conduct a fundamental rights impact assessment (FRIA) before putting a high-risk AI system into use, as required by Article 27 for bodies governed by public law, private entities providing public services, and deployers of the creditworthiness and life/health insurance risk-assessment systems listed in Annex III, points 5(b) and (c).

Control Requirements

CR-006.1: FRIA Scope and Triggers

Determine when a FRIA is required and define its scope.

FRIA Mandatory Triggers:

Trigger | Description | Article Reference
Public Body Deployment | Any high-risk AI deployment by a body governed by public law | Article 27(1)
Public Services | Private entities providing public services | Article 27(1)
Banking/Insurance | Deployers of creditworthiness/credit scoring or life and health insurance risk-assessment systems (Annex III, points 5(b) and (c)) | Article 27(1)
Material Change | Significant change to AI system use or context; the FRIA must be updated | Article 27(2)
New AI System | First deployment of a high-risk AI system | Article 27(1)

Note: High-risk AI systems in the critical-infrastructure area (Annex III, point 2) are expressly excluded from the Article 27 FRIA obligation.

CR-006.2: FRIA Content Requirements

Complete all required elements of the FRIA as specified in Article 27.

FRIA Required Content:

Element | Description | Article Reference | Detail Required
Process Description | Describe deployer processes where AI will be used | Article 27(1)(a) | Detailed process mapping
Period and Frequency | Define deployment period and frequency of use | Article 27(1)(b) | Start date, duration, frequency
Affected Persons | Identify categories of persons and groups likely to be affected | Article 27(1)(c) | Comprehensive stakeholder mapping
Specific Risks | Assess specific risks of harm to identified persons and groups | Article 27(1)(d) | Risk analysis per affected group
Human Oversight | Document human oversight measures | Article 27(1)(e) | Oversight implementation details
Risk Mitigation | Define risk mitigation and governance measures | Article 27(1)(f) | Mitigation plan with responsibilities
Authority Notification | Notify market surveillance authority of FRIA results | Article 27(3) | Notification record
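
One informative way to keep a FRIA audit-ready is to hold the Article 27(1) elements in a structured record and block deployment while any element is empty. The schema below is a hypothetical illustration, not a prescribed template; the AI Office questionnaire template governs the notification format.

```python
from dataclasses import dataclass, fields

@dataclass
class FriaRecord:
    """One field per Article 27(1) element, plus the notification record.
    Field names are illustrative, not a prescribed schema."""
    process_description: str = ""        # Article 27(1)(a)
    period_and_frequency: str = ""       # Article 27(1)(b)
    affected_persons: str = ""           # Article 27(1)(c)
    specific_risks: str = ""             # Article 27(1)(d)
    human_oversight_measures: str = ""   # Article 27(1)(e)
    mitigation_measures: str = ""        # Article 27(1)(f)
    authority_notified_on: str = ""      # Article 27(3) notification record

def missing_elements(fria: FriaRecord) -> list[str]:
    """Elements still empty -- deployment must wait until none remain."""
    return [f.name for f in fields(fria) if not getattr(fria, f.name)]

draft = FriaRecord(process_description="Loan pre-screening workflow")
print(missing_elements(draft))  # six elements still open
```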

FRIA Process Steps:

Step | Activity | Responsibility | Output
1. Scoping | Define FRIA scope and methodology | AI Act Program Manager | FRIA scope document
2. Stakeholder Mapping | Identify affected persons and groups | FRIA Team | Stakeholder register
3. Risk Assessment | Assess fundamental rights risks | FRIA Team | Risk assessment report
4. Impact Analysis | Analyse potential impacts on fundamental rights | FRIA Team + Legal | Impact analysis
5. Mitigation Planning | Define measures to mitigate identified risks | FRIA Team | Mitigation plan
6. Review and Approval | Review FRIA for completeness and accuracy | AI Governance Committee | Approved FRIA
7. Authority Notification | Notify market surveillance authority | AI Act Program Manager | Notification record
8. Monitoring | Monitor effectiveness of mitigation measures | AI Act Program Manager | Monitoring reports

Mandatory Actions:

  • Describe deployer processes in which the AI system will be used
  • Define the period of time and frequency of intended use
  • Identify categories of natural persons and groups likely to be affected
  • Assess specific risks of harm to identified persons and groups
  • Document human oversight measures and their implementation
  • Define risk mitigation and governance measures
  • Notify the market surveillance authority of FRIA results
  • Review and update FRIA upon material changes

Evidence Required:

  • Completed FRIA reports
  • Market surveillance authority notification records
  • Risk mitigation plans
  • Stakeholder consultation records
  • Human oversight implementation documentation
  • FRIA update and review records

Audit Verification:

  • Verify FRIA completed before deployment for all required systems
  • Confirm all Article 27(1)(a)-(f) elements addressed
  • Check market surveillance authority notified
  • Validate risk mitigation measures implemented
  • Verify FRIA updated upon material changes

Control DEP-007: Right to Explanation

Control ID: DEP-007
Control Name: Right to Explanation for AI-Based Decisions
Control Type: Preventive
Control Frequency: Per request, ongoing
Risk Level: High

Control Objective

Enable any affected person subject to a decision taken by the deployer on the basis of the output from a high-risk AI system to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken, as required by Article 86.

Control Requirements

CR-007.1: Explanation Mechanisms

Implement mechanisms to provide clear and meaningful explanations of AI-based decisions.

Explanation Requirements:

Requirement | Description | Article Reference | Standard
Role of AI System | Explain the role of the AI system in the decision | Article 86(1) | Clear and understandable
Main Decision Elements | Describe the main elements of the decision taken | Article 86(1) | Comprehensive
Meaningful Explanation | Explanation must be sufficiently detailed and understandable | Article 86(1) | Plain language
Timely Response | Provide explanation within reasonable timeframe | Article 86 | Within 30 days of request
Accessible Format | Explanation must be accessible to the affected person | Article 86 | Appropriate format

Explanation Content Template:

Element | Description | Example
Decision Summary | What decision was made | "Your application was [approved/rejected]"
AI System Role | How the AI system contributed | "The AI system assessed [factors] and provided a [score/recommendation]"
Key Factors | Main elements that influenced the decision | "The primary factors considered were [list factors]"
Human Involvement | Role of human oversight in the decision | "A human reviewer [reviewed/confirmed/overrode] the AI recommendation"
Rights Information | Information about further recourse | "You have the right to [appeal/complain] by [method]"
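
As an informative illustration only, the template above can be rendered mechanically once the decision facts are gathered; the field names are hypothetical, and a human reviewer should always adapt the wording to the individual case.

```python
def render_explanation(decision: dict) -> str:
    """Fill the explanation content template above with decision facts."""
    return (
        f"Your application was {decision['outcome']}. "
        f"The AI system assessed {', '.join(decision['factors'])} "
        f"and provided a {decision['ai_output']}. "
        f"A human reviewer {decision['human_action']} the AI recommendation. "
        f"You have the right to appeal by {decision['appeal_method']}."
    )

print(render_explanation({
    "outcome": "rejected",
    "factors": ["payment history", "income stability"],
    "ai_output": "credit risk score",
    "human_action": "reviewed and confirmed",
    "appeal_method": "written request to the contact point",
}))
```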

CR-007.2: Explanation Process

Establish and maintain processes for handling explanation requests.

Explanation Process Steps:

Step | Activity | Timeframe | Responsibility
1. Receive Request | Log explanation request | Day 0 | Customer Service / Contact Point
2. Acknowledge | Confirm receipt of request | Within 5 business days | Customer Service
3. Gather Information | Collect decision details and AI system outputs | Days 1-10 | AI System Operator
4. Prepare Explanation | Draft clear and meaningful explanation | Days 10-20 | AI Act Program Manager
5. Review | Review explanation for accuracy and clarity | Days 20-25 | Legal
6. Deliver | Provide explanation to affected person | Within 30 days | Customer Service
7. Record | Document explanation provided | At delivery | Compliance
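
The acknowledgment deadline is expressed in business days while delivery is a 30-day window, so the two are computed differently. The sketch below is a minimal illustration that skips weekends only; public holidays are omitted for brevity.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by working days, skipping Saturdays and Sundays."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday
            days -= 1
    return current

request_received = date(2026, 9, 4)                   # a Friday
ack_due = add_business_days(request_received, 5)      # step 2 deadline
delivery_due = request_received + timedelta(days=30)  # step 6 deadline
print(ack_due, delivery_due)
```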

Mandatory Actions:

  • Implement mechanisms to provide clear and meaningful explanations of AI-based decisions
  • Train staff on providing explanations to affected persons
  • Document all explanation requests and explanations provided
  • Maintain explanation records for audit and compliance purposes
  • Ensure explanations are accessible and understandable to affected persons
  • Establish processes for handling explanation requests within reasonable timeframes

Evidence Required:

  • Explanation request logs
  • Explanation records and responses
  • Staff training records on explanation provision
  • Explanation process documentation
  • Response time tracking records
  • Affected person feedback records

Audit Verification:

  • Verify explanation mechanisms implemented
  • Confirm staff trained on providing explanations
  • Check explanation requests handled within required timeframes
  • Validate explanation records complete and accurate
  • Verify explanations clear, meaningful, and accessible

SUPPORTING PROCEDURES

This standard is implemented through the following detailed procedures:

Procedure PROC-AI-DEP-001: Deployer Compliance Procedure

Purpose: Define step-by-step process for deployer compliance with instructions for use
Owner: AI Act Program Manager
Implements: Controls DEP-001, DEP-004

Procedure Steps:

  1. Obtain instructions for use from provider - Control DEP-001
  2. Review and document compliance requirements
  3. Implement technical and organisational measures
  4. Assess input data relevance and representativeness
  5. Configure log retention - Control DEP-004
  6. Document compliance and maintain records

Outputs:

  • Deployment compliance checklists
  • Input data assessments
  • Log retention configurations
  • Compliance records

Procedure PROC-AI-DEP-002: Human Oversight Procedure

Purpose: Define process for assigning and maintaining human oversight
Owner: AI Act Program Manager
Implements: Control DEP-002

Procedure Steps:

  1. Identify oversight roles per AI system
  2. Assess candidate competency
  3. Provide required training
  4. Formally assign oversight responsibilities
  5. Grant override and intervention authority
  6. Monitor oversight effectiveness

Outputs:

  • Oversight assignment records
  • Competency assessments
  • Training certificates
  • Authority delegation documents

Procedure PROC-AI-DEP-003: Monitoring and Incident Reporting Procedure

Purpose: Define process for operational monitoring and incident reporting
Owner: AI Act Program Manager
Implements: Control DEP-003

Procedure Steps:

  1. Implement monitoring per provider instructions
  2. Define risk thresholds and alerts
  3. Detect and investigate anomalies
  4. Escalate per escalation procedure
  5. Report serious incidents to provider and authorities
  6. Suspend or cease use when required

Outputs:

  • Monitoring dashboards
  • Incident reports
  • Authority notifications
  • Suspension/cessation records

Procedure PROC-AI-DEP-004: Worker and Affected Person Notification Procedure

Purpose: Define process for notifying workers and affected persons
Owner: HR Director
Implements: Control DEP-005

Procedure Steps:

  1. Identify affected workers and representatives
  2. Prepare notification materials
  3. Deliver notifications before deployment
  4. Collect acknowledgments
  5. Inform natural persons at point of decision
  6. Update notifications upon material changes

Outputs:

  • Notification records
  • Acknowledgment records
  • Communication materials

Procedure PROC-AI-DEP-005: Fundamental Rights Impact Assessment Procedure

Purpose: Define process for conducting FRIAs
Owner: AI Act Program Manager
Implements: Control DEP-006

Procedure Steps:

  1. Determine FRIA requirement and scope
  2. Map stakeholders and affected persons
  3. Assess fundamental rights risks
  4. Analyse potential impacts
  5. Define mitigation measures
  6. Submit for review and approval
  7. Notify market surveillance authority
  8. Monitor and update

Outputs:

  • FRIA reports
  • Authority notification records
  • Mitigation plans
  • Monitoring reports

Procedure PROC-AI-DEP-006: Right to Explanation Procedure

Purpose: Define process for handling explanation requests
Owner: AI Act Program Manager
Implements: Control DEP-007

Procedure Steps:

  1. Receive and log explanation request
  2. Acknowledge request within 5 business days
  3. Gather decision details and AI system outputs
  4. Prepare clear and meaningful explanation
  5. Review explanation for accuracy
  6. Deliver explanation within 30 days
  7. Record and archive

Outputs:

  • Explanation request logs
  • Explanation records
  • Response time tracking

COMPLIANCE

5.1 Compliance Monitoring

Monitoring Approach: Continuous automated monitoring supplemented by monthly manual reviews and quarterly comprehensive audits.

Compliance Metrics:

Metric | Target | Measurement Method | Frequency | Owner
Instructions Compliance Rate | 100% | % of AI systems used per instructions | Quarterly | AI Act Program Manager
Human Oversight Coverage | 100% | % of systems with assigned oversight | Quarterly | AI Act Program Manager
FRIA Completion Rate | 100% | % of required FRIAs completed | Quarterly | AI Act Program Manager
Worker Notification Rate | 100% | % of deployments with worker notification | Quarterly | HR Director
Incident Reporting Timeliness | 100% | % of incidents reported on time | Per incident | AI Act Program Manager
Log Retention Compliance | 100% | % of systems with compliant log retention | Quarterly | IT Director
Explanation Response Time | ≤30 days | Average response time for explanation requests | Quarterly | AI Act Program Manager

Monitoring Tools:

  • AI System Deployment Register
  • Compliance Dashboard
  • Monitoring and Alerting Systems
  • Monthly compliance reports
  • Quarterly AI Governance Committee reviews

5.2 Internal Audit Requirements

Audit Frequency: Annually (minimum)

Audit Scope:

  • Deployer compliance with instructions for use
  • Human oversight assignments and competency
  • Monitoring and incident reporting effectiveness
  • Log retention compliance
  • Worker and affected person notifications
  • FRIA completeness and quality
  • Right to explanation process effectiveness
  • Controls effectiveness (DEP-001 through DEP-007)

Audit Activities:

  • Review 100% of high-risk AI system deployment records
  • Sample 20% of human oversight assignments for competency verification
  • Test monitoring and alerting systems
  • Verify log retention and integrity
  • Review worker notification records
  • Assess FRIA quality and completeness
  • Test explanation request process

Audit Outputs:

  • Annual AI Deployer Obligations Audit Report
  • Findings and recommendations
  • Corrective action plans for deficiencies

5.3 External Audit / Regulatory Inspection

Preparation:

  • Maintain audit-ready deployer documentation at all times
  • Designate AI Act Program Manager and Legal as regulatory liaisons
  • Prepare standard response procedures for authority requests

Provide to Auditors/Regulators:

  • AI system deployment records
  • Instructions for use and compliance documentation
  • Human oversight assignment records
  • Monitoring logs and incident reports
  • Log retention evidence
  • Worker notification records
  • FRIA reports
  • Explanation request and response records
  • Internal audit reports
  • Evidence of controls execution

Authority Request Response:

  • Acknowledge request within 1 business day
  • Provide requested documentation within 5 business days
  • Coordinate through Legal and AI Act Program Manager
  • Document all interactions with authorities

ROLES AND RESPONSIBILITIES

6.1 RACI Matrix

Activity | AI Act Program Manager | HR Director | IT Director | Legal | AI Governance Committee
Use per Instructions | R/A | I | R | C | I
Human Oversight Assignment | R/A | C | I | C | I
Operational Monitoring | R/A | I | R | C | I
Log Retention | R | I | R/A | C | I
Worker Information | C | R/A | I | R | I
FRIA | R/A | C | C | R | A
Right to Explanation | R/A | C | C | R | I

RACI Legend:

  • R = Responsible (does the work)
  • A = Accountable (ultimately answerable)
  • C = Consulted (provides input)
  • I = Informed (kept up-to-date)

6.2 Role Descriptions

AI Act Program Manager

  • Primary Responsibility: Owns deployer obligations framework, ensures compliance with Articles 26, 27, and 86
  • Key Activities:
    • Manages deployer compliance program
    • Oversees human oversight assignments
    • Coordinates FRIA process
    • Manages incident reporting
    • Reports to AI Governance Committee
  • Required Competencies: EU AI Act expertise, risk management, compliance management

HR Director

  • Primary Responsibility: Manages worker information and notification obligations
  • Key Activities:
    • Identifies affected workers
    • Coordinates notifications to workers' representatives
    • Ensures compliance with employment law requirements
    • Manages worker communication
  • Required Competencies: Employment law, worker relations, communication management

IT Director

  • Primary Responsibility: Manages technical implementation of deployer obligations
  • Key Activities:
    • Configures log retention systems
    • Implements monitoring tools
    • Ensures technical compliance with provider instructions
    • Manages system configurations
  • Required Competencies: IT management, system administration, data management

Legal

  • Primary Responsibility: Provides legal guidance on deployer obligations and FRIA
  • Key Activities:
    • Reviews FRIA reports
    • Advises on notification requirements
    • Reviews explanation responses
    • Manages regulatory authority interactions
  • Required Competencies: EU AI Act, data protection law, fundamental rights

AI Governance Committee

  • Primary Responsibility: Oversight and approval of deployer obligations program
  • Key Activities:
    • Approves FRIAs
    • Reviews compliance reports
    • Escalation authority for serious incidents
    • Strategic oversight of deployer program
  • Required Competencies: AI governance, strategic management, risk oversight

EXCEPTIONS

7.1 Exception Philosophy

Deployer obligations under the EU AI Act are mandatory legal requirements. Exceptions are therefore granted only in narrowly defined circumstances, and only where compensating controls adequately mitigate the risk while preserving legal compliance.


7.2 Allowed Exceptions

The following exceptions may be granted with proper justification and approval:

Exception Type | Justification Required | Maximum Duration | Approval Authority | Compensating Controls
Extended Implementation Timeline | Technical complexity prevents immediate implementation | 30 days | AI Act Program Manager | Interim manual controls; Accelerated plan
Alternative Monitoring Method | Alternative method equally effective | Permanent | AI Governance Committee | Document rationale; Effectiveness verification
Extended Log Retention | Technical migration in progress | 60 days | IT Director + AI Act Program Manager | Interim backup; Migration plan

7.3 Prohibited Exceptions

The following exceptions cannot be granted under any circumstances:

  • Skipping human oversight - Mandatory per Article 26(2), no exceptions
  • Using AI system contrary to instructions - Mandatory per Article 26(1), no exceptions
  • Skipping FRIA when required - Mandatory per Article 27, no exceptions
  • Refusing explanation requests - Mandatory per Article 86, no exceptions
  • Failing to report serious incidents - Mandatory per Article 26(5), no exceptions
  • Deleting logs before minimum retention period - Mandatory per Article 26(6), no exceptions

7.4 Exception Request Process

Step 1: Submit Exception Request

  • Complete Exception Request Form (FORM-AI-EXCEPTION-001)
  • Include business justification
  • Propose compensating controls
  • Specify duration requested
  • Attach risk assessment

Step 2: Risk Assessment

  • AI Act Program Manager assesses risk of granting exception
  • Legal reviews compliance implications
  • Evaluates adequacy of compensating controls
  • Documents residual risk

Step 3: Approval

  • Route to appropriate approval authority based on exception type
  • AI Act Program Manager approval: Minor operational exceptions
  • AI Governance Committee approval: Significant exceptions
  • AI Governance Committee + Legal: Exceptions with regulatory risk

Step 4: Documentation and Monitoring

  • Document exception in Exception Register
  • Assign exception owner
  • Set review date
  • Monitor compensating controls
  • Report exceptions quarterly to AI Governance Committee

Step 5: Exception Review and Closure

  • Review exception at specified review date
  • Assess if exception still needed
  • Close exception when normal compliance achieved
  • Document lessons learned

ENFORCEMENT

8.1 Non-Compliance Consequences

Violation | Severity | Consequence | Remediation Required
Using AI system contrary to instructions | Critical | Immediate suspension of AI system use | Comply with instructions before resuming
No human oversight assigned | Critical | Immediate suspension of AI system use | Assign oversight within 5 business days
Failure to conduct required FRIA | Critical | Immediate suspension of AI system deployment | Complete FRIA before deployment
Failure to report serious incident | Critical | Immediate escalation to Legal and AI Governance Committee | Report immediately; corrective action plan
Log retention non-compliance | High | Written warning; corrective action | Implement compliant retention within 10 business days
Worker notification not completed | High | Deployment suspended until notification completed | Complete notifications within 5 business days
Explanation request not fulfilled | High | Escalation to AI Act Program Manager | Provide explanation within 5 business days

8.2 Escalation Procedures

Level 1: AI Act Program Manager

  • Minor procedural violations
  • Delays in implementation < 5 days
  • Action: Written warning, corrective action required

Level 2: AI Act Program Manager + Legal

  • Repeated violations
  • Potential regulatory non-compliance
  • Action: Formal review, corrective action plan, management notification

Level 3: AI Governance Committee

  • Critical compliance failures
  • Serious incident reporting failures
  • FRIA non-completion
  • Action: Immediate suspension, investigation, disciplinary action

Level 4: Executive Management + Legal

  • Potential regulatory enforcement action
  • Significant legal liability
  • Reputational risk
  • Action: Executive crisis management, legal strategy, regulatory engagement

8.3 Immediate Escalation Triggers

Escalate immediately to AI Governance Committee + Legal if:

  • High-risk AI system used without human oversight
  • Serious incident not reported to authorities
  • FRIA requirement identified but not conducted before deployment
  • Regulatory inquiry or inspection related to deployer obligations
  • Evidence of fundamental rights harm from AI system use

8.4 Regulatory Penalties

Non-compliance with deployer obligations under Article 26 may result in administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Article 99(4)).


8.5 Disciplinary Actions

Individuals responsible for deployer obligation violations may be subject to:

  • Verbal or written warning
  • Mandatory retraining
  • Performance improvement plan
  • Reassignment of responsibilities
  • Suspension (with pay during investigation)
  • Termination (for egregious violations, e.g., knowingly deploying AI without required FRIA or oversight)

Factors Considered:

  • Intent (knowing violation vs. honest mistake)
  • Severity of violation
  • Impact (actual or potential harm to affected persons)
  • Cooperation with remediation
  • Prior violation history

KEY PERFORMANCE INDICATORS (KPIs)

9.1 AI Deployer Obligations KPIs

KPI ID | KPI Name | Definition | Target | Measurement Method | Frequency | Owner | Reporting To
KPI-DEP-001 | Instructions Compliance Rate | % of AI systems used in accordance with provider instructions | 100% | (# compliant / # total) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee
KPI-DEP-002 | Human Oversight Coverage | % of high-risk AI systems with assigned human oversight personnel | 100% | (# with oversight / # total high-risk) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee
KPI-DEP-003 | FRIA Completion Rate | % of required FRIAs completed before deployment | 100% | (# FRIAs completed / # required) x 100 | Quarterly | AI Act Program Manager | AI Governance Committee
KPI-DEP-004 | Worker Notification Rate | % of workplace AI deployments with worker notification completed | 100% | (# notified / # deployments) x 100 | Quarterly | HR Director | AI Governance Committee
KPI-DEP-005 | Incident Reporting Timeliness | % of serious incidents reported within required timeframes | 100% | (# on time / # total incidents) x 100 | Per incident | AI Act Program Manager | AI Governance Committee
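
All five KPIs share the same rate formula, so a single helper suffices. In this informative sketch, the empty-population convention (reporting 100% when there is nothing to measure) is a local assumption, not part of the KPI definitions.

```python
def kpi_rate(numerator: int, total: int) -> float:
    """(# meeting the criterion / # total) x 100, per the KPI table above."""
    return 100.0 if total == 0 else 100.0 * numerator / total

# e.g. KPI-DEP-002: 47 of 48 high-risk systems with assigned oversight.
print(round(kpi_rate(47, 48), 1))  # 97.9
```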

9.2 KPI Dashboards and Reporting

Real-Time Dashboard (AI Act Program Manager access)

  • Current deployer compliance status per AI system
  • Human oversight assignment status
  • FRIA completion tracker
  • Incident reporting status
  • Log retention compliance

Monthly Management Report

  • KPI-DEP-001, 002, 004, 005
  • Trend analysis (vs. previous month)
  • Issues and risks
  • Planned actions

Quarterly AI Governance Committee Report

  • All KPIs
  • Deployer compliance assessment
  • FRIA review
  • Internal audit findings (if conducted)
  • Exception register review

Annual Executive Report

  • Full-year KPI performance
  • Deployer obligations maturity assessment
  • Strategic recommendations
  • Regulatory outlook

9.3 KPI Thresholds and Alerts

KPI | Green (Good) | Yellow (Warning) | Red (Critical) | Alert Action
Instructions Compliance Rate | 100% | 95-99% | < 95% | Red: Immediate escalation to AI Governance Committee Chair
Human Oversight Coverage | 100% | 95-99% | < 95% | Red: Immediate suspension of uncovered systems
FRIA Completion Rate | 100% | 90-99% | < 90% | Red: Deployment halt until FRIAs completed
Worker Notification Rate | 100% | 95-99% | < 95% | Yellow: Escalate to HR Director; Red: Suspend deployment
Incident Reporting Timeliness | 100% | 90-99% | < 90% | Red: Escalate to AI Governance Committee + Legal
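
The banding above can be applied mechanically once a KPI value is known. In this informative sketch, the yellow floor is a parameter because the table uses 95% for some KPIs and 90% for others.

```python
def rag_status(kpi_value: float, yellow_floor: float = 95.0) -> str:
    """Map a percentage KPI onto the Green/Yellow/Red bands above."""
    if kpi_value >= 100.0:
        return "Green"
    if kpi_value >= yellow_floor:
        return "Yellow"
    return "Red"

print(rag_status(97.9))        # Yellow -> monitor and report
print(rag_status(88.0, 90.0))  # Red -> alert action per the table
```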

TRAINING REQUIREMENTS

10.1 Training Program Overview

All personnel involved in deploying or operating high-risk AI systems must complete role-specific training to ensure competency in deployer obligations under the EU AI Act.


10.2 Role-Based Training Requirements

Role | Training Course | Duration | Content | Frequency | Assessment Required
AI Act Program Manager | Deployer Obligations Expert Training | 16 hours | Articles 26, 27, 86; FRIA methodology; Incident reporting | Initial + annually | Yes - Written exam (≥90%)
AI System Operators | Deployer Compliance Training | 8 hours | Instructions compliance; Monitoring; Log management; Incident reporting | Initial + annually | Yes - Written exam (≥80%) + Practical exercise
Human Oversight Personnel | Human Oversight Training | 12 hours | System-specific operation; Override procedures; Risk recognition; Decision review | Initial + per system + annually | Yes - Practical exercise + Scenario assessment
HR Director / HR Staff | Worker Notification Training | 4 hours | Worker information requirements; Notification procedures; Employment law | Initial + annually | Yes - Knowledge check (≥80%)
Legal | FRIA and Explanation Training | 8 hours | FRIA methodology; Right to explanation; Regulatory engagement | Initial + annually | Yes - Written exam (≥90%)
All Deployer Staff | Deployer Awareness Training | 2 hours | Deployer obligations overview; Incident escalation; Key contacts | At onboarding + annually | Yes - Knowledge check (≥80%)

10.3 Training Content by Topic

Deployer Obligations Overview

  • EU AI Act Article 26 requirements
  • Deployer role and responsibilities
  • Key compliance requirements
  • Penalty framework

FRIA Methodology

  • When FRIA is required (Article 27)
  • FRIA process and content requirements
  • Stakeholder identification and engagement
  • Risk assessment for fundamental rights

Right to Explanation

  • Article 86 requirements
  • Explanation content and format
  • Process for handling requests
  • Quality standards for explanations

Incident Reporting

  • Serious incident definition
  • Reporting timeframes and procedures
  • Authority notification requirements
  • Documentation requirements

10.4 Training Delivery Methods

Initial Training:

  • Instructor-led classroom or virtual training
  • Includes interactive exercises and case studies
  • Hands-on practice with monitoring tools and FRIA templates
  • Group discussions of deployment scenarios

Annual Refresher:

  • E-learning modules for core content review
  • Live update sessions for regulatory changes
  • Case study reviews of recent deployments and incidents
  • Knowledge assessment

On-the-Job Training:

  • Mentoring for new deployer staff
  • Supervised deployment activities for first 3 deployments
  • Shadowing during FRIA process

Just-in-Time Training:

  • Quick reference guides and job aids
  • Video tutorials on specific procedures
  • Help desk support from experienced staff

10.5 Training Effectiveness Measurement

Assessment Methods:

  • Written exams for knowledge retention
  • Practical exercises for skill application
  • Scenario-based assessments for decision-making
  • On-the-job observations for competency validation
  • Feedback surveys for training quality

Competency Validation:

  • Human Oversight Personnel: Must demonstrate system-specific competency before independent oversight
  • FRIA Leads: Must complete supervised FRIA before leading independently
  • All deployer staff: Must pass knowledge assessments with minimum required scores

Training Metrics:

Metric | Target | Frequency
Training completion rate | 100% | Quarterly
Assessment pass rate (first attempt) | ≥90% | Per training
Training effectiveness score (survey) | ≥4.0/5.0 | Per training
Time to competency (Oversight Personnel) | < 30 days | Per person

10.6 Training Records

Records Maintained:

  • Training attendance records
  • Assessment scores
  • Competency validations
  • Refresher training completion
  • Individual training transcripts

Retention: 10 years (to align with EU AI Act documentation retention)

Access: AI Act Program Manager, HR, Managers, Internal Audit, Competent Authorities (upon request)


DEFINITIONS

Term | Definition | Source
Deployer | Any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity | EU AI Act Article 3(4)
High-Risk AI System | An AI system that falls within one of the categories listed in Annex III or meets the criteria in Article 6 | EU AI Act Article 6
Instructions for Use | The information provided by the provider to inform the deployer of the intended purpose and proper use of the AI system | EU AI Act Article 3(15)
Human Oversight | Measures aimed at preventing or minimising the risks to health, safety, or fundamental rights that may emerge when a high-risk AI system is used | EU AI Act Article 14
Fundamental Rights Impact Assessment (FRIA) | An assessment of the impact of the use of a high-risk AI system on the fundamental rights of persons likely to be affected | EU AI Act Article 27
Serious Incident | An incident or malfunctioning of an AI system that directly or indirectly leads to death, serious damage to health, serious disruption of critical infrastructure, or breach of fundamental rights obligations | EU AI Act Article 3(49)
Market Surveillance Authority | The national authority responsible for market surveillance of AI systems | EU AI Act Article 3(26)
Right to Explanation | The right of affected persons to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure | EU AI Act Article 86

LINK WITH AI ACT AND ISO 42001

12.1 EU AI Act Regulatory Mapping

This standard implements the following EU AI Act requirements:

EU AI Act Provision | Article | Requirement Summary | Implemented By (Controls)
Use per Instructions | Article 26(1) | Deployers shall use high-risk AI systems in accordance with instructions for use | DEP-001
Human Oversight | Article 26(2) | Assign human oversight to competent natural persons | DEP-002
Input Data | Article 26(4) | Ensure input data relevance and representativeness | DEP-001
Monitoring and Reporting | Article 26(5) | Monitor operation and report risks/incidents | DEP-003
Log Retention | Article 26(6) | Retain automatically generated logs for at least 6 months | DEP-004
Worker Information | Article 26(7) | Inform workers' representatives and affected workers | DEP-005
Affected Person Information | Article 26(11) | Inform natural persons subject to AI decisions | DEP-005
FRIA | Article 27 | Conduct fundamental rights impact assessment | DEP-006
Right to Explanation | Article 86 | Enable affected persons to obtain explanations | DEP-007

12.2 ISO/IEC 42001:2023 Alignment

This standard aligns with ISO/IEC 42001:2023 as follows:

ISO 42001 Clause | Requirement | Implementation in This Standard
Clause 6.1: Actions to Address Risks | Risk assessment and mitigation | DEP-003, DEP-006
Clause 7.2: Competence | Ensure personnel have appropriate competence | DEP-002
Clause 7.4: Communication | Communication with interested parties | DEP-005, DEP-007
Clause 8.1: Operational Planning | Plan and control operational processes | DEP-001, DEP-004
Clause 9.1: Monitoring and Measurement | Monitor and measure performance | DEP-003
Clause 10.2: Nonconformity and Corrective Action | Address nonconformities | DEP-003

12.3 Relationship to Other Standards

This deployer obligations standard integrates with other AI Act standards:

Related Standard | Integration Point | Rationale
STD-AI-001: Classification | Risk classification determines deployer obligations | Deployer obligations apply to high-risk AI systems
STD-AI-002: Risk Management | Risk management feeds into FRIA and monitoring | Risk assessment methodology supports FRIA
STD-AI-007: Human Oversight | Human oversight requirements for deployers | Deployer assigns oversight per Article 26(2)
STD-AI-005: Logging | Log retention obligations for deployers | Deployer retains logs per Article 26(6)
STD-AI-006: Transparency | Transparency obligations inform notification requirements | Deployer provides information to affected persons
STD-AI-013: Incident Management | Incident reporting by deployers | Deployer reports serious incidents per Article 26(5)
STD-AI-014: Literacy and Training | Training for deployer personnel | Oversight personnel require competency per Article 26(2)
STD-AI-015: Supply Chain | Provider-deployer relationship management | Deployer receives instructions from provider

12.4 References and Related Documents

EU AI Act (Regulation (EU) 2024/1689):

  • Article 26: Obligations of deployers of high-risk AI systems
  • Article 27: Fundamental rights impact assessment for high-risk AI systems
  • Article 86: Right to explanation of individual decision-making

ISO/IEC Standards:

  • ISO/IEC 42001:2023: Information technology - Artificial intelligence - Management system

Internal Documents:

  • POL-AI-001: Artificial Intelligence Policy (parent policy)
  • STD-AI-001: AI System Classification Standard
  • STD-AI-002: AI Risk Management Standard
  • STD-AI-005: AI Logging and Record-Keeping Standard
  • STD-AI-006: AI Transparency Standard
  • STD-AI-007: AI Human Oversight Standard
  • STD-AI-013: AI Incident Management Standard
  • STD-AI-014: AI Literacy and Training Standard
  • STD-AI-015: AI Supply Chain Obligations Standard
  • PROC-AI-DEP-001 through -006: Deployer obligations procedures

APPROVAL AND AUTHORIZATION

Role | Name | Title | Signature | Date
Prepared By | AI Act Program Manager | AI Act Program Manager | _______________ | _________
Reviewed By | Sarah Johnson | Legal Counsel | _______________ | _________
Reviewed By | Jane Doe | Chief Strategy & Risk Officer | _______________ | _________
Approved By | Jane Doe | AI Governance Committee Chair | _______________ | _________

Effective Date: 2026-08-02
Next Review Date: 2027-08-02
Review Frequency: Annually or upon regulatory change


END OF STANDARD STD-AI-016


This standard is a living document. Feedback and improvement suggestions should be directed to the AI Act Program Manager.
