aicomply
Lesson · 12 min · Chapter 10 of 14

Deployer Obligations

Requirements for organisations using high-risk AI systems.

Deployer Obligations (Article 26)

Learning Objectives

By the end of this chapter, you will be able to:

  • Understand the complete scope of deployer obligations under Article 26
  • Implement effective human oversight programmes for AI systems
  • Conduct fundamental rights impact assessments (FRIA)
  • Establish compliant log retention and monitoring practices
  • Navigate GDPR integration requirements for AI deployment
  • Manage worker information and consultation requirements

Deployers of high-risk AI systems have specific obligations under Article 26. While less extensive than provider obligations, these requirements are essential for ensuring high-risk AI is used safely, ethically, and in compliance with fundamental rights.

Understanding the Deployer Role

Who is a Deployer?

Article 3(4) Definition: A deployer is any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Provider vs. Deployer Distinction

| Criterion | Provider | Deployer |
| --- | --- | --- |
| Core activity | Develops/places on market | Uses under own authority |
| Compliance burden | Extensive (Articles 8-22) | Focused (Article 26) |
| Pre-market obligations | Yes | No |
| Post-market obligations | Yes | Limited |
| Typical examples | AI vendors, software companies | Enterprises, public authorities |

When Deployers Become Providers

Article 25 triggers provider status when a deployer:

| Action | Result |
| --- | --- |
| Places their name/trademark on high-risk AI | Becomes provider for that system |
| Makes substantial modification | Becomes provider for modified system |
| Changes intended purpose to high-risk | Becomes provider for new purpose |

Compliance Note

Modifications to AI systems—even seemingly minor changes—may trigger provider obligations. Always assess modifications against Article 25 criteria before implementation.

Core Deployer Obligations (Article 26(1)-(7))

The Complete Obligation Framework

| Obligation | Article Reference | Key Requirements |
| --- | --- | --- |
| Use per instructions | Article 26(1) | Follow provider's instructions for use |
| Human oversight | Article 26(2) | Assign competent persons with authority |
| Input data relevance | Article 26(4) | Ensure data relevant to intended purpose |
| Operation monitoring | Article 26(5) | Monitor per instructions, report incidents |
| Log retention | Article 26(6) | Keep logs minimum 6 months |
| Worker information | Article 26(7) | Inform workers before AI implementation |
| FRIA (if applicable) | Article 27 | Conduct fundamental rights impact assessment |
| Public authority registration | Article 26(8) | Verify system is registered in EU database |
| Data protection | Article 26(9) | Comply with GDPR, conduct DPIA if required |
| Post-RBI authorisation | Article 26(10) | Judicial/admin authorisation for post-remote biometric identification |
| Inform natural persons | Article 26(11) | Inform persons subject to AI-assisted decisions |
| Authority cooperation | Article 26(12) | Provide information, access, cooperation |

Using AI Systems According to Instructions

Instruction Compliance Requirements

Article 26(1) requires deployers to take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions for use.

| Instruction Element | Deployer Action |
| --- | --- |
| Intended purpose | Verify deployment matches stated purpose |
| Operating environment | Ensure environment meets specifications |
| Input requirements | Provide data meeting quality criteria |
| User competency | Train staff per provider requirements |
| Limitations | Respect stated system limitations |
| Maintenance | Follow maintenance and update procedures |

Documenting Instruction Compliance

Maintain records demonstrating:

  • Instructions for use received and reviewed
  • Deployment context aligns with intended purpose
  • Operating environment meets specifications
  • Staff trained on system use
  • Limitations understood and communicated
  • Deviations from instructions (if any) documented and justified

💡 Best Practice: Create a deployment checklist based on the instructions for use. Complete this checklist before putting any high-risk AI system into service and retain it as compliance evidence.
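Such a checklist can be sketched as a small data structure; the item names, fields, and methods below are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One item derived from the provider's instructions for use."""
    description: str
    completed: bool = False
    evidence: str = ""  # e.g. a link to training records or test reports

@dataclass
class DeploymentChecklist:
    system_name: str
    items: list[ChecklistItem] = field(default_factory=list)

    def is_ready(self) -> bool:
        """The system should only go live once every item is completed."""
        return bool(self.items) and all(i.completed for i in self.items)

    def outstanding(self) -> list[str]:
        """Items still blocking go-live."""
        return [i.description for i in self.items if not i.completed]

checklist = DeploymentChecklist("CV screening tool", [
    ChecklistItem("Instructions for use reviewed", completed=True),
    ChecklistItem("Staff trained per provider requirements"),
])
print(checklist.is_ready())     # False: one item outstanding
print(checklist.outstanding())  # ['Staff trained per provider requirements']
```

Retaining the completed object (serialised with its evidence links) gives the compliance record the best-practice tip describes.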

Human Oversight Implementation

Article 26(2) Requirements

Deployers must assign human oversight to natural persons who have:

| Requirement | Description | Evidence |
| --- | --- | --- |
| Competence | Necessary skills and knowledge | Training records, qualifications |
| Training | Specific training on the AI system | Completion certificates, assessment results |
| Authority | Power to override or halt AI system | Role descriptions, delegation records |

Human Oversight Framework

Oversight Roles and Responsibilities:

| Role | Responsibilities | Authority Level |
| --- | --- | --- |
| Day-to-day Operator | Monitor outputs, flag anomalies | Escalate concerns |
| System Supervisor | Review flagged cases, approve high-stakes decisions | Override specific outputs |
| AI Governance Lead | Oversee compliance, manage incidents | Suspend system operation |
| Executive Sponsor | Strategic accountability, resource allocation | Terminate deployment |

Addressing Automation Bias (Article 14(4)(b), implemented by deployers per Article 26(1)-(2))

Article 14(4)(b) establishes automation bias awareness as a provider design obligation: high-risk AI systems must include measures enabling oversight persons to remain aware of the tendency to over-rely on AI outputs. Deployers operationalise this through proper human oversight assignment under Article 26(1)-(2), ensuring oversight persons:

  • Are aware of the possible tendency to over-rely on AI outputs ("automation bias")
  • Are able to correctly interpret outputs in context
  • Are able to decide not to use the AI system or disregard its output

Automation Bias Mitigation Strategies:

| Strategy | Implementation |
| --- | --- |
| Training | Include automation bias awareness in all AI training |
| Dual review | Require independent human review for high-stakes decisions |
| Confidence display | Show AI confidence levels to users |
| Explanation provision | Require AI to explain reasoning where possible |
| Regular calibration | Compare AI outputs against ground truth |
| Override tracking | Monitor when and why humans override AI |

Human Oversight Documentation

Maintain records of:

  • Oversight personnel assignments
  • Competency assessments
  • Training completion records
  • Override events and rationale
  • Escalation procedures followed
  • Regular performance reviews
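The override events and rationale listed above could be recorded as an append-only log; the field names and the rate heuristic below are illustrative assumptions, not requirements of the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideEvent:
    """One human override of an AI output, kept for oversight records."""
    system_id: str
    operator: str
    ai_output: str
    human_decision: str
    rationale: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class OverrideLog:
    def __init__(self) -> None:
        self._events: list[OverrideEvent] = []

    def record(self, event: OverrideEvent) -> None:
        self._events.append(event)

    def override_rate(self, total_decisions: int) -> float:
        """Share of decisions overridden. A rate near zero on a fallible
        system can be one rough signal of automation bias (over-reliance)."""
        return len(self._events) / total_decisions if total_decisions else 0.0

log = OverrideLog()
log.record(OverrideEvent("cv-screener", "j.doe", "reject", "advance",
                         "Relevant experience missed by the model"))
print(f"{log.override_rate(200):.3f}")  # 0.005
```

Reviewing these records periodically supports both the calibration and override-tracking strategies in the table above.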

Input Data Quality (Article 26(4))

Ensuring Input Data Relevance

Deployers must ensure input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.

| Data Quality Dimension | Deployer Responsibility |
| --- | --- |
| Relevance | Input data matches the system's intended use case |
| Representativeness | Data reflects the population/scenarios where AI is applied |
| Completeness | Required data fields are populated |
| Accuracy | Data is correct and current |
| Timeliness | Data is sufficiently recent for the use case |

Data Quality Monitoring

Implement ongoing data quality checks:

  • Define data quality criteria aligned with instructions for use
  • Establish data validation at point of entry
  • Monitor for data drift or distribution changes
  • Report data quality issues to provider if affecting system performance
  • Document data quality assessments and actions taken
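A minimal sketch of a batch-level input check along these lines, assuming the tolerance would be set from the provider's instructions for use (the function and field names are illustrative):

```python
def check_input_quality(records, required_fields, max_missing_rate=0.05):
    """Flag required fields whose missing-value rate exceeds a tolerance.
    Issues found here would be documented and, if they affect system
    performance, reported to the provider."""
    issues = []
    for f in required_fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        rate = missing / len(records) if records else 1.0
        if rate > max_missing_rate:
            issues.append(f"{f}: {rate:.0%} missing (limit {max_missing_rate:.0%})")
    return issues

batch = [
    {"age": 41, "income": 52000},
    {"age": 29, "income": None},
    {"age": 35, "income": 61000},
    {"age": None, "income": 48000},
]
print(check_input_quality(batch, ["age", "income"], max_missing_rate=0.30))
# []  (both fields within the 30% tolerance)
```

The same pattern extends to range checks and distribution-drift comparisons against a reference window.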

Operation Monitoring (Article 26(5))

Monitoring Requirements

Deployers must monitor high-risk AI operation on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72(1).

Monitoring Framework:

| Monitoring Aspect | Frequency | Action Triggers |
| --- | --- | --- |
| Performance accuracy | Continuous/periodic | Deviation beyond threshold |
| Output quality | Per use | Anomalous outputs |
| User feedback | Ongoing | Complaints or concerns |
| Incident detection | Continuous | Any malfunction or harm |
| Compliance status | Periodic | Audit findings |
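The "deviation beyond threshold" trigger can be sketched as a simple comparison against the baseline declared in the instructions for use; the baseline and tolerance values here are illustrative:

```python
def deviation_exceeds_threshold(baseline_accuracy: float,
                                observed_accuracy: float,
                                tolerance: float = 0.05) -> bool:
    """True when observed accuracy falls more than `tolerance` below the
    baseline, signalling that the monitoring plan's escalation path applies."""
    return (baseline_accuracy - observed_accuracy) > tolerance

print(deviation_exceeds_threshold(0.92, 0.90))  # False: within tolerance
print(deviation_exceeds_threshold(0.92, 0.84))  # True: escalate and inform provider
```

In practice the baseline, tolerance, and measurement window should all come from the provider's monitoring guidance rather than local defaults.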

Suspension Obligation (Article 26(5))

If the deployer has reason to believe the AI system presents a risk within the meaning of Article 79(1), they must without undue delay:

  1. Inform the provider or distributor and the relevant market surveillance authority
  2. Suspend use of the system until the risk is addressed

For serious incidents, deployers must:

  • Immediately inform the provider
  • Then inform the importer or distributor (where applicable)
  • Then inform the relevant market surveillance authorities
  • If the provider cannot be reached, Article 73 (serious incident reporting) applies directly to the deployer

⚠️ Note: Financial institutions subject to EU financial services law may satisfy the monitoring obligation through their existing regulatory compliance frameworks.

Incident Reporting to Providers

When monitoring reveals issues, deployers must inform providers if:

  • Performance degrades significantly
  • Unexpected outputs or behaviours occur
  • Users report problems or concerns
  • Incidents cause or risk harm

Log Retention (Article 26(6))

Minimum Retention Requirements

| Aspect | Requirement | Notes |
| --- | --- | --- |
| Automatically generated logs | Retain for a minimum of 6 months | Unless EU or national law provides otherwise (GDPR data minimisation may require shorter retention in some cases; other law may require longer) |
| Retention period | Appropriate to the intended purpose | Provider may specify longer periods |
| Control | Logs remain under deployer control | Deployer responsible for retention throughout |

Log Management Best Practices

  • Establish log retention policy aligned with AI Act and other regulations
  • Implement secure, tamper-evident log storage
  • Create log retrieval procedures for authority requests
  • Document log contents and format
  • Plan for log handover if AI system is transferred
  • Consider longer retention where fundamental rights implicated
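The 6-month minimum can be enforced with a simple eligibility check before any purge. A sketch, assuming 183 days as a conservative approximation of six months and an illustrative `other_law_minimum` parameter for stricter sectoral rules:

```python
from datetime import date, timedelta

AI_ACT_MINIMUM = timedelta(days=183)  # conservative six months (Art. 26(6))

def deletion_allowed(log_date: date, today: date,
                     other_law_minimum: timedelta = timedelta(0)) -> bool:
    """A log entry may be purged only once every applicable minimum has
    elapsed; other EU or national law may require longer retention."""
    required = max(AI_ACT_MINIMUM, other_law_minimum)
    return today - log_date >= required

print(deletion_allowed(date(2025, 1, 1), date(2025, 4, 1)))   # False: ~3 months
print(deletion_allowed(date(2025, 1, 1), date(2025, 12, 1)))  # True
```

Wiring a check like this into the retention job helps resolve the GDPR tension noted below: logs are kept exactly as long as the strictest applicable rule requires, and no longer.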

Compliance Note

GDPR may require data minimisation while AI Act requires log retention. Resolve this tension by ensuring logs contain minimum necessary personal data and implementing appropriate access controls.

Worker Information Requirements (Article 26(7))

Mandatory Information Provision

Before putting a high-risk AI system into use at the workplace, deployers who are employers must inform workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system.

💡 Note: Article 26(7) requires notification of the fact of AI use — that workers will be subject to the system. The Act does not prescribe detailed information elements beyond this notification. However, providing additional context (intended purpose, human oversight, how to raise concerns) is good practice and may be required by national employment law.

Works Council and Union Consultation

Where applicable, integrate AI deployment into existing consultation frameworks:

  • Inform works councils in advance of deployment
  • Consult on impact assessments
  • Address concerns raised by worker representatives
  • Update collective agreements if needed

Fundamental Rights Impact Assessment (Article 27)

When FRIA is Required

Article 27 mandates FRIA for deployers that are:

| Deployer Type | Requirement |
| --- | --- |
| Bodies governed by public law | Mandatory FRIA |
| Private entities providing public services | Mandatory FRIA |
| Credit institutions (creditworthiness assessment — Annex III point 5(b)) | Mandatory FRIA |
| Life/health insurance (risk assessment/pricing — Annex III point 5(c)) | Mandatory FRIA |

⚠️ Exception: FRIA is not required for high-risk AI systems intended to be used in the area listed in point 2 of Annex III (critical infrastructure management and operation).

FRIA Content Requirements

The impact assessment must contain:

| Element | Article 27(1) | Description |
| --- | --- | --- |
| Deployer's processes | (a) | Description of the deployer's processes in which the AI system will be used |
| Period and frequency | (b) | Period of time within which, and the frequency with which, the AI system is intended to be used |
| Categories of affected persons | (c) | Categories of natural persons and groups likely to be affected by its use |
| Specific risks of harm | (d) | Specific risks of harm likely to affect the identified categories of persons or groups |
| Human oversight measures | (e) | Description of the implementation of human oversight measures |
| Risk response, governance, and complaints | (f) | Measures to be taken in case of materialisation of risks, including arrangements for internal governance and complaint mechanisms |

FRIA Process Framework

Phase 1: Scoping

  • Identify AI system and intended deployment
  • Determine affected fundamental rights
  • Identify categories of affected persons

Phase 2: Risk Assessment

  • Analyse potential impacts on each affected right
  • Consider direct and indirect effects
  • Assess likelihood and severity of harms

Phase 3: Mitigation Design

  • Develop measures to address identified risks
  • Design human oversight implementation
  • Create complaints and redress mechanisms

Phase 4: Documentation and Notification

  • Document complete assessment
  • All deployers required to conduct FRIA must notify the relevant market surveillance authority of the results, submitting a filled-out template (Article 27(3), (5)) — this is not limited to public-sector entities
  • Deployers may rely on previously conducted FRIAs or existing impact assessments for similar cases (Article 27(2)), but must update if elements change
  • Review and update periodically
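The Article 27(1)(a)-(f) content elements can be captured in a simple record for internal tracking; this structure is illustrative and is not the official notification template:

```python
from dataclasses import dataclass

@dataclass
class FriaRecord:
    """Fields mirror Article 27(1)(a)-(f); the official template referenced
    in Article 27(5) may differ in form."""
    deployer_processes: str        # (a) processes in which the system is used
    period_and_frequency: str      # (b) intended period and frequency of use
    affected_persons: list[str]    # (c) categories of affected persons/groups
    risks_of_harm: list[str]       # (d) specific risks to those categories
    oversight_measures: str        # (e) human oversight implementation
    risk_response: str             # (f) governance and complaint arrangements

    def is_complete(self) -> bool:
        """All six elements must be filled in before notification."""
        return all([self.deployer_processes, self.period_and_frequency,
                    self.affected_persons, self.risks_of_harm,
                    self.oversight_measures, self.risk_response])

fria = FriaRecord(
    deployer_processes="Creditworthiness assessment in retail lending",
    period_and_frequency="Continuous use, roughly 500 assessments per day",
    affected_persons=["loan applicants"],
    risks_of_harm=["discriminatory credit denial"],
    oversight_measures="Credit officer reviews all adverse decisions",
    risk_response="Internal appeals panel; public complaints channel",
)
print(fria.is_complete())  # True
```

Versioning such records also supports the Article 27(2) duty to update the assessment when any element changes.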

💡 Expert Tip: Where a DPIA under GDPR Article 35 has already been conducted, the FRIA shall complement (not replace) that assessment (Article 27(4)). Integrate both assessments, but ensure the FRIA covers fundamental rights dimensions beyond data protection.

GDPR Integration (Article 26(9)-(10))

DPIA Requirements

Article 26(9) requires deployers to use provider-supplied information to conduct Data Protection Impact Assessments (DPIA) where required by GDPR Article 35.

DPIA Triggers for AI:

| Trigger | AI Context |
| --- | --- |
| Systematic profiling | AI-based profiling affecting individuals |
| Large-scale sensitive data | AI processing biometric, health data |
| Systematic monitoring | AI surveillance of public spaces |
| New technologies | Novel AI applications with uncertain impacts |

GDPR Controller Responsibilities

Where deploying AI that processes personal data:

  • Identify legal basis for processing
  • Implement data minimisation
  • Ensure data subject rights can be exercised
  • Conduct DPIA where required
  • Document processing activities
  • Implement appropriate security measures

Public Authority Registration (Article 26(8))

Deployers that are public authorities or EU institutions, bodies, offices, or agencies must:

  • Comply with registration obligations under Article 49
  • Verify the high-risk AI system is registered in the EU database (Article 71)
  • Not use a system that is not registered in the EU database

Compliance Note

This obligation applies only to public-sector deployers, not private-sector deployers.

Informing Natural Persons (Article 26(11))

Deployers of Annex III high-risk AI systems that make or assist in decisions about natural persons must inform those persons that they are subject to the use of a high-risk AI system. This applies to all deployers, not just public authorities.

💡 Note: This is distinct from the worker information requirement in Article 26(7). Article 26(11) applies to any natural person affected by an AI-assisted decision, not just employees.

Post-Remote Biometric Identification (Article 26(10))

Deployers using high-risk AI for post-remote biometric identification in law enforcement must:

  • Obtain prior authorisation from a judicial authority or an independent administrative authority (in urgent cases, within 48 hours of use)
  • Immediately delete all data and results if authorisation is rejected
  • Ensure each use is targeted at a specifically identified individual; untargeted use is prohibited
  • Document each use in the relevant police file
  • Submit annual reports to the relevant national authorities

Authority Cooperation (Article 26(12))

Cooperation Obligations

Deployers must cooperate with market surveillance authorities including:

| Requirement | Deployer Action |
| --- | --- |
| Information provision | Provide requested information within timeframes |
| Access | Grant access to AI system, logs, documentation |
| Testing support | Facilitate technical testing if required |
| Corrective action | Implement required corrective measures |

Preparing for Authority Requests

Maintain readiness by:

  • Designating authority liaison contact
  • Establishing document retrieval procedures
  • Testing access provision capabilities
  • Training relevant staff on cooperation requirements

Compliance Checklist: Deployer Obligations

Pre-Deployment:

  • Review instructions for use thoroughly
  • Verify deployment aligns with intended purpose
  • Assign human oversight personnel
  • Train staff on AI system use
  • Conduct FRIA (if required)
  • Conduct DPIA (if required)
  • Inform workers and representatives

Operational:

  • Monitor AI performance per instructions
  • Maintain log retention systems
  • Track human oversight activities
  • Address data quality issues
  • Report incidents to providers

Ongoing:

  • Review and update FRIA/DPIA periodically
  • Refresh staff training
  • Assess for substantial modifications
  • Maintain authority cooperation readiness

What You Learned

Key concepts from this chapter

Deployers must use high-risk AI strictly according to provider instructions for use

Human oversight requires persons with competence, training, and authority to override AI

Automation bias awareness is an explicit requirement: train staff to recognise over-reliance risks

Log retention minimum is 6 months; longer may be required for other regulatory purposes

Workers must be informed before AI deployment in the workplace

Chapter Complete

High-Risk AI Compliance
