Deployer Obligations (Article 26)
Requirements for organisations using high-risk AI systems.
Learning Objectives
By the end of this chapter, you will be able to:
- Understand the complete scope of deployer obligations under Article 26
- Implement effective human oversight programmes for AI systems
- Conduct fundamental rights impact assessments (FRIA)
- Establish compliant log retention and monitoring practices
- Navigate GDPR integration requirements for AI deployment
- Manage worker information and consultation requirements
Deployers of high-risk AI systems have specific obligations under Article 26. While less extensive than provider obligations, these requirements are essential for ensuring high-risk AI is used safely, ethically, and in compliance with fundamental rights.
Understanding the Deployer Role
Who is a Deployer?
Article 3(4) Definition: A deployer is any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Provider vs. Deployer Distinction
| Criterion | Provider | Deployer |
|---|---|---|
| Core activity | Develops/places on market | Uses under own authority |
| Compliance burden | Extensive (Articles 8-22) | Focused (Article 26) |
| Pre-market obligations | Yes | No |
| Post-market obligations | Yes | Limited |
| Typical examples | AI vendors, software companies | Enterprises, public authorities |
When Deployers Become Providers
Article 25 triggers provider status when a deployer:
| Action | Result |
|---|---|
| Places their name/trademark on high-risk AI | Becomes provider for that system |
| Makes substantial modification | Becomes provider for modified system |
| Changes intended purpose to high-risk | Becomes provider for new purpose |
Compliance Note
Modifications to AI systems—even seemingly minor changes—may trigger provider obligations. Always assess modifications against Article 25 criteria before implementation.
Core Deployer Obligations (Article 26(1)-(12))
The Complete Obligation Framework
| Obligation | Article Reference | Key Requirements |
|---|---|---|
| Use per instructions | Article 26(1) | Follow provider's instructions for use |
| Human oversight | Article 26(2) | Assign competent persons with authority |
| Input data relevance | Article 26(4) | Ensure data relevant to intended purpose |
| Operation monitoring | Article 26(5) | Monitor per instructions, report incidents |
| Log retention | Article 26(6) | Keep logs minimum 6 months |
| Worker information | Article 26(7) | Inform workers before AI implementation |
| FRIA (if applicable) | Article 27 | Conduct fundamental rights impact assessment |
| Public authority registration | Article 26(8) | Verify system is registered in EU database |
| Data protection | Article 26(9) | Comply with GDPR, conduct DPIA if required |
| Post-RBI authorisation | Article 26(10) | Judicial/administrative authorisation for post-remote biometric identification |
| Inform natural persons | Article 26(11) | Inform persons subject to AI-assisted decisions |
| Authority cooperation | Article 26(12) | Provide information, access, cooperation |
Using AI Systems According to Instructions
Instruction Compliance Requirements
Article 26(1) requires deployers to take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions for use.
| Instruction Element | Deployer Action |
|---|---|
| Intended purpose | Verify deployment matches stated purpose |
| Operating environment | Ensure environment meets specifications |
| Input requirements | Provide data meeting quality criteria |
| User competency | Train staff per provider requirements |
| Limitations | Respect stated system limitations |
| Maintenance | Follow maintenance and update procedures |
Documenting Instruction Compliance
Maintain records demonstrating:
- Instructions for use received and reviewed
- Deployment context aligns with intended purpose
- Operating environment meets specifications
- Staff trained on system use
- Limitations understood and communicated
- Deviations from instructions (if any) documented and justified
💡 Best Practice: Create a deployment checklist based on the instructions for use. Complete this checklist before putting any high-risk AI system into service and retain it as compliance evidence.
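Below is a minimal sketch of how such a checklist could be captured as structured compliance evidence. The `DeploymentChecklist` class, its field names, and the example items are illustrative assumptions, not structures prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChecklistItem:
    """One verification step derived from the provider's instructions for use."""
    description: str
    completed: bool = False
    evidence: str = ""           # e.g. link to a training record or test report
    completed_on: date | None = None

@dataclass
class DeploymentChecklist:
    """Pre-deployment checklist retained as Article 26(1) compliance evidence."""
    system_name: str
    intended_purpose: str
    items: list[ChecklistItem] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        # Deploy only once every step from the instructions for use is verified.
        return bool(self.items) and all(i.completed for i in self.items)

checklist = DeploymentChecklist(
    system_name="CV screening tool",
    intended_purpose="Shortlisting applicants for advertised vacancies",
    items=[
        ChecklistItem("Instructions for use received and reviewed"),
        ChecklistItem("Operating environment meets provider specifications"),
        ChecklistItem("Oversight staff trained per provider requirements"),
    ],
)
assert not checklist.ready_to_deploy()  # open items block deployment
```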
Human Oversight Implementation
Article 26(2) Requirements
Deployers must assign human oversight to natural persons who have:
| Requirement | Description | Evidence |
|---|---|---|
| Competence | Necessary skills and knowledge | Training records, qualifications |
| Training | Specific training on the AI system | Completion certificates, assessment results |
| Authority | Power to override or halt AI system | Role descriptions, delegation records |
Human Oversight Framework
Oversight Roles and Responsibilities:
| Role | Responsibilities | Authority Level |
|---|---|---|
| Day-to-day Operator | Monitor outputs, flag anomalies | Escalate concerns |
| System Supervisor | Review flagged cases, approve high-stakes decisions | Override specific outputs |
| AI Governance Lead | Oversee compliance, manage incidents | Suspend system operation |
| Executive Sponsor | Strategic accountability, resource allocation | Terminate deployment |
Addressing Automation Bias (Article 14(4)(b), implemented by deployers per Article 26(1)-(2))
Article 14(4)(b) establishes automation bias awareness as a provider design obligation: high-risk AI systems must include measures enabling oversight persons to remain aware of the possible tendency to over-rely on AI outputs ("automation bias"). Deployers operationalise this through proper human oversight assignment under Article 26(1)-(2), ensuring oversight persons:
- Are aware of the possible tendency to over-rely on AI outputs ("automation bias")
- Are able to correctly interpret outputs in context
- Are able to decide not to use the AI system or disregard its output
Automation Bias Mitigation Strategies:
| Strategy | Implementation |
|---|---|
| Training | Include automation bias awareness in all AI training |
| Dual review | Require independent human review for high-stakes decisions |
| Confidence display | Show AI confidence levels to users |
| Explanation provision | Require AI to explain reasoning where possible |
| Regular calibration | Compare AI outputs against ground truth |
| Override tracking | Monitor when and why humans override AI (see the sketch below) |
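As a concrete illustration of override tracking, the sketch below appends each override event to a simple CSV log so that override rates and rationales can be reviewed during calibration. The field names and the `record_override` helper are illustrative assumptions, not a format prescribed by the Act:

```python
import csv
from datetime import datetime, timezone

# Illustrative override log: these field names are assumptions, not prescribed.
OVERRIDE_LOG_FIELDS = [
    "timestamp", "case_id", "ai_output", "human_decision", "reviewer", "rationale",
]

def record_override(path: str, case_id: str, ai_output: str,
                    human_decision: str, reviewer: str, rationale: str) -> None:
    """Append one override event so override rates and reasons can be reviewed."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=OVERRIDE_LOG_FIELDS)
        if f.tell() == 0:          # write the header on first use
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "ai_output": ai_output,
            "human_decision": human_decision,
            "reviewer": reviewer,
            "rationale": rationale,
        })

record_override("overrides.csv", "case-0042", "reject", "approve",
                "j.smith", "Applicant data incomplete; AI score unreliable")
```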
Human Oversight Documentation
Maintain records of:
- Oversight personnel assignments
- Competency assessments
- Training completion records
- Override events and rationale
- Escalation procedures followed
- Regular performance reviews
Input Data Quality (Article 26(4))
Ensuring Input Data Relevance
Deployers must ensure input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
| Data Quality Dimension | Deployer Responsibility |
|---|---|
| Relevance | Input data matches the system's intended use case |
| Representativeness | Data reflects the population/scenarios where AI is applied |
| Completeness | Required data fields are populated |
| Accuracy | Data is correct and current |
| Timeliness | Data is sufficiently recent for the use case |
Data Quality Monitoring
Implement ongoing data quality checks:
- Define data quality criteria aligned with instructions for use
- Establish data validation at point of entry
- Monitor for data drift or distribution changes (a minimal drift check is sketched after this list)
- Report data quality issues to provider if affecting system performance
- Document data quality assessments and actions taken
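A minimal drift check might compare a live input window against a reference window. This sketch assumes a single numeric feature and a relative-mean-shift test; the 0.25 threshold and the function name are illustrative choices, not requirements of the Act:

```python
import statistics

def drift_alert(reference: list[float], current: list[float],
                threshold: float = 0.25) -> bool:
    """Flag a shift in a numeric input feature between a reference window
    (e.g. data matching the instructions for use) and live data."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    if ref_std == 0:
        return statistics.mean(current) != ref_mean
    shift = abs(statistics.mean(current) - ref_mean) / ref_std
    return shift > threshold

# Example: live inputs trending away from the reference distribution
reference = [0.48, 0.52, 0.50, 0.47, 0.53, 0.51]
live = [0.71, 0.69, 0.74, 0.68, 0.72, 0.70]
if drift_alert(reference, live):
    print("Data drift detected: review inputs; inform the provider if performance is affected")
```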
Operation Monitoring (Article 26(5))
Monitoring Requirements
Deployers must monitor high-risk AI operation on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72(1).
Monitoring Framework:
| Monitoring Aspect | Frequency | Action Triggers |
|---|---|---|
| Performance accuracy | Continuous/periodic | Deviation beyond threshold |
| Output quality | Per use | Anomalous outputs |
| User feedback | Ongoing | Complaints or concerns |
| Incident detection | Continuous | Any malfunction or harm |
| Compliance status | Periodic | Audit findings |
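As one hypothetical instance of the "deviation beyond threshold" trigger, the sketch below compares observed accuracy against the level declared in the instructions for use; the five-percentage-point tolerance is an assumed value that deployers would set per system:

```python
def check_performance(observed_accuracy: float,
                      declared_accuracy: float,
                      tolerance: float = 0.05) -> str:
    """Compare live accuracy with the accuracy declared in the
    instructions for use; the tolerance is an illustrative threshold."""
    if observed_accuracy < declared_accuracy - tolerance:
        return "ALERT: deviation beyond threshold; investigate and inform the provider"
    return "OK"

print(check_performance(observed_accuracy=0.86, declared_accuracy=0.93))
```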
Suspension Obligation (Article 26(5))
If the deployer has reason to believe the AI system presents a risk within the meaning of Article 79(1), they must without undue delay:
- Inform the provider or distributor and the relevant market surveillance authority
- Suspend use of the system until the risk is addressed
For serious incidents, deployers must:
- Immediately inform the provider
- Then inform the importer or distributor (where applicable)
- Then inform the relevant market surveillance authorities
- If the provider cannot be reached, Article 73 (serious incident reporting) applies directly to the deployer
⚠️ Note: Financial institutions subject to EU financial services law may satisfy the monitoring obligation through their existing regulatory compliance frameworks.
Incident Reporting to Providers
When monitoring reveals issues, deployers must inform providers if:
- Performance degrades significantly
- Unexpected outputs or behaviours occur
- Users report problems or concerns
- Incidents cause or risk harm
Log Retention (Article 26(6))
Minimum Retention Requirements
| Aspect | Requirement | Notes |
|---|---|---|
| Retention period | At least 6 months | Unless EU or national law provides otherwise (GDPR data minimisation may require shorter retention in some cases; other law may require longer) |
| Period appropriate to intended purpose | As specified | Longer retention may be appropriate for the system's intended purpose; the provider may specify longer periods |
| Scope | Logs under the deployer's control | The deployer is responsible for retaining the logs within its control |
Log Management Best Practices
- Establish log retention policy aligned with AI Act and other regulations
- Implement secure, tamper-evident log storage
- Create log retrieval procedures for authority requests
- Document log contents and format
- Plan for log handover if AI system is transferred
- Consider longer retention where fundamental rights are implicated
Compliance Note
GDPR may require data minimisation while AI Act requires log retention. Resolve this tension by ensuring logs contain minimum necessary personal data and implementing appropriate access controls.
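A minimal retention-policy sketch, assuming a 183-day approximation of six months: any internal policy shorter than the Article 26(6) minimum is lifted to that floor before logs become eligible for deletion. The names and the day-count approximation are illustrative:

```python
from datetime import datetime, timedelta, timezone

AI_ACT_MINIMUM = timedelta(days=183)   # "at least six months" per Article 26(6)

def purge_cutoff(policy_retention: timedelta) -> datetime:
    """Return the deletion cutoff, never shorter than the AI Act minimum.
    Other EU or national law may require a longer period for specific logs."""
    effective = max(policy_retention, AI_ACT_MINIMUM)
    return datetime.now(timezone.utc) - effective

# Example: an internal 90-day policy is lifted to the statutory minimum
cutoff = purge_cutoff(timedelta(days=90))
print(f"Logs created before {cutoff:%Y-%m-%d} may be considered for deletion")
```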
Worker Information Requirements (Article 26(7))
Mandatory Information Provision
Before putting a high-risk AI system into use at the workplace, deployers who are employers must inform:
- Workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system
💡 Note: Article 26(7) requires notification of the fact of AI use — that workers will be subject to the system. The Act does not prescribe detailed information elements beyond this notification. However, providing additional context (intended purpose, human oversight, how to raise concerns) is good practice and may be required by national employment law.
Works Council and Union Consultation
Where applicable, integrate AI deployment into existing consultation frameworks:
- Inform works councils in advance of deployment
- Consult on impact assessments
- Address concerns raised by worker representatives
- Update collective agreements if needed
Fundamental Rights Impact Assessment (Article 27)
When FRIA is Required
Article 27 mandates FRIA for deployers that are:
| Deployer Type | Requirement |
|---|---|
| Bodies governed by public law | Mandatory FRIA |
| Private entities providing public services | Mandatory FRIA |
| Credit institutions (creditworthiness assessment — Annex III point 5(b)) | Mandatory FRIA |
| Life/health insurance (risk assessment/pricing — Annex III point 5(c)) | Mandatory FRIA |
⚠️ Exception: FRIA is not required for high-risk AI systems intended to be used in the area listed in point 2 of Annex III (critical infrastructure management and operation).
FRIA Content Requirements
The impact assessment must contain:
| Element | Article 27(1) | Description |
|---|---|---|
| Deployer's processes | (a) | Description of the deployer's processes in which the AI system will be used |
| Period and frequency | (b) | Period of time within which, and the frequency with which, the AI system is intended to be used |
| Categories of affected persons | (c) | Categories of natural persons and groups likely to be affected by its use |
| Specific risks of harm | (d) | Specific risks of harm likely to affect the identified categories of persons or groups |
| Human oversight measures | (e) | Description of the implementation of human oversight measures |
| Risk response, governance, and complaints | (f) | Measures to be taken in case of materialisation of risks, including arrangements for internal governance and complaint mechanisms |
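To keep the six content elements together during documentation, a FRIA record could be modelled as in the sketch below; the `FRIARecord` class and its field names are illustrative assumptions mapped to Article 27(1)(a)-(f). A completed record would then feed the template notified to the market surveillance authority under Article 27(3):

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Captures the Article 27(1)(a)-(f) elements; field names are illustrative."""
    deployer_processes: str        # (a) processes in which the AI system is used
    period_and_frequency: str      # (b) intended period and frequency of use
    affected_persons: list[str]    # (c) categories of persons/groups affected
    risks_of_harm: list[str]       # (d) specific risks to those categories
    oversight_measures: str        # (e) human oversight implementation
    risk_response: str             # (f) measures if risks materialise, including
                                   #     internal governance and complaint handling

    def is_complete(self) -> bool:
        # Every element must be present before notifying the authority.
        return all([self.deployer_processes, self.period_and_frequency,
                    self.affected_persons, self.risks_of_harm,
                    self.oversight_measures, self.risk_response])
```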
FRIA Process Framework
Phase 1: Scoping
- Identify AI system and intended deployment
- Determine affected fundamental rights
- Identify categories of affected persons
Phase 2: Risk Assessment
- Analyse potential impacts on each affected right
- Consider direct and indirect effects
- Assess likelihood and severity of harms
Phase 3: Mitigation Design
- Develop measures to address identified risks
- Design human oversight implementation
- Create complaints and redress mechanisms
Phase 4: Documentation and Notification
- Document complete assessment
- All deployers required to conduct FRIA must notify the relevant market surveillance authority of the results, submitting a filled-out template (Article 27(3), (5)) — this is not limited to public-sector entities
- Deployers may rely on previously conducted FRIAs or existing impact assessments for similar cases (Article 27(2)), but must update if elements change
- Review and update periodically
💡 Expert Tip: Where a DPIA under GDPR Article 35 has already been conducted, the FRIA shall complement (not replace) that assessment (Article 27(4)). Integrate both assessments, but ensure the FRIA covers fundamental rights dimensions beyond data protection.
GDPR Integration (Article 26(9))
DPIA Requirements
Article 26(9) requires deployers to use the information supplied by the provider under Article 13 (instructions for use) to conduct Data Protection Impact Assessments (DPIA) where required by GDPR Article 35.
DPIA Triggers for AI:
| Trigger | AI Context |
|---|---|
| Systematic profiling | AI-based profiling affecting individuals |
| Large-scale sensitive data | AI processing biometric, health data |
| Systematic monitoring | AI surveillance of public spaces |
| New technologies | Novel AI applications with uncertain impacts |
GDPR Controller Responsibilities
Where deploying AI that processes personal data:
- Identify legal basis for processing
- Implement data minimisation
- Ensure data subject rights can be exercised
- Conduct DPIA where required
- Document processing activities
- Implement appropriate security measures
Public Authority Registration (Article 26(8))
Deployers that are public authorities or EU institutions, bodies, offices, or agencies must:
- Comply with registration obligations under Article 49
- Verify the high-risk AI system is registered in the EU database (Article 71)
- Not use a system that is not registered in the EU database
Compliance Note
This obligation applies only to public-sector deployers, not private-sector deployers.
Informing Natural Persons (Article 26(11))
Deployers of Annex III high-risk AI systems that make or assist in decisions about natural persons must inform those persons that they are subject to the use of a high-risk AI system. This applies to all deployers, not just public authorities.
💡 Note: This is distinct from the worker information requirement in Article 26(7). Article 26(11) applies to any natural person affected by an AI-assisted decision, not just employees.
Post-Remote Biometric Identification (Article 26(10))
Deployers using high-risk AI for post-remote biometric identification in law enforcement are subject to strict conditions:
- Prior authorisation from a judicial authority or an independent administrative authority is required (in duly justified urgent cases, use may begin provided authorisation is requested without undue delay and no later than 48 hours)
- If authorisation is rejected, all data and results must be immediately deleted
- Use must not be untargeted — must relate to a specifically targeted individual
- Each use must be documented in the relevant police file
- Annual reporting to national authorities is required
Authority Cooperation (Article 26(12))
Cooperation Obligations
Deployers must cooperate with market surveillance authorities including:
| Requirement | Deployer Action |
|---|---|
| Information provision | Provide requested information within timeframes |
| Access | Grant access to AI system, logs, documentation |
| Testing support | Facilitate technical testing if required |
| Corrective action | Implement required corrective measures |
Preparing for Authority Requests
Maintain readiness by:
- Designating authority liaison contact
- Establishing document retrieval procedures
- Testing access provision capabilities
- Training relevant staff on cooperation requirements
Compliance Checklist: Deployer Obligations
Pre-Deployment:
- Review instructions for use thoroughly
- Verify deployment aligns with intended purpose
- Assign human oversight personnel
- Train staff on AI system use
- Conduct FRIA (if required)
- Conduct DPIA (if required)
- Inform workers and representatives
Operational:
- Monitor AI performance per instructions
- Maintain log retention systems
- Track human oversight activities
- Address data quality issues
- Report incidents to providers
Ongoing:
- Review and update FRIA/DPIA periodically
- Refresh staff training
- Assess for substantial modifications
- Maintain authority cooperation readiness
What You Learned
Key concepts from this chapter:
- Deployers must use high-risk AI strictly according to the provider's instructions for use
- Human oversight requires persons with competence, training, and authority to override the AI
- Automation bias awareness is an explicit requirement: train staff to recognise over-reliance risks
- Log retention minimum is 6 months; longer may be required for other regulatory purposes
- Workers must be informed before AI deployment in the workplace