AI Literacy (Article 4)
Understanding the universal AI literacy obligation — the first AI Act requirement already in force.
Learning Objectives
By the end of this chapter, you will be able to:
- Explain the AI literacy obligation and its legal basis
- Identify who must ensure AI literacy and who must receive it
- Understand the two dimensions of AI literacy (informed deployment + risk awareness)
- Design a practical AI literacy programme for your organisation
- Demonstrate compliance with this already-enforceable obligation
❗ CRITICAL: ALREADY IN FORCE. The AI literacy obligation under Article 4 has been enforceable since 2 February 2025. This is the earliest substantive obligation under the AI Act, alongside the prohibited practices ban. Your organisation must already be complying.
The Legal Obligation
Article 4 — Full Text
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
Key Elements Unpacked
| Element | Meaning |
|---|---|
| "Providers and deployers" | Both sides of the AI value chain bear this obligation |
| "shall take measures" | Mandatory — not aspirational or optional |
| "to their best extent" | Proportionality — effort must be reasonable but genuine |
| "sufficient level" | Context-dependent — what is sufficient varies by role |
| "staff and other persons" | Covers employees AND contractors, consultants, third parties |
| "dealing with the operation and use" | Anyone involved in operating, using, or overseeing AI |
| "on their behalf" | Includes outsourced AI operations |
| "taking into account" | Calibrate training to the person's existing knowledge and role |
| "considering the persons or groups" | Consider the end-users/affected persons, not just staff |
Definition of AI Literacy (Article 3(56))
'AI literacy' means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
The Two Dimensions
1. Informed Deployment — Staff must have the skills, knowledge, and understanding to:
- Understand how the AI system works
- Understand how it is intended to be used
- Correctly interpret the AI system's output
- Make informed decisions about AI deployment
2. Risk Awareness — Staff must be able to:
- Gain awareness about the opportunities and risks of AI
- Understand possible harm AI can cause
- Know where and how to access further information about risks
Expert Insight
The second dimension means staff don't need to memorise every possible risk — but they must know that risks exist and where to find more information. This is a "know-how-to-learn" standard, not a "know-everything" standard.
Who Must Ensure AI Literacy?
| Responsible Party | Obligation | Scope |
|---|---|---|
| Providers | Ensure literacy of staff and persons acting on their behalf | Development, testing, deployment teams; support staff |
| Deployers | Ensure literacy of staff and persons acting on their behalf | Operators, oversight personnel, decision-makers |
Who Must Receive AI Literacy Training?
| Group | Examples | Literacy Level |
|---|---|---|
| Direct AI operators | Engineers, data scientists, system administrators | Deep technical understanding |
| AI oversight personnel | Human-in-the-loop staff, compliance officers | Operational + compliance understanding |
| Decision-makers | Executives, managers using AI-informed decisions | Strategic + risk understanding |
| Support staff | Customer service, HR using AI tools | Practical usage + limitation awareness |
| Third parties | Contractors, vendors operating AI on your behalf | Role-appropriate understanding |
Third-Party Obligations
If you engage another company to operate AI on your behalf, you must ensure their staff also have sufficient AI literacy. Practical approaches:
- Impose contractual obligations requiring AI literacy training
- Provide training materials or access to your training programme
- Require certification or competency evidence
- Include AI literacy requirements in procurement criteria
Legislative Context (Recital 20)
Recital 20 elaborates on the purpose of AI literacy:
AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems.
The recital emphasises that AI literacy notions may include:
- Understanding the correct application of technical elements during development
- The measures to be applied during AI system use
- The suitable ways to interpret the AI system's output
- Understanding how AI-assisted decisions will impact affected persons
Supporting Mechanisms
| Mechanism | Reference | Purpose |
|---|---|---|
| AI Board | Recital 20; Article 66(f) | Support the Commission in promoting AI literacy tools |
| Voluntary Codes of Conduct | Recital 20; Article 95(2)(c) | Advance and promote AI literacy as frameworks for best practices |
| Commission & Member States | Recital 20 | Facilitate drawing up voluntary codes to advance AI literacy |
Designing an AI Literacy Programme
Step 1: Assess Current State
| Assessment Area | Questions |
|---|---|
| Staff inventory | Who interacts with AI systems? In what capacity? |
| Existing knowledge | What technical knowledge, experience, and education do they have? |
| Context mapping | What AI systems are used? In what contexts? |
| Affected persons | Who are the persons on whom the AI systems are used? |
| Gap analysis | Where are the knowledge gaps relative to required literacy levels? |
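The gap analysis in Step 1 can be sketched in code. This is a minimal illustration, not a prescribed method: the literacy levels, record fields, and function names are all assumptions introduced here, not taken from the Act.

```python
from dataclasses import dataclass

# Illustrative literacy levels, ordered from least to most demanding.
# These labels are an assumption for this sketch, not terms from the AI Act.
LEVELS = ["awareness", "practical", "operational", "technical"]

@dataclass
class StaffRecord:
    name: str
    role: str
    current_level: str   # assessed during the staff inventory
    required_level: str  # derived from the context mapping

    def gap(self) -> int:
        """How many levels short of the requirement this person is (0 = no gap)."""
        return max(0, LEVELS.index(self.required_level) - LEVELS.index(self.current_level))

def gap_report(staff: list[StaffRecord]) -> list[tuple[str, int]]:
    """List staff with a literacy gap, largest gaps first."""
    gaps = [(s.name, s.gap()) for s in staff if s.gap() > 0]
    return sorted(gaps, key=lambda t: t[1], reverse=True)
```

The point of the structure is that "sufficient level" is role-relative: the required level comes from the context mapping, not from a single organisation-wide bar.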
Step 2: Define Role-Based Competencies
| Role Category | Required Competencies |
|---|---|
| Technical staff | System architecture, data governance, risk assessment, testing methodologies, bias detection |
| Oversight personnel | Output interpretation, override procedures, automation bias awareness, incident recognition |
| Business users | Intended purpose, limitations, when to escalate, data quality requirements |
| Leadership | Strategic risk, compliance obligations, governance responsibilities, resource allocation |
| All staff | What AI is, basic opportunities and risks, organisational AI policy, reporting channels |
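The role-based competency table above can be mirrored as a simple lookup, so that every role inherits the all-staff baseline plus its own requirements. The category keys and competency strings below paraphrase the table and are illustrative only.

```python
# Hypothetical role-to-competency map mirroring the table above;
# category names and competency strings are illustrative paraphrases.
ROLE_COMPETENCIES = {
    "technical": ["system architecture", "data governance", "risk assessment",
                  "testing methodologies", "bias detection"],
    "oversight": ["output interpretation", "override procedures",
                  "automation bias awareness", "incident recognition"],
    "business": ["intended purpose", "limitations", "escalation criteria",
                 "data quality requirements"],
    "leadership": ["strategic risk", "compliance obligations",
                   "governance responsibilities", "resource allocation"],
}

# The "all staff" row is the baseline everyone receives.
BASELINE = ["what AI is", "basic opportunities and risks",
            "organisational AI policy", "reporting channels"]

def required_competencies(role: str) -> list[str]:
    """Baseline competencies for everyone, plus any role-specific ones."""
    return BASELINE + ROLE_COMPETENCIES.get(role, [])
```

Modelling it this way makes the Article 4 calibration requirement ("taking into account their technical knowledge, experience, education and training") explicit: an unmapped role still gets the baseline, never nothing.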
Step 3: Deliver Training
| Delivery Method | Best For | Frequency |
|---|---|---|
| Formal training sessions | New staff onboarding, role changes | At onboarding; annual refresh |
| Workshops | Deep dives on specific AI systems | Per new AI deployment |
| E-learning modules | Broad awareness, flexible scheduling | Ongoing, self-paced |
| Hands-on exercises | Technical and operational staff | Quarterly |
| Briefings and updates | Leadership, regulatory changes | As needed |
Step 4: Assess and Document
| Activity | Purpose | Evidence |
|---|---|---|
| Knowledge assessments | Verify understanding | Test scores, completion records |
| Competency evaluations | Verify practical ability | Performance reviews, exercises |
| Training records | Demonstrate compliance | Attendance logs, certificates |
| Programme reviews | Ensure effectiveness | Assessment results, feedback |
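Step 4's record-keeping can be sketched as a small data structure that flags lapsed or failed training. The record fields and the 365-day refresh window are organisational policy choices assumed for this sketch; Article 4 itself prescribes no interval.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed refresh policy: annual re-training (an organisational choice,
# not a requirement stated in the AI Act).
REFRESH_INTERVAL = timedelta(days=365)

@dataclass
class TrainingRecord:
    person: str
    module: str
    completed_on: date
    assessment_passed: bool

    def is_current(self, today: date) -> bool:
        """Current = assessment passed and completed within the refresh window."""
        return self.assessment_passed and (today - self.completed_on) <= REFRESH_INTERVAL

def needs_refresh(records: list[TrainingRecord], today: date) -> list[str]:
    """People with at least one lapsed or failed training record."""
    return sorted({r.person for r in records if not r.is_current(today)})
```

Records like these double as the compliance evidence column above: attendance, assessment outcome, and date in one auditable place.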
Enforcement and Penalties
💡 Notable Gap: As Bird & Bird observe, there appear to be no specific penalties for failure to comply with the AI literacy obligations at Article 4 — unlike other provisions of the Act. However, this does not diminish the obligation, and failure to ensure AI literacy could:
- Be considered an aggravating factor in other violations
- Contribute to liability if harm results from uninformed AI use
- Undermine compliance with related obligations (e.g., human oversight under Article 14 requires competent staff)
Compliance Timeline
| Date | Status |
|---|---|
| 2 February 2025 | Article 4 is enforceable — organisations must already be complying |
| Ongoing | AI Board and Commission to promote AI literacy tools and voluntary codes |
Compliance Checklist
- Identified all staff and third parties dealing with AI operations
- Assessed existing knowledge levels per role
- Defined role-based AI literacy competency requirements
- Developed or procured training programmes
- Delivered initial training to all relevant persons
- Established assessment mechanisms
- Imposed contractual AI literacy obligations on third parties
- Documented all training activities and records
- Established periodic refresh and update schedule
- Considered affected persons' need for AI literacy
What You Learned
Key concepts from this chapter:
- Article 4 AI literacy has been **in force** since 2 February 2025 — the earliest AI Act obligation
- Both **providers and deployers** must ensure AI literacy for their staff and third parties
- AI literacy has **two dimensions**: informed deployment capability + risk awareness
- Training must be **calibrated** to each person's role, knowledge, and context
- **Third parties** operating AI on your behalf must also be covered