Transparency and Information (Article 13)
Article 13 transparency requirements for deployers.
Learning Objectives
By the end of this chapter, you will be able to:
- Design AI systems with appropriate transparency for deployer interpretation
- Create comprehensive instructions for use meeting Article 13 requirements
- Communicate AI capabilities, limitations, and risks effectively
- Calibrate transparency level to intended purpose and user competence
- Integrate transparency with human oversight requirements
Article 13: Transparency Requirements
| Element | Description |
|---|---|
| Intended purpose | Clear description of what the AI does |
| Capabilities and limitations | What it can and cannot do |
| Performance metrics | Accuracy, error rates, known biases |
| Human oversight needs | Required level of human control |
| Known risks | Identified risks and mitigations |
| Data requirements | Training and input data specifications |

Instructions for use must be comprehensive, understandable, and accessible to deployers.
Article 13 establishes the transparency foundation for trustworthy AI. High-risk AI systems must be designed so deployers can understand, interpret, and appropriately use the system's outputs. Without transparency, human oversight becomes impossible.
The Transparency Imperative
Why Transparency Matters
| Purpose | Benefit |
|---|---|
| Interpretability | Deployers understand what AI outputs mean |
| Appropriate use | Deployers use AI within intended parameters |
| Risk awareness | Deployers know limitations and failure modes |
| Human oversight | Enables meaningful human control |
| Accountability | Clear understanding of AI behaviour |
Article 13 Requirements
Core Transparency Standard
High-risk AI systems must be designed and developed to ensure operation is sufficiently transparent to enable deployers to:
- Interpret the system's output appropriately
- Use it appropriately for its intended purpose
Transparency Calibration
The level of transparency must be appropriate to:
| Factor | Consideration |
|---|---|
| Intended purpose | Higher stakes = more transparency needed |
| User competence | Technical vs. non-technical users |
| State of the art | What transparency is technically feasible |
| Deployment context | Operational environment constraints |
Instructions for Use (Article 13(3))
Mandatory Content
Providers must supply deployers with instructions including:
| Element | Required Information |
|---|---|
| (a) Provider identity and contact | Name, address, contact details; authorised representative if applicable |
| (b) Characteristics, capabilities and limitations | Includes: (i) intended purpose, (ii) level of accuracy, robustness and cybersecurity (with metrics), (iii) foreseeable misuse and circumstances that may lead to risks, (iv) known limitations, (v) performance regarding specific persons or groups, (vi) input data specifications, (vii) information enabling deployers to interpret outputs |
| (c) Pre-determined changes | Changes to the system and its performance pre-determined by the provider at the initial conformity assessment |
| (d) Human oversight measures | Technical capabilities and measures for human oversight |
| (e) Computational/hardware resources and expected lifetime | Required resources, maintenance and care instructions |
| (f) Log collection mechanisms | Description of logging capabilities per Article 12 |
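One way to keep the mandatory elements from slipping through a release is to treat the instructions for use as structured data rather than free text. The sketch below is purely illustrative (the class name, field names, and example values are assumptions, not an official schema); it maps each Article 13(3) element to a field so that empty sections can be detected automatically.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: Article 13(3) mandatory elements as a structured
# record, so missing sections can be flagged before release.
@dataclass
class InstructionsForUse:
    provider_identity: str        # (a) name, address, contact details
    characteristics: str          # (b) capabilities and limitations
    predetermined_changes: str    # (c) changes pre-determined by the provider
    human_oversight: str          # (d) oversight measures and tooling
    resources_and_lifetime: str   # (e) compute/hardware needs, maintenance
    log_mechanisms: str           # (f) logging per Article 12

    def missing_elements(self) -> list[str]:
        """Return the names of any mandatory sections left empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

draft = InstructionsForUse(
    provider_identity="Acme AI GmbH, Example Str. 1, Berlin; compliance@example.com",
    characteristics="Resume-screening assistant; see performance section.",
    predetermined_changes="",   # still to be written
    human_oversight="Reviewer must confirm every rejection recommendation.",
    resources_and_lifetime="4 vCPU / 8 GB RAM; supported for 5 years.",
    log_mechanisms="All inferences logged per the Article 12 retention policy.",
)
print(draft.missing_elements())   # -> ['predetermined_changes']
```

A check like this can run in the documentation build, failing the release until every mandatory section has content.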
Writing Effective Instructions
Best Practices:
| Principle | Implementation |
|---|---|
| Audience-appropriate | Match user technical level |
| Actionable | Tell users what to DO |
| Complete | Cover all required elements |
| Clear | Avoid jargon where possible |
| Accessible | Easy to find and navigate |
| Updated | Reflect current system state |
Common Instruction Failures
| Failure | Problem |
|---|---|
| Generic templates | Don't reflect specific system |
| Missing limitations | Deployer doesn't know boundaries |
| Unclear metrics | Performance claims not verifiable |
| Buried warnings | Risk information not prominent |
| Outdated content | Doesn't match current system |
Communicating AI Capabilities and Limitations
Performance Disclosure
| Metric Type | How to Communicate |
|---|---|
| Accuracy | "System achieves X% accuracy on [specific task]" |
| Precision/Recall | "X% of positive predictions correct; Y% of actual positives detected" |
| Error rates | "False positive rate: X%; False negative rate: Y%" |
| Confidence ranges | "Outputs include confidence scores from 0-100%" |
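The disclosure statements above can be generated directly from evaluation results, which keeps published metrics verifiable against the underlying confusion matrix. A minimal sketch (the function name and counts are illustrative):

```python
# Illustrative sketch: turn raw confusion-matrix counts into the kinds of
# disclosure statements shown in the table above.
def disclosure_statements(tp: int, fp: int, tn: int, fn: int) -> dict[str, str]:
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total      # fraction of all predictions correct
    precision = tp / (tp + fp)        # positive predictions that were correct
    recall = tp / (tp + fn)           # actual positives that were detected
    fpr = fp / (fp + tn)              # negatives wrongly flagged positive
    fnr = fn / (fn + tp)              # positives wrongly flagged negative
    return {
        "accuracy": f"System achieves {accuracy:.1%} accuracy on this task",
        "precision_recall": (
            f"{precision:.1%} of positive predictions correct; "
            f"{recall:.1%} of actual positives detected"
        ),
        "error_rates": f"False positive rate: {fpr:.1%}; false negative rate: {fnr:.1%}",
    }

print(disclosure_statements(tp=80, fp=10, tn=95, fn=15)["error_rates"])
# -> False positive rate: 9.5%; false negative rate: 15.8%
```

Deriving the published text from the raw counts also means a metrics update cannot silently drift out of sync with the instructions for use.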
Limitation Disclosure
Clearly communicate:
- Input limitations: What data types/quality the system can process
- Population limitations: Groups where performance may differ
- Environmental limitations: Conditions affecting performance
- Temporal limitations: When data currency affects outputs
- Failure modes: How the system fails and warning signs
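Input limitations in particular can be enforced at runtime rather than only documented: if the instructions declare what the system can process, the same specification can reject or flag out-of-spec inputs. The sketch below is a toy illustration; the field names, types, and ranges are invented for the example.

```python
# Hypothetical sketch: enforce declared input specifications at runtime,
# so out-of-spec inputs are flagged instead of silently degrading outputs.
INPUT_SPEC = {
    "age": (int, 18, 100),        # (type, min, max) - illustrative fields
    "income": (float, 0.0, 1e7),
}

def out_of_spec(record: dict) -> list[str]:
    """Return warnings for fields missing, mistyped, or outside the declared range."""
    warnings = []
    for name, (ftype, lo, hi) in INPUT_SPEC.items():
        if name not in record:
            warnings.append(f"{name}: missing")
        elif not isinstance(record[name], ftype):
            warnings.append(f"{name}: expected {ftype.__name__}")
        elif not (lo <= record[name] <= hi):
            warnings.append(f"{name}: {record[name]} outside [{lo}, {hi}]")
    return warnings

print(out_of_spec({"age": 150, "income": 42000.0}))
# -> ['age: 150 outside [18, 100]']
```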
Risk Communication
| Risk Type | Disclosure Approach |
|---|---|
| Safety risks | Prominent warnings with specific scenarios |
| Fundamental rights risks | Clear statement of potential impacts |
| Bias risks | Groups that may be disproportionately affected |
| Misuse risks | Explicit "do not use for" statements |
Expert Insight
Think of instructions like medical package inserts—comprehensive information for professional users who need to understand benefits, risks, contraindications, and proper use.
Transparency for Different User Types
Technical Users
Provide:
- Detailed performance metrics
- Model architecture information
- Training data characteristics
- Validation methodology
- API specifications
Business Users
Provide:
- Plain-language capability descriptions
- Use case guidance
- Decision support information
- Escalation procedures
- Limitation summaries
Affected Persons
For those subject to AI decisions:
- Notice of AI involvement (Article 50)
- Explanation of decision factors
- Challenge/appeal procedures
- Human contact options
Transparency Technologies
Explainability Methods
| Method | Application |
|---|---|
| Feature importance | Which inputs influenced output |
| Counterfactual explanations | "If X had been Y, outcome would change" |
| Local explanations | Why this specific decision was made |
| Model cards | Standardised system documentation |
| Datasheets | Training data documentation |
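For a simple scoring rule, a counterfactual explanation of the form "if X had been Y, the outcome would change" can be computed directly. The toy sketch below assumes a linear model with made-up weights and a threshold; real systems need dedicated explainability tooling, but the principle is the same.

```python
# Toy sketch of a counterfactual explanation for a linear scoring rule.
# Weights, threshold, and feature names are invented for illustration.
WEIGHTS = {"income": 0.5, "debt": -0.8}
THRESHOLD = 10.0   # scores at or above this are approved

def counterfactual(applicant: dict, feature: str) -> str:
    """What value of `feature` (others held fixed) would flip the decision?"""
    rest = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS if k != feature)
    needed = (THRESHOLD - rest) / WEIGHTS[feature]
    return (f"If {feature} had been {needed:.1f} instead of "
            f"{applicant[feature]}, the outcome would change")

applicant = {"income": 10.0, "debt": 5.0}   # score = 5.0 - 4.0 = 1.0 -> rejected
print(counterfactual(applicant, "income"))
# -> If income had been 28.0 instead of 10.0, the outcome would change
```

Statements like this give the deployer (and the affected person) a concrete, actionable account of the decision boundary rather than an opaque score.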
User Interface Considerations
| Element | Purpose |
|---|---|
| Confidence displays | Show certainty of outputs |
| Explanation interfaces | Provide decision reasoning |
| Warning indicators | Flag low-confidence or edge cases |
| Override options | Enable human intervention |
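The interface elements above can be combined into a single gate that every model output passes through before reaching the deployer. A minimal sketch, assuming an invented review threshold and field names:

```python
# Minimal sketch (threshold and keys are assumptions): route each AI output
# through a confidence gate so low-certainty cases are flagged for review.
REVIEW_THRESHOLD = 0.70

def present_output(prediction: str, confidence: float) -> dict:
    """Attach the UI elements from the table above to a raw model output."""
    low_confidence = confidence < REVIEW_THRESHOLD
    return {
        "prediction": prediction,
        "confidence_display": f"{confidence:.0%}",  # show certainty to the user
        "warning": low_confidence,                  # flag edge cases
        "human_review_required": low_confidence,    # open the override path
    }

print(present_output("approve", 0.62))
# -> {'prediction': 'approve', 'confidence_display': '62%',
#     'warning': True, 'human_review_required': True}
```

Tying the warning indicator and the override path to the same threshold keeps the display consistent with the oversight procedure it is meant to trigger.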
Integration with Other Requirements
| Requirement | Transparency Connection |
|---|---|
| Human Oversight (Art. 14) | Transparency enables oversight |
| Technical Documentation (Art. 11) | Instructions are documentation component |
| Accuracy (Art. 15) | Accuracy must be transparently declared |
| Affected-Person Disclosure (Art. 50) | Disclosure obligations toward persons interacting with or affected by AI |
Transparency Compliance Checklist
Instructions for Use:
- Provider identity and contact included
- Intended purpose clearly stated
- Performance metrics disclosed
- Limitations explicitly stated
- Foreseeable misuse scenarios covered
- Risk circumstances identified
- Human oversight capabilities explained
- Input specifications provided
- Interpretability guidance included

System Design:
- Outputs interpretable by deployers
- Confidence/uncertainty communicated
- Warning mechanisms for edge cases
- Explanation capabilities where appropriate
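A checklist like this can also live as data in the compliance repository, so the open items are reported automatically. A trivial sketch (the item names mirror the checklist above, not any official schema):

```python
# Illustrative sketch: the transparency checklist as data, with a report
# of items not yet satisfied.
def open_items(checklist: dict[str, bool]) -> list[str]:
    """Return the checklist items that are not yet done."""
    return [item for item, done in checklist.items() if not done]

status = {
    "Provider identity and contact included": True,
    "Intended purpose clearly stated": True,
    "Performance metrics disclosed": False,
    "Limitations explicitly stated": True,
}
print(open_items(status))   # -> ['Performance metrics disclosed']
```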
What You Learned
Key concepts from this chapter:
- Transparency must enable deployers to **interpret outputs appropriately**
- **Instructions for use are mandatory** and must cover specific required elements
- Include **capabilities, limitations, and risks**, not just positive attributes
- Calibrate transparency level to **intended purpose and user competence**
- Transparency is the **foundation for meaningful human oversight**