Lesson · 10 min · Chapter 5 of 14

Transparency and Information (Article 13)

Article 13 transparency requirements for deployers.

Learning Objectives

By the end of this chapter, you will be able to:

  • Design AI systems with appropriate transparency for deployer interpretation
  • Create comprehensive instructions for use meeting Article 13 requirements
  • Communicate AI capabilities, limitations, and risks effectively
  • Calibrate transparency level to intended purpose and user competence
  • Integrate transparency with human oversight requirements

Article 13: Transparency Requirements

  • **Intended Purpose**: Clear description of what the AI does
  • **Capabilities & Limitations**: What it can and cannot do
  • **Performance Metrics**: Accuracy, error rates, known biases
  • **Human Oversight Needs**: Required level of human control
  • **Known Risks**: Identified risks and mitigations
  • **Data Requirements**: Training and input data specifications

Instructions for use must be comprehensive, understandable, and accessible to deployers.


Article 13 establishes the transparency foundation for trustworthy AI. High-risk AI systems must be designed so deployers can understand, interpret, and appropriately use the system's outputs. Without transparency, human oversight becomes impossible.

The Transparency Imperative

Why Transparency Matters

| Purpose | Benefit |
| --- | --- |
| Interpretability | Deployers understand what AI outputs mean |
| Appropriate use | Deployers use AI within intended parameters |
| Risk awareness | Deployers know limitations and failure modes |
| Human oversight | Enables meaningful human control |
| Accountability | Clear understanding of AI behaviour |

Article 13 Requirements

Core Transparency Standard

High-risk AI systems must be designed and developed to ensure operation is sufficiently transparent to enable deployers to:

  1. Interpret the system's output appropriately
  2. Use it appropriately for its intended purpose

Transparency Calibration

The level of transparency must be appropriate to:

| Factor | Consideration |
| --- | --- |
| Intended purpose | Higher stakes = more transparency needed |
| User competence | Technical vs. non-technical users |
| State of the art | What transparency is technically feasible |
| Deployment context | Operational environment constraints |

Instructions for Use (Article 13(3))

Mandatory Content

Providers must supply deployers with instructions including:

| Element | Required Information |
| --- | --- |
| (a) Provider identity and contact | Name, address, contact details; authorised representative if applicable |
| (b) Characteristics, capabilities and limitations | Includes: (i) intended purpose, (ii) accuracy/robustness metrics, (iii) foreseeable misuse and risk circumstances, (iv) known limitations, (v) group-specific performance, (vi) input specifications, (vii) interpretability of outputs |
| (c) Pre-determined changes | Modifications since initial market placement |
| (d) Human oversight measures | Technical capabilities and measures for human oversight |
| (e) Computational/hardware resources and expected lifetime | Required resources, maintenance and care instructions |
| (f) Log collection mechanisms | Description of logging capabilities per Article 12 |
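
One way a provider team might keep these elements honest before release is to carry them as structured data and flag anything left empty. A minimal sketch, assuming a Python workflow: the class, field names, and the `missing_elements` helper are our own illustration, since the Act prescribes content, not a schema.

```python
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    """Illustrative container mirroring the Article 13(3) elements (a)-(f)."""
    provider_name: str              # (a) identity and contact
    provider_contact: str           # (a)
    intended_purpose: str           # (b)(i)
    accuracy_metrics: dict          # (b)(ii), e.g. {"accuracy": 0.94}
    foreseeable_misuse: list        # (b)(iii)
    known_limitations: list         # (b)(iv)
    predetermined_changes: list     # (c)
    human_oversight_measures: list  # (d)
    hardware_requirements: str      # (e)
    expected_lifetime: str          # (e)
    logging_description: str        # (f), per Article 12

    def missing_elements(self) -> list:
        """Return the names of mandatory fields that are still empty."""
        return [name for name, value in vars(self).items() if not value]
```

A release checklist can then refuse to ship until `missing_elements()` returns an empty list.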

Writing Effective Instructions

Best Practices:

| Principle | Implementation |
| --- | --- |
| Audience-appropriate | Match user technical level |
| Actionable | Tell users what to DO |
| Complete | Cover all required elements |
| Clear | Avoid jargon where possible |
| Accessible | Easy to find and navigate |
| Updated | Reflect current system state |

Common Instruction Failures

| Failure | Problem |
| --- | --- |
| Generic templates | Don't reflect the specific system |
| Missing limitations | Deployer doesn't know boundaries |
| Unclear metrics | Performance claims not verifiable |
| Buried warnings | Risk information not prominent |
| Outdated content | Doesn't match current system |

Communicating AI Capabilities and Limitations

Performance Disclosure

| Metric Type | How to Communicate |
| --- | --- |
| Accuracy | "System achieves X% accuracy on [specific task]" |
| Precision/Recall | "X% of positive predictions correct; Y% of actual positives detected" |
| Error rates | "False positive rate: X%; False negative rate: Y%" |
| Confidence ranges | "Outputs include confidence scores from 0-100%" |

Limitation Disclosure

Clearly communicate:

  • Input limitations: What data types/quality the system can process
  • Population limitations: Groups where performance may differ
  • Environmental limitations: Conditions affecting performance
  • Temporal limitations: When data currency affects outputs
  • Failure modes: How the system fails and warning signs

Risk Communication

| Risk Type | Disclosure Approach |
| --- | --- |
| Safety risks | Prominent warnings with specific scenarios |
| Fundamental rights risks | Clear statement of potential impacts |
| Bias risks | Groups that may be disproportionately affected |
| Misuse risks | Explicit "do not use for" statements |

Expert Insight

Think of instructions like medical package inserts—comprehensive information for professional users who need to understand benefits, risks, contraindications, and proper use.


Transparency for Different User Types

Technical Users

Provide:

  • Detailed performance metrics
  • Model architecture information
  • Training data characteristics
  • Validation methodology
  • API specifications

Business Users

Provide:

  • Plain-language capability descriptions
  • Use case guidance
  • Decision support information
  • Escalation procedures
  • Limitation summaries

Affected Persons

For those subject to AI decisions:

  • Notice of AI involvement (Article 50)
  • Explanation of decision factors
  • Challenge/appeal procedures
  • Human contact options

Transparency Technologies

Explainability Methods

| Method | Application |
| --- | --- |
| Feature importance | Which inputs influenced the output |
| Counterfactual explanations | "If X had been Y, outcome would change" |
| Local explanations | Why this specific decision was made |
| Model cards | Standardised system documentation |
| Datasheets | Training data documentation |
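
To make the counterfactual row concrete, here is a toy sketch that searches for the smallest single-feature change that would flip a binary decision. The threshold model, names, and numbers are all hypothetical; real counterfactual tooling is far more sophisticated.

```python
def counterfactual(decide, subject: dict, feature: str, step: float, limit: float):
    """Increase one feature until the decision flips; return an explanation,
    or None if no flip occurs within the search limit."""
    original = decide(subject)
    value = subject[feature]
    while value <= limit:
        value += step
        if decide({**subject, feature: value}) != original:
            return (f"If {feature} had been {value:g} instead of "
                    f"{subject[feature]:g}, the outcome would change.")
    return None

# Toy credit rule (hypothetical): approve when income >= 30000
approve = lambda a: a["income"] >= 30000
msg = counterfactual(approve, {"income": 27000}, "income", step=1000, limit=100000)
```

An explanation of this shape tells the affected person what, concretely, drove the outcome.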

User Interface Considerations

| Element | Purpose |
| --- | --- |
| Confidence displays | Show certainty of outputs |
| Explanation interfaces | Provide decision reasoning |
| Warning indicators | Flag low-confidence or edge cases |
| Override options | Enable human intervention |
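
As one illustration of how these elements might meet in code, a hypothetical output wrapper combining a confidence display, a low-confidence warning indicator, and an override hook. The threshold, class, and field names are assumptions for the sketch, not requirements of the Act.

```python
from dataclasses import dataclass
from typing import Optional

LOW_CONFIDENCE = 0.70  # illustrative threshold; set per the system's risk assessment

@dataclass
class DisplayedOutput:
    """A model output as presented to the deployer."""
    label: str
    confidence: float                    # confidence display, 0.0-1.0
    overridden_by: Optional[str] = None  # audit trail for human intervention

    @property
    def needs_review(self) -> bool:
        """Warning indicator: flag low-confidence outputs for human review."""
        return self.confidence < LOW_CONFIDENCE

    def override(self, reviewer: str, new_label: str) -> None:
        """Override option: record a human decision replacing the AI output."""
        self.overridden_by = reviewer
        self.label = new_label

out = DisplayedOutput(label="reject", confidence=0.62)
if out.needs_review:
    out.override(reviewer="j.doe", new_label="refer to human")
```

Recording who overrode what also feeds the Article 12 logs and the Article 14 oversight evidence.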

Integration with Other Requirements

| Requirement | Transparency Connection |
| --- | --- |
| Human Oversight (Art. 14) | Transparency enables oversight |
| Technical Documentation (Art. 11) | Instructions are a documentation component |
| Accuracy (Art. 15) | Accuracy must be transparently declared |
| Article 50 | Notification obligations for affected persons |

Transparency Compliance Checklist

Instructions for Use:

  • Provider identity and contact included
  • Intended purpose clearly stated
  • Performance metrics disclosed
  • Limitations explicitly stated
  • Foreseeable misuse scenarios covered
  • Risk circumstances identified
  • Human oversight capabilities explained
  • Input specifications provided
  • Interpretability guidance included

System Design:

  • Outputs interpretable by deployers
  • Confidence/uncertainty communicated
  • Warning mechanisms for edge cases
  • Explanation capabilities where appropriate

What You Learned

Key concepts from this chapter

Transparency must enable deployers to **interpret outputs appropriately**

**Instructions for use are mandatory** and must cover specific required elements

Include **capabilities, limitations, and risks**—not just positive attributes

Calibrate transparency level to **intended purpose and user competence**

Transparency is the **foundation for meaningful human oversight**

Chapter Complete · High-Risk AI Compliance · 5/14 chapters