Transparency Obligations Deep-Dive (Article 50)
Comprehensive guide to transparency obligations for AI systems interacting with persons, generating synthetic content, and performing emotion recognition or biometric categorisation.
Learning Objectives
By the end of this chapter, you will be able to:
- Understand the four core transparency obligations under Article 50(1)-(4), their implementation requirements under Article 50(5), and the supporting framework under 50(6)-(7)
- Implement AI interaction disclosure for chatbots and virtual assistants
- Design machine-readable marking systems for synthetic content
- Navigate the deepfake disclosure requirements and their exceptions
- Understand the enforcement timeline for transparency obligations
Article 50 establishes transparency obligations for AI systems that interact with persons, generate synthetic content, or perform emotion recognition or biometric categorisation. These obligations apply across all risk levels, including to systems that are not classified as high-risk.
❗ Enforcement Date: Transparency obligations under Article 50 apply from 2 August 2026 (as part of the general application date for the AI Act).
Overview of Article 50 Obligations
| Paragraph | Obligation | Applies To | Responsible Party |
|---|---|---|---|
| 50(1) | AI interaction disclosure | AI systems interacting with natural persons | Provider |
| 50(2) | Machine-readable marking | AI systems generating synthetic content | Provider |
| 50(3) | Emotion recognition disclosure | Emotion recognition / biometric categorisation systems | Deployer |
| 50(4) para 1 | Deepfake disclosure | AI-generated image, audio, video ("deep fakes") | Deployer |
| 50(4) para 2 | AI-generated text disclosure | AI-generated text on matters of public interest | Deployer |
| 50(5) | Accessibility requirements | All transparency disclosures | Provider & Deployer |
| 50(7) | Codes of practice | Detection and labelling of synthetic content | AI Office coordination |
1. AI Interaction Disclosure (Article 50(1))
The Obligation
Providers must ensure that AI systems intended to directly interact with natural persons are designed and developed so that the persons concerned are informed that they are interacting with an AI system.
Implementation Requirements
| Element | Requirement |
|---|---|
| Timing | At the latest at the start of the first interaction |
| Clarity | In a clear and distinguishable manner |
| Format | Appropriate to the context of use |
Practical Implementation
For chatbots and virtual assistants:
| Approach | Example |
|---|---|
| Opening statement | "You are interacting with an AI assistant. A human agent is available upon request." |
| Persistent indicator | Visual badge or icon indicating AI throughout the conversation |
| System identification | Clearly naming the AI system in the interface |
For voice-based AI:
| Approach | Example |
|---|---|
| Audio announcement | "This call is being handled by an AI system." |
| Periodic reminder | Brief audio cue at regular intervals for longer interactions |
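The timing requirement ("at the latest at the start of the first interaction") can be enforced structurally rather than left to prompt design. The following is a minimal, illustrative Python sketch; `ChatSession`, `DISCLOSURE`, and all other names are hypothetical and do not come from the Act or any real framework:

```python
# Illustrative sketch: a chat session wrapper that guarantees the Article 50(1)
# disclosure is delivered before the first AI reply. All names are hypothetical.

DISCLOSURE = ("You are interacting with an AI assistant. "
              "A human agent is available upon request.")

class ChatSession:
    def __init__(self):
        self.transcript = []   # list of (speaker, text) tuples
        self._disclosed = False

    def reply(self, user_message, ai_answer):
        """Record one exchange, prepending the disclosure to the first reply."""
        self.transcript.append(("user", user_message))
        if not self._disclosed:
            # Article 50(1): inform at the latest at the start of the first interaction
            self.transcript.append(("system", DISCLOSURE))
            self._disclosed = True
        self.transcript.append(("ai", ai_answer))
        return self.transcript
```

Making the disclosure a property of the session object, rather than of any individual model response, means it cannot be skipped by a prompt change or model update.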
Exceptions
Article 50(1) provides an exception where the AI interaction is obvious from the circumstances and the context of use, judged from the perspective of a reasonably well-informed, observant, and circumspect natural person. A separate exception covers AI systems authorised by law to detect, prevent, investigate, or prosecute criminal offences, subject to appropriate safeguards, unless the system is available for the public to report a criminal offence.
Expert Insight
The "obvious from circumstances" exception is narrow. Unless your AI system is clearly and unmistakably perceived as artificial (e.g., a robot with clearly non-human appearance), you should default to providing disclosure.
2. Machine-Readable Marking of Synthetic Content (Article 50(2))
The Obligation
Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video, or text content must ensure the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.
Technical Implementation
| Technology | Application | Standard |
|---|---|---|
| Watermarking | Embed invisible markers in images/video/audio | C2PA (Coalition for Content Provenance and Authenticity) |
| Metadata tagging | Attach provenance metadata to outputs | IPTC metadata standards |
| Cryptographic signatures | Sign content with provenance certificates | Content Credentials |
| Fingerprinting | Create detectable patterns in generated content | Provider-specific methods |
Requirements for Technical Solutions
The marking must be:
| Criterion | Requirement |
|---|---|
| Effective | Technically functional for detection |
| Robust | As far as technically feasible, resistant to removal or alteration |
| Interoperable | Compatible with detection tools |
| Proportionate | Appropriate to the capabilities and limitations of the AI system |
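To make the "cryptographic signatures" row above concrete, here is a deliberately simplified sketch of a signed provenance manifest. Real deployments would use a standard such as C2PA Content Credentials rather than an ad-hoc scheme; the field names, the HMAC construction, and the key handling below are all illustrative assumptions:

```python
# Simplified, illustrative provenance manifest signed with HMAC.
# Not C2PA-conformant; field names and key handling are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: real key management elsewhere

def mark_synthetic(content: bytes, generator: str) -> dict:
    """Build a machine-readable manifest declaring the content AI-generated."""
    manifest = {
        "ai_generated": True,  # the Article 50(2) declaration
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_mark(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the manifest matches this exact content."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Binding the declaration to a hash of the exact content illustrates the "robust" criterion: a verifier can detect both a forged manifest and a manifest reattached to different content.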
⚠️ Note: The AI Office, in cooperation with the AI Board, is expected to encourage and facilitate the drawing up of codes of practice regarding detection and labelling of synthetically generated or manipulated content (Article 50(7)).
3. Emotion Recognition and Biometric Categorisation Disclosure (Article 50(3))
The Obligation
Deployers of emotion recognition systems or biometric categorisation systems must inform natural persons exposed to the system of its operation and process personal data in accordance with the GDPR and other applicable law.
Implementation Requirements
| Element | Requirement |
|---|---|
| Who to inform | Natural persons exposed to the system |
| What to disclose | The operation of the system (that it is running and what it does) |
| Data protection | GDPR compliance, including legal basis for processing biometric data |
Practical Examples
| System | Required Disclosure |
|---|---|
| Customer emotion analysis | Inform customers their emotions are being analysed (e.g., signage, terms of use) |
| Biometric categorisation at entry points | Clear signage indicating biometric categorisation is in operation |
| Retail sentiment analysis | In-store notices and privacy policy disclosure |
Exceptions
This obligation does not apply to AI systems used for biometric categorisation or emotion recognition that are permitted by law for purposes of detecting, preventing, or investigating criminal offences (subject to appropriate safeguards for rights and freedoms).
⚠️ Reminder: Emotion recognition in the workplace and educational institutions is separately prohibited under Article 5(1)(f) (except for medical or safety reasons). Article 50(3) covers emotion recognition in other contexts where it is permitted.
4. Deepfake Disclosure (Article 50(4), paragraph 1)
The Obligation
Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.
What Constitutes a "Deep Fake"?
Article 3(60): AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.
Disclosure Requirements
| Element | Requirement |
|---|---|
| Format | Clearly and distinguishably labelled |
| Visibility | Discernible at the point of first display or playing |
| Method | May include watermarking, text labels, or metadata |
Exceptions and Limitations
Article 50(4), first subparagraph, qualifies the obligation in two ways:
| Situation | Effect |
|---|---|
| Use authorised by law to detect, prevent, investigate, or prosecute criminal offences | Obligation does not apply (subject to appropriate safeguards) |
| Content forms part of an evidently artistic, creative, satirical, fictional, or analogous work or programme | Disclosure is limited to acknowledging the existence of the generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work |
💡 Key Point: Even artistic/satirical deepfakes must still be disclosed — but the disclosure can be less prominent (e.g., in credits, metadata, or an accompanying note) rather than an intrusive label that ruins the artistic experience.
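The labelling decision above can be expressed as a small decision rule. This is a hedged sketch, not legal logic from the Act itself; the function name, field names, and placement values are hypothetical:

```python
# Illustrative decision rule for Article 50(4), first subparagraph:
# deep fakes get a prominent label; evidently artistic/satirical works get a
# less intrusive acknowledgement. All names here are hypothetical.
from typing import Optional

def deepfake_disclosure(is_deepfake: bool, evidently_artistic: bool) -> Optional[dict]:
    if not is_deepfake:
        return None  # outside the scope of Article 50(4), first subparagraph
    if evidently_artistic:
        # Disclosure still required, but limited so it does not hamper enjoyment
        return {"label": "AI-generated content",
                "placement": "credits_or_metadata"}
    return {"label": "This content has been artificially generated or manipulated",
            "placement": "visible_at_first_display"}
```

Note that the artistic branch never returns `None`: the exception reduces the prominence of the disclosure, it does not remove it.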
5. AI-Generated Text for Public Interest (Article 50(4), paragraph 2)
The Obligation
Where AI-generated or manipulated text is published with the purpose of informing the public on matters of public interest, the deployer must disclose that the text has been artificially generated or manipulated.
Scope
| Covered | Not Covered |
|---|---|
| News articles generated by AI | Internal company documents |
| Public policy analysis produced by AI | Private correspondence |
| AI-generated government communications | Marketing copy (unless public interest topic) |
| AI-produced scientific summaries for public | Fiction and entertainment |
Exception
This obligation does not apply where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication.
💡 Practical Impact: A news organisation using AI to draft articles that are then reviewed and edited by a human editor would be exempt from this specific disclosure requirement — provided the human editor takes editorial responsibility. Pure AI-generated content published without human editorial control must be labelled.
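The scope and exception described above reduce to a three-input decision. The following sketch encodes that rule for illustration only; in practice each input (especially "public interest" and "editorial responsibility") is a legal judgement, not a boolean flag:

```python
# Illustrative decision rule for Article 50(4), second subparagraph.
# Real-world application requires legal assessment of each input.

def text_disclosure_required(ai_generated: bool,
                             informs_public_interest: bool,
                             human_editorial_responsibility: bool) -> bool:
    """Return True if an AI-generated-text disclosure label is required."""
    if not (ai_generated and informs_public_interest):
        return False  # outside the scope of the obligation
    # Exemption: human review/editorial control with a natural or legal
    # person holding editorial responsibility for the publication
    return not human_editorial_responsibility
```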
6. Accessibility Requirements (Article 50(5))
The Obligation
The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.
Implementation
| Accessibility Standard | Application |
|---|---|
| EN 301 549 | European standard for digital accessibility |
| WCAG 2.1 AA | Web content accessibility guidelines |
| Perceivable alternatives | Visual disclosures must have audio alternatives; audio must have visual |
| Plain language | Disclosures must be understandable, not just technically present |
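A rough pre-flight check can catch the most obvious gaps before a formal accessibility audit. The checks below are illustrative proxies only, not a conformance test for EN 301 549 or WCAG 2.1 AA, and the format names and word-count threshold are assumptions:

```python
# Illustrative accessibility pre-flight for a disclosure notice.
# Not a substitute for an EN 301 549 / WCAG 2.1 AA conformance assessment.

def disclosure_accessibility_issues(formats: set, text: str = "",
                                    max_sentence_words: int = 25) -> list:
    """Return a list of potential gaps (empty list = none found by this check)."""
    issues = []
    # "Perceivable alternatives": each modality needs a counterpart
    if "visual" in formats and "audio" not in formats:
        issues.append("visual disclosure lacks an audio alternative")
    if "audio" in formats and "visual" not in formats:
        issues.append("audio disclosure lacks a visual alternative")
    # Crude plain-language proxy: flag very long sentences
    for sentence in filter(None, (s.strip() for s in text.split("."))):
        if len(sentence.split()) > max_sentence_words:
            issues.append("sentence may be too long for plain language")
    return issues
```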
7. Codes of Practice for Detection and Labelling (Article 50(7))
The Framework
The AI Office, in cooperation with the AI Board, is to:
| Activity | Purpose |
|---|---|
| Encourage codes of practice | Facilitate industry-led detection and labelling standards |
| Include stakeholders | Involve providers, deployers, civil society, academia |
| Promote interoperability | Ensure detection tools can work across different AI systems |
| Regular review | Monitor effectiveness and update as technology evolves |
Current Status
Codes of practice are in development. Organisations should:
- Monitor AI Office publications for emerging standards
- Participate in industry standardisation efforts
- Implement available technical solutions (C2PA, watermarking) proactively
- Document transparency measures as evidence of good faith compliance
Compliance Checklist: Article 50
AI Interaction (50(1)):
- All person-facing AI systems identified
- Disclosure mechanisms designed (text, audio, visual)
- Disclosure timing verified (before or at start of interaction)
- Accessibility of disclosures confirmed
Synthetic Content Marking (50(2)):
- AI content generation systems identified
- Machine-readable marking implemented (watermarks, metadata)
- Robustness of marking tested
- Interoperability with detection tools verified
Emotion/Biometric Disclosure (50(3)):
- Emotion recognition and biometric categorisation systems identified
- Disclosure mechanisms deployed (signage, digital notice)
- GDPR compliance verified for biometric data processing
Deepfake Disclosure (50(4)):
- Deepfake generation capabilities identified
- Labelling mechanisms implemented
- Artistic/satirical exception criteria documented where applied
AI Text Disclosure (50(4) para 2):
- AI-generated public interest text identified
- Disclosure labels applied OR human editorial control documented
Accessibility (50(5)):
- All disclosures tested for accessibility compliance
- Alternative formats provided (visual ↔ audio)
What You Learned
Key concepts from this chapter
- Article 50 applies **across all risk levels** — not just to high-risk AI systems
- **Seven distinct obligations** cover AI interaction, synthetic content, emotion recognition, deepfakes, AI text, accessibility, and codes of practice
- **Machine-readable marking** of synthetic content requires technical implementation (watermarking, metadata)
- **Deepfake disclosure** has important exceptions for artistic, satirical, and fictional content
- **AI-generated text** for public interest must be labelled unless a human editor takes editorial responsibility