aicomply.
Lesson · 15 min · Chapter 12 of 14

Transparency Obligations Deep-Dive (Article 50)

Comprehensive guide to transparency obligations for AI systems interacting with persons, generating synthetic content, and performing emotion recognition or biometric categorisation.

Learning Objectives

By the end of this chapter, you will be able to:

  • Understand the four core transparency obligations under Article 50(1)-(4), their implementation requirements under Article 50(5), and the supporting framework under 50(6)-(7)
  • Implement AI interaction disclosure for chatbots and virtual assistants
  • Design machine-readable marking systems for synthetic content
  • Navigate the deepfake disclosure requirements and their exceptions
  • Understand the enforcement timeline for transparency obligations

Article 50 establishes transparency obligations for AI systems that interact with persons, generate synthetic content, or perform certain types of recognition. These apply to systems across all risk levels — including systems that are not classified as high-risk.

Enforcement Date: Transparency obligations under Article 50 apply from 2 August 2026 (as part of the general application date for the AI Act).

Overview of Article 50 Obligations

| Paragraph | Obligation | Applies To | Responsible Party |
| --- | --- | --- | --- |
| 50(1) | AI interaction disclosure | AI systems interacting with natural persons | Provider |
| 50(2) | Machine-readable marking | AI systems generating synthetic content | Provider |
| 50(3) | Emotion recognition disclosure | Emotion recognition / biometric categorisation systems | Deployer |
| 50(4) para 1 | Deepfake disclosure | AI-generated image, audio, video ("deep fakes") | Deployer |
| 50(4) para 2 | AI-generated text disclosure | AI-generated text on matters of public interest | Deployer |
| 50(5) | Accessibility requirements | All transparency disclosures | Provider & Deployer |
| 50(7) | Codes of practice | Detection and labelling of synthetic content | AI Office coordination |

1. AI Interaction Disclosure (Article 50(1))

The Obligation

Providers must ensure that AI systems intended to directly interact with natural persons are designed and developed so that the persons concerned are informed that they are interacting with an AI system.

Implementation Requirements

| Element | Requirement |
| --- | --- |
| Timing | At the latest at the start of the first interaction |
| Clarity | In a clear and distinguishable manner |
| Format | Appropriate to the context of use |

Practical Implementation

For chatbots and virtual assistants:

| Approach | Example |
| --- | --- |
| Opening statement | "You are interacting with an AI assistant. A human agent is available upon request." |
| Persistent indicator | Visual badge or icon indicating AI throughout the conversation |
| System identification | Clearly naming the AI system in the interface |
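The timing rule above ("at the latest at the start of the first interaction") can be sketched in code. This is an illustrative pattern only, assuming a hypothetical `DisclosedChatSession` wrapper around any chat backend; it is not part of any compliance SDK, and the disclosure wording is the example text from the table above.

```python
# Illustrative sketch: deliver the Article 50(1) disclosure no later than
# the start of the first interaction. Names here are hypothetical.

AI_DISCLOSURE = (
    "You are interacting with an AI assistant. "
    "A human agent is available upon request."
)

class DisclosedChatSession:
    """Wraps a chat backend so the first reply always carries the AI notice."""

    def __init__(self, backend):
        self.backend = backend      # any callable: user_text -> reply_text
        self.disclosed = False

    def send(self, user_text: str) -> str:
        reply = self.backend(user_text)
        if not self.disclosed:
            self.disclosed = True
            # Prepend the notice so it appears before any substantive answer.
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

# Usage: the notice appears once, at the start of the first interaction.
session = DisclosedChatSession(lambda text: f"Echo: {text}")
first = session.send("Hello")
second = session.send("Thanks")
```

In a real deployment the persistent indicator from the table (a visible AI badge) would complement, not replace, this first-message notice.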

For voice-based AI:

| Approach | Example |
| --- | --- |
| Audio announcement | "This call is being handled by an AI system." |
| Periodic reminder | Brief audio cue at regular intervals for longer interactions |

Exceptions

Article 50(1) provides an exception where the interaction with an AI system is obvious from the circumstances and the context of use, judged from the perspective of a reasonably well-informed, observant, and circumspect natural person. Separately, the obligation does not apply to AI systems authorised by law to detect, prevent, investigate, or prosecute criminal offences (subject to appropriate safeguards).

Expert Insight

The "obvious from circumstances" exception is narrow. Unless your AI system is clearly and unmistakably perceived as artificial (e.g., a robot with clearly non-human appearance), you should default to providing disclosure.


2. Machine-Readable Marking of Synthetic Content (Article 50(2))

The Obligation

Providers of AI systems that generate synthetic audio, image, video, or text content must ensure the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.

Technical Implementation

| Technology | Application | Standard |
| --- | --- | --- |
| Watermarking | Embed invisible markers in images/video/audio | C2PA (Coalition for Content Provenance and Authenticity) |
| Metadata tagging | Attach provenance metadata to outputs | IPTC metadata standards |
| Cryptographic signatures | Sign content with provenance certificates | Content Credentials |
| Fingerprinting | Create detectable patterns in generated content | Provider-specific methods |

Requirements for Technical Solutions

The marking must be:

| Criterion | Requirement |
| --- | --- |
| Effective | Technically functional for detection |
| Robust | As far as technically feasible, resistant to removal or alteration |
| Interoperable | Compatible with detection tools |
| Proportionate | Appropriate to the capabilities and limitations of the AI system |

⚠️ Note: The AI Office, in cooperation with the AI Board, is expected to encourage and facilitate the drawing up of codes of practice regarding detection and labelling of synthetically generated or manipulated content (Article 50(7)).
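To make the metadata-tagging and cryptographic-signature rows above concrete, here is a minimal sketch of a machine-readable provenance record bound to content by an HMAC signature. This is a toy illustration using only the Python standard library; a production system would use C2PA / Content Credentials rather than this home-grown scheme, and the key and generator names are hypothetical.

```python
# Illustrative sketch of Article 50(2)-style machine-readable marking:
# a provenance record tied to the content hash and signed by the provider.
# NOT a C2PA implementation; names and key are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-secret-key"  # hypothetical provider-held key

def mark_synthetic(content: bytes, generator: str) -> dict:
    """Build a signed record declaring the content as AI-generated."""
    record = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_mark(content: bytes, record: dict) -> bool:
    """Check the signature and that the record matches this exact content."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...synthetic image data..."
mark = mark_synthetic(image_bytes, "example-image-model")
```

The design point this illustrates: the mark must be verifiable by a third-party detection tool (interoperability) and must break if the content is altered, which is one way the robustness criterion is approached in practice.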


3. Emotion Recognition and Biometric Categorisation Disclosure (Article 50(3))

The Obligation

Deployers of emotion recognition systems or biometric categorisation systems must inform natural persons exposed to the system of its operation and process personal data in accordance with the GDPR and other applicable law.

Implementation Requirements

| Element | Requirement |
| --- | --- |
| Who to inform | Natural persons exposed to the system |
| What to disclose | The operation of the system (that it is running and what it does) |
| Data protection | GDPR compliance, including legal basis for processing biometric data |

Practical Examples

| System | Required Disclosure |
| --- | --- |
| Customer emotion analysis | Inform customers their emotions are being analysed (e.g., signage, terms of use) |
| Biometric categorisation at entry points | Clear signage indicating biometric categorisation is in operation |
| Retail sentiment analysis | In-store notices and privacy policy disclosure |

Exceptions

This obligation does not apply to AI systems used for biometric categorisation or emotion recognition that are permitted by law for purposes of detecting, preventing, or investigating criminal offences (subject to appropriate safeguards for rights and freedoms).

⚠️ Reminder: Emotion recognition in the workplace and educational institutions is separately prohibited under Article 5(1)(f) (except for medical or safety reasons). Article 50(3) covers emotion recognition in other contexts where it is permitted.


4. Deepfake Disclosure (Article 50(4), paragraph 1)

The Obligation

Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.

What Constitutes a "Deep Fake"?

Article 3(60): AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.

Disclosure Requirements

| Element | Requirement |
| --- | --- |
| Format | Clearly and distinguishably labelled |
| Visibility | Discernible at the point of first display or playing |
| Method | May include watermarking, text labels, or metadata |

Exceptions for Artistic and Satirical Content

Article 50(4) provides important exceptions:

| Exception | Conditions |
| --- | --- |
| Artistic, satirical, or fictional content | Disclosure may be limited to acknowledging artificial generation in a manner that does not hamper the display or enjoyment of the work |
| Exercise of fundamental rights | Where content is part of creative, artistic, satirical, or fictional expression (freedom of expression and the arts under the Charter) |
| Manifestly artistic or creative | Where it is manifestly part of an artistic, creative, satirical, fictional, or analogous work or programme |

💡 Key Point: Even artistic/satirical deepfakes must still be disclosed — but the disclosure can be less prominent (e.g., in credits, metadata, or an accompanying note) rather than an intrusive label that ruins the artistic experience.
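The key point above reduces to a small decision rule. The sketch below is this chapter's own simplification for illustration; the function name and the returned guidance strings are hypothetical, not statutory text, and a real compliance assessment would involve legal review of the artistic-exception criteria.

```python
# Illustrative decision helper for Article 50(4), first paragraph.
# A simplified policy sketch; the strings are guidance labels, not legal text.

def deepfake_disclosure(is_deepfake: bool, artistic_or_satirical: bool) -> str:
    if not is_deepfake:
        return "no Article 50(4) labelling duty"
    if artistic_or_satirical:
        # Disclosure is still required, but may be unobtrusive
        # (e.g. in credits or metadata) so it does not hamper the work.
        return "disclose in a non-intrusive way (e.g. credits or metadata)"
    # Default case: a clear, distinguishable label at first display.
    return "label clearly and distinguishably at first display"
```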


5. AI-Generated Text for Public Interest (Article 50(4), paragraph 2)

The Obligation

Where AI-generated or manipulated text is published for the purpose of informing the public on matters of public interest, the deployer must disclose that the text has been artificially generated or manipulated.

Scope

| Covered | Not Covered |
| --- | --- |
| News articles generated by AI | Internal company documents |
| Public policy analysis produced by AI | Private correspondence |
| AI-generated government communications | Marketing copy (unless a public interest topic) |
| AI-produced scientific summaries for the public | Fiction and entertainment |

Exception

This obligation does not apply where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication.

💡 Practical Impact: A news organisation using AI to draft articles that are then reviewed and edited by a human editor would be exempt from this specific disclosure requirement — provided the human editor takes editorial responsibility. Pure AI-generated content published without human editorial control must be labelled.
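The scope and exception described above can be expressed as a simple check. This is an illustrative sketch under the assumptions stated in this section; the function and parameter names are hypothetical, and borderline cases (what counts as "public interest", who holds editorial responsibility) still need case-by-case legal assessment.

```python
# Illustrative check for Article 50(4), second paragraph (AI-generated text).
# Hypothetical names; a simplification of the rules summarised above.

def text_label_required(informs_public_interest: bool,
                        human_review: bool,
                        editorial_responsibility: bool) -> bool:
    if not informs_public_interest:
        return False  # outside the scope of this obligation
    # Exception: human review/editorial control AND a natural or legal
    # person holding editorial responsibility for the publication.
    if human_review and editorial_responsibility:
        return False
    return True
```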


6. Accessibility Requirements (Article 50(5))

The Obligation

The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.

Implementation

| Accessibility Standard | Application |
| --- | --- |
| EN 301 549 | European standard for digital accessibility |
| WCAG 2.1 AA | Web Content Accessibility Guidelines |
| Perceivable alternatives | Visual disclosures must have audio alternatives; audio must have visual |
| Plain language | Disclosures must be understandable, not just technically present |

7. Codes of Practice for Detection and Labelling (Article 50(7))

The Framework

The AI Office, in cooperation with the AI Board, is to:

| Activity | Purpose |
| --- | --- |
| Encourage codes of practice | Facilitate industry-led detection and labelling standards |
| Include stakeholders | Involve providers, deployers, civil society, academia |
| Promote interoperability | Ensure detection tools can work across different AI systems |
| Regular review | Monitor effectiveness and update as technology evolves |

Current Status

Codes of practice are in development. Organisations should:

  • Monitor AI Office publications for emerging standards
  • Participate in industry standardisation efforts
  • Implement available technical solutions (C2PA, watermarking) proactively
  • Document transparency measures as evidence of good faith compliance

Compliance Checklist: Article 50

AI Interaction (50(1)):

  • All person-facing AI systems identified
  • Disclosure mechanisms designed (text, audio, visual)
  • Disclosure timing verified (before or at start of interaction)
  • Accessibility of disclosures confirmed

Synthetic Content Marking (50(2)):

  • AI content generation systems identified
  • Machine-readable marking implemented (watermarks, metadata)
  • Robustness of marking tested
  • Interoperability with detection tools verified

Emotion/Biometric Disclosure (50(3)):

  • Emotion recognition and biometric categorisation systems identified
  • Disclosure mechanisms deployed (signage, digital notice)
  • GDPR compliance verified for biometric data processing

Deepfake Disclosure (50(4)):

  • Deepfake generation capabilities identified
  • Labelling mechanisms implemented
  • Artistic/satirical exception criteria documented where applied

AI Text Disclosure (50(4) para 2):

  • AI-generated public interest text identified
  • Disclosure labels applied OR human editorial control documented

Accessibility (50(5)):

  • All disclosures tested for accessibility compliance
  • Alternative formats provided (visual ↔ audio)

What You Learned

Key concepts from this chapter:

  • Article 50 applies **across all risk levels** — not just to high-risk AI systems
  • **Seven distinct obligations** cover AI interaction, synthetic content, emotion recognition, deepfakes, AI text, accessibility, and codes of practice
  • **Machine-readable marking** of synthetic content requires technical implementation (watermarking, metadata)
  • **Deepfake disclosure** has important exceptions for artistic, satirical, and fictional content
  • **AI-generated text** for public interest must be labelled unless a human editor takes editorial responsibility