Transparency Requirements Checklist
Checklist for Article 13 and Article 50 transparency requirements.
This checklist helps verify that AI systems meet the transparency requirements of the EU AI Act, including Article 13 (high-risk AI systems) and Article 50 (AI systems that interact with natural persons, perform emotion recognition or biometric categorization, or generate synthetic content). Use it to verify transparency compliance before deployment and during operation.
- Complete this checklist for all AI systems requiring transparency measures
- Mark each requirement as Met, Partial, Not Met, or Not Applicable
- Document evidence for compliance
- Address all gaps before deployment
- Review annually or upon significant system changes
1. Applicability

- [ ] 1.1.1 Is this a high-risk AI system?
- [ ] 1.1.2 Does the AI system interact directly with natural persons?
- [ ] 1.1.3 Does the AI system perform emotion recognition?
- [ ] 1.1.4 Does the AI system perform biometric categorization?
- [ ] 1.1.5 Does the AI system generate or manipulate synthetic content (deepfakes, text, audio, video)?
- [ ] 1.1.6 Is the AI system a general-purpose AI (GPAI) model?
2. High-Risk Transparency (Article 13)

- [ ] 2.1.1 System designed to enable deployers to interpret output
- [ ] 2.1.2 System designed to enable deployers to use output appropriately
- [ ] 2.1.3 Transparency measures proportionate to intended purpose
- [ ] 2.1.4 Output understandable to target users
- [ ] 2.2.1 Instructions for use provided
- [ ] 2.2.2 Instructions in appropriate format and language
- [ ] 2.2.3 Instructions accessible and understandable
- [ ] 2.3.1 Provider identity (name, registered trade name)
- [ ] 2.3.2 Contact details of provider
- [ ] 2.3.3 Authorised representative identity (if applicable)
- [ ] 2.4.1 AI system characteristics and capabilities
- [ ] 2.4.2 AI system intended purpose
- [ ] 2.4.3 Level of accuracy and relevant accuracy metrics
- [ ] 2.4.4 Level of robustness
- [ ] 2.4.5 Level of cybersecurity
- [ ] 2.4.6 Known circumstances that may impact performance
- [ ] 2.4.7 Technical capabilities and limitations
- [ ] 2.4.8 System limitations
- [ ] 2.5.1 Performance metrics for intended purpose
- [ ] 2.5.2 Performance levels for affected persons/groups
- [ ] 2.5.3 Specifications for input data
- [ ] 2.5.4 Any pre-determined changes and their impact
- [ ] 2.6.1 Human oversight measures described
- [ ] 2.6.2 Technical measures for oversight documented
- [ ] 2.6.3 Required human competences documented
- [ ] 2.6.4 Intervention/override instructions
- [ ] 2.7.1 Expected lifetime of AI system
- [ ] 2.7.2 Maintenance and care measures
- [ ] 2.7.3 Update installation information
Not Started3.1.1Natural persons informed they are interacting with AI
Partial3.1.2Disclosure provided in clear and distinguishable manner
Partial3.1.3Disclosure provided at first interaction
Partial3.1.4Disclosure in language understandable to user
Partial3.2.1Clear statement that user is interacting with AI
Not Started3.2.2Disclosure visible/audible before interaction
Not Started3.2.3Disclosure cannot be easily missed
Not Started3.3.1AI obvious from circumstances
Not Started3.3.2Authorised by law for crime prevention/detection
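The first-interaction disclosure requirements (3.1.1–3.2.3) can be enforced in software rather than left to UI convention. The sketch below is a minimal illustration, not any specific framework's API: `ChatSession` and `generate_reply` are hypothetical names, and the point is only that the disclosure is emitted before the first model response, in the user's language where available, and logged as evidence (see item 9.1.4).

```python
# Minimal sketch of a first-interaction AI disclosure.
# All class and method names here are illustrative assumptions.

DISCLOSURE = {
    "en": "You are chatting with an AI system, not a human.",
    "de": "Sie chatten mit einem KI-System, nicht mit einem Menschen.",
}

class ChatSession:
    def __init__(self, language="en"):
        self.language = language
        self.disclosed = False
        self.transcript = []  # evidence log for the compliance file

    def _disclose(self):
        # 3.1.4: use the user's language, falling back to English
        text = DISCLOSURE.get(self.language, DISCLOSURE["en"])
        self.transcript.append(("system-disclosure", text))
        self.disclosed = True
        return text

    def reply(self, user_message):
        messages = []
        if not self.disclosed:  # 3.1.3: disclose at first interaction
            messages.append(self._disclose())
        self.transcript.append(("user", user_message))
        answer = self.generate_reply(user_message)
        self.transcript.append(("assistant", answer))
        messages.append(answer)
        return messages

    def generate_reply(self, user_message):
        # placeholder for the actual model call
        return "..."
```

Gating the disclosure inside `reply` rather than in the UI layer makes it hard to miss (3.2.3) regardless of which client renders the session, and the transcript doubles as the user notification record.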
4. Emotion Recognition

- [ ] 4.1.1 Natural persons informed of emotion recognition
- [ ] 4.1.2 Disclosure provided prior to processing
- [ ] 4.1.3 Disclosure in a clear and distinguishable manner
- [ ] 4.1.4 Categories of emotions detected disclosed
- [ ] 4.2.1 Disclosure mechanism implemented
- [ ] 4.2.2 Disclosure documented
- [ ] 4.2.3 Disclosure tested for clarity
Not Started5.1.1Natural persons informed of biometric categorization
Partial5.1.2Disclosure provided prior to processing
Partial5.1.3Disclosure in clear and distinguishable manner
Partial5.1.4Categories of attributes detected disclosed
Partial5.2.1Authorised by law for crime prevention/detection
6. Synthetic Content

- [ ] 6.1.1 Synthetic audio
- [ ] 6.1.2 Synthetic image
- [ ] 6.1.3 Synthetic video
- [ ] 6.1.4 Synthetic text (published for public information)
- [ ] 6.1.5 Deep fakes
- [ ] 6.2.1 Synthetic content marked as artificially generated
- [ ] 6.2.2 Marking machine-readable where technically feasible
- [ ] 6.2.3 Marking interoperable
- [ ] 6.2.4 Deep fakes disclosed as such
- [ ] 6.3.1 Technical marking implemented
- [ ] 6.3.2 Marking standard used
- [ ] 6.3.3 Marking persistent through distribution
- [ ] 6.3.4 Marking tested for effectiveness
- [ ] 6.4.1 Content assisting in editing (not substantially altering)
- [ ] 6.4.2 Content part of artistic/creative work (clearly labeled)
- [ ] 6.4.3 Authorised by law for crime prevention/detection
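Items 6.2.1–6.3.3 require a machine-readable, interoperable marking. In production you would use an established provenance standard (for example C2PA manifests or IPTC metadata) rather than inventing a format. The sketch below is only a simplified illustration with hypothetical field names: it binds an "artificially generated" declaration to the exact content bytes via a hash, so a recipient can check whether the content was altered after marking.

```python
import hashlib
import json

def make_ai_marking(content: bytes, generator: str) -> str:
    """Build a machine-readable 'AI-generated' declaration for `content`.

    The SHA-256 digest ties the declaration to the exact bytes, so any
    alteration after marking is detectable. The field names are
    illustrative, not a recognised standard (6.3.2 would require one).
    """
    manifest = {
        "claim": "artificially-generated",  # 6.2.1: marked as AI-generated
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    # JSON keeps the marking machine-readable and easy to interoperate (6.2.2/6.2.3)
    return json.dumps(manifest, sort_keys=True)

def verify_ai_marking(content: bytes, marking: str) -> bool:
    """Check that a marking matches the content it claims to describe."""
    manifest = json.loads(marking)
    return manifest.get("sha256") == hashlib.sha256(content).hexdigest()
```

Note that a sidecar manifest like this only survives if it travels with the file; persistence through distribution (6.3.3) normally requires embedding the declaration in the file's own metadata or in a watermark.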
7. Deployer Obligations

- [ ] 7.1.1 Affected persons informed of high-risk AI use
- [ ] 7.1.2 Notification prior to first exposure
- [ ] 7.1.3 Information in an accessible format
- [ ] 7.1.4 Workplace representatives informed (if employment context)
- [ ] 7.2.1 Registration in the EU database completed
- [ ] 7.2.2 Fundamental rights impact assessment (FRIA) completed and summary published
8. Accessibility

- [ ] 8.1.1 Transparency information accessible to persons with disabilities
- [ ] 8.1.2 Multiple formats available where appropriate
- [ ] 8.1.3 Language versions available as required
9. Documentation

- [ ] 9.1.1 Instructions for use
- [ ] 9.1.2 Transparency disclosure scripts/text
- [ ] 9.1.3 Marking specifications
- [ ] 9.1.4 User notification records
Section Sign-Off

1. Applicability: [ ] Complete [ ] Incomplete
2. High-Risk Transparency: [ ] Complete [ ] Incomplete [ ] N/A
3. AI Interaction: [ ] Complete [ ] Incomplete [ ] N/A
4. Emotion Recognition: [ ] Complete [ ] Incomplete [ ] N/A
5. Biometric Categorization: [ ] Complete [ ] Incomplete [ ] N/A
6. Synthetic Content: [ ] Complete [ ] Incomplete [ ] N/A
7. Deployer Obligations: [ ] Complete [ ] Incomplete [ ] N/A
8. Accessibility: [ ] Complete [ ] Incomplete
9. Documentation: [ ] Complete [ ] Incomplete