
Annex XI - GPAI Technical Documentation

Documentation requirements for GPAI model providers


Training Note: This annex specifies the technical documentation requirements for providers of general-purpose AI models, including foundation models and large language models. It is critical for organizations developing or deploying GPAI models.


ANNEX XI

Technical documentation referred to in Article 53(1), point (a) — technical documentation for providers of general-purpose AI models

Section 1 — Information to be provided by all providers of general-purpose AI models

The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as appropriate to the size and risk profile of the model:

  1. A general description of the general-purpose AI model including:

    • (a) the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;
    • (b) the acceptable use policies applicable;
    • (c) the date of release and methods of distribution;
    • (d) the architecture and number of parameters;
    • (e) the modality (e.g. text, image) and format of inputs and outputs;
    • (f) the licence.
  2. A detailed description of the elements of the model referred to in point 1, and relevant information of the process for the development, including the following elements:

    • (a) the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;
    • (b) the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;
    • (c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;
    • (d) the computational resources used to train the model (e.g. number of floating point operations), training time, and other relevant details related to the training;
    • (e) known or estimated energy consumption of the model.

With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.
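Where only the training compute is known, a rough energy figure can be derived from total FLOPs and an assumed hardware efficiency. The sketch below illustrates this; the function name and all constants (sustained FLOP/s per watt, utilisation) are illustrative assumptions of ours, not values taken from the Act:

```python
# Rough training-energy estimate from compute, in the spirit of the Annex XI
# note that energy consumption "may be based on information about
# computational resources used". All constants are illustrative assumptions.

def estimate_energy_kwh(total_flops: float,
                        flops_per_watt: float = 1e11,
                        utilisation: float = 0.4) -> float:
    """Estimate training energy in kWh.

    total_flops    -- total floating point operations used in training
    flops_per_watt -- assumed sustained hardware efficiency (FLOP/s per W)
    utilisation    -- assumed average accelerator utilisation (0-1)
    """
    joules = total_flops / (flops_per_watt * utilisation)
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Example: a model trained with 1e23 FLOPs
print(round(estimate_energy_kwh(1e23)))  # roughly 6.9e5 kWh under these assumptions
```

Any such estimate should state its assumptions alongside the figure, since reported hardware efficiency and utilisation vary widely between training runs.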

Section 2 — Additional information to be provided by providers of general-purpose AI models with systemic risk

  1. A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.

  2. Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g. red teaming), model adaptations, including alignment and fine-tuning.

  3. Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.



© 2026 AI Comply Contributors. Open source under AGPL-3.0 License.
